text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
public class PolynomialSplineFunction extends Object implements DifferentiableUnivariate UnivariateFunction derivative() derivativein interface DifferentiableUnivariateFunction public PolynomialSplineFunction polynomialSplineDerivative() public int getN() public PolynomialFunction[] getPolynomials() public double[] getKnots()
http://commons.apache.org/proper/commons-math/javadocs/api-3.0/org/apache/commons/math3/analysis/polynomials/PolynomialSplineFunction.html
CC-MAIN-2013-48
en
refinedweb
#include "DenseLinAlgPack_DMatrixAsTriSym.hpp" Go to the source code of this file. These functions implement level 1 and 2 BLAS like linear algebra operations on unsorted sparse vectors. These templated sparse vector objects give information about the sparse vector (size, nonzeros etc.) and give access to the sparse elements as iterators. The iterators yield sparse elements that give the elements indice in the full vector and its value. The specification for these interfaces is as follows: {verbatim} class SparseElementTemplateInterface { public: typedef .... value_type; typedef .... indice_type; value_type& value(); value_type value() const; indice_type indice() const; }; class SparseVectorTemplateInterface { public: typedef ... difference_type; typedef ... element_type; // SparseElementTemplateInterface compliant typedef ... iterator; // *(iter) yields a element_type typedef ... const_iterator; // *(iter) yields a const element_type typedef ... reverse_iterator; // *(iter) yields a element_type typedef ... const_reverse_iterator; // *(iter) yields a const element_type Information size_type size() const; // size of the full vector size_type nz() const; // number of nonzero elements difference_type offset() const; // ith real real indice = begin()[i-1] + offset() bool is_sorted() const; // true if elements are sorted by indice iterate forward (sorted) through elemlents iterator begin(); const_iterator begin() const; iterator end(); const_iterator end() const; iterate backward (sorted) through elemlents reverse_iterator rbegin(); const_reverse_iterator rbegin() const; reverse_iterator rend(); const_reverse_iterator rend() const; }; /end{verbatim} In all of these functions where we have some operation that yields a dense vector being added to another function such as: v_lhs = operation + vs_rhs2 it is allowed that v_lhs.overlap(vs_rhs2) == SAME_MEM. In this case no unnecesary operations will be performed. Also, it is up the the user to ensure that there is not an alias problem where if v_lhs is the same as vs_rhs2 and vs_rhs2 is also used in the operation. This has undefined results. If a future version of the library this may be handeled but for now it is not. These operations use the same nameing convensions as those for DVector and DVectorSlice in DenseLinAlgPack with the acception that the sparse vectors are given the tag #SV# instead of #V# so as to not destroy the intended behavior of the operations in DenseLinAlgPack and the implicit conversion of a DVector to a DVectorSlice. It should be noted that these operations will be more efficient for large dense vectors and sparse vectors with many nonzero elements if the sparse elements are sorted by indice. This is because in many operations the elements of the dense vectors are accessed using random access and this could cause virtual page thrashing if the nonzero sparse elements are not sorted.
http://trilinos.sandia.gov/packages/docs/r10.8/packages/moocho/browser/doc/html/AbstractLinAlgPack__SparseVectorOpDecl_8hpp.html
CC-MAIN-2013-48
en
refinedweb
Introduction Here I will explain how to use richtextbox and how we can save our richtextbox data in database and how we can retrieve and display saved richtextbox data into our application using asp.net. Description: Today I am writing this post to explain about freely available richtextbox. Previously I worked on one social networking site for that we got requirement to use richtextbox at that time we search so many websites to use richtextbox but we didn’t find useful ones. Recently I came across one website and I found one free available richtextbox. We can validate and we can use richtextbox very easily and by using this richtextbox we can insert data in different formats like bold, italic and different color format texts and we can insert images also. To use free Richtextbox download available dll here FreeTextbox . After download dll from that site create one new website in visual studio and add FreeTextbox dll reference to newly created website after that design aspx page like this After that Design your aspx page like this After that run your application richtextbox appears like this Now our richtextbox is ready do you know how we can save richtextbox data into database and which datatype is used to save richtextbox data and do you know how to display richtextbox on our web page no problem we can implement it now. To save richtextbox data we need to use datatype nvarchar(MAX) now Design table like this in your SQL Server database and give name as RichTextBoxData After that add following namespaces in code behind Now write the following code in code behind Demo Download sample code attached 56 comments : Hi Suresh i have problem wit tis cod at saving data with style in tbl the Error is:- A potentially dangerous Request.Form value was detected from the client (FreeTextBox1="sdsd"). TNX You might want to add Server.HtmlEncode(freetextbox.Text) when saving to the database and Server.HtmlDecode() when retrieving. Otherwise a serious security problem exists. Subhash, you have to set ValidateRequest="false" on the @Page line, that opens the door to security problems but you should be ok if you use HtmlEncode/Decode. Thank u very much sir ..Ur Great..It helped me a lot.. A potentially dangerous Request.Form value was detected from the client (FreeTextBox1="sdsd"). this error is coming after giving ValidateRequest="false" in iis. Could you please help in this thanks so lot dear. Its really so good and excellent. thanks again. Raja chaturvedi please add with me at facebook krishna chaturvedi gonda up or send invitation at-krishnachaturvedimca@gmail.com @subhash check the below post to solve your problem Hi Suresh, Thank you very much for this article. It is simply good. ADODB.Recordset rs = new ADODB.Recordset(); con.Open(); SqlCommand cmd = new SqlCommand("select RichTextData from table1",con); rs.Open("select RichTextData from table1",con,ADODB.CursorTypeEnum.adOpenKeyset,ADODB.LockTypeEnum.adLockPessimistic,1); When run that above the error becomes Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another. plz give solution for it...... After enter the data that will store it in database.then how to edit the typed content? in your above exercise can you explain with vb coding instead of C#... required validations not working either through jquery or through fieldvalidators. Folks will you pls help me I'm doing windowsform in that we are using saving data of richtextbox field in ".Rtf" format and i'm adding dataset required field to ".Rdlc" and showing it in reportViewer but Rtf format data is not working it showing same as it saving..?? but i tried to do it in html I do modified one record in html when i'm showing its working but how to save data in Html for RTB field?? do you have any idea's... Thanks Pruthvi Hi, i'm having problems with the formatting of the control on my site? Looks like it inherits from the site master css? Is there a way of setting it to ignore the css or defaulting to what it should look like? Thanks Rich Different font colors in Text box Hi all . I have a requirement to add details of a paragraph with its header. The color of the header is different from the paragraph. can any one tell me how to add different colors in a textbox sir,,this code not working.... how to use fck editor in our web site insert data through fck editor or show data into fck editor can u tel me how to use it in the wpf,c# What is the coading og Logout in aasp dot net using c#? How to create a rich tex in asp dot net using c#? Hi, i have problem with integrating... Error message is: Error 4 Unable to resolve type 'FreeTextBoxControls.FreeTextBox, FreeTextBox, Version=3.3.1.12354, Culture=neutral, PublicKeyToken=5962a4e684a48b87' in file license.licx Could you help me please? Sorry, problem solved:) I have the same probles as @kapuc. Error 1 Unable to resolve type 'FreeTextBoxControls.FreeTextBox, FreeTextBox, Version=3.3.1.12354, Culture=neutral, PublicKeyToken=5962a4e684a48b87' in file licenses.licx Need help! How did you make it? Hi, I am using FreeTextBox in my web application which is developed in asp.net 4.0. I have button under the free textbox.Button is working fine when there is no text in FreeTextBox,but if we enter any text inside FreeTextBox,button is not working. Whats wrong? in the case of mvc how is doing Hi, i have write ValidateRequest="false" in page directive but still i have problem with this code. It shows the following error:- A potentially dangerous Request.Form value was detected from the client (FreeTextBox1="<font face="Georgia"..."). how to disalble right click and Copy,paste options in free text box? gkhghghghg gjghjhjh kjbjhbjhjh jhjhjhjhhjhjh 456789076543245678 Master Of asp.net Hi Evertone, i need to retrive image and data stored in dattabase and display it in textbox not in gridview what to use for it can help me please ? data is inserted using the freetextbox concept. Thanks in advance. How to retrieve freetextbox placed in gridview ccccccccc Hello Sir, How to auto enter data in all fields when we search any particular entry from the list. When we type some characters in search field if its relevant data is available in database then its details should appear in all the available fields. Help me out sir! thanks in advance! do send me the code on my id: surajjavarat17@gmail.com I am using Microsoft Studio Pro V9 SP1 2008 always gives this error: Server Error in '/WebRichText' email is dladlzn@gmail.com Source Error: Line 24: protected void BindGridview() Line 25: { Line 26: con.Open(); Line 27: SqlCommand cmd = new SqlCommand("select RichTextData from RichTextBoxData", con); Line 28: SqlDataAdapter da = new SqlDataAdapter(cmd); Source File: c:\Documents and Settings\xxxxxxx\My Documents\Visual Studio 2008\WebSites\Tutor\RichTextboxSample\RichTextboxSample\Default.aspx.cs Line: 26 nfgfgnfng i just want how to insert image from local drive to rich text box....... i just want how to insert image from local drive to rich text box....... SUTESH BHAI KI JAI.......... dear sir; I have Problem when I use tag and use fonts color are error. A potentially dangerous Request.Form value was detected from the client (MainContent_FreeTextBox2="sf<font color="#FF1493..."). Please tell me too.... Sir, I want a particular record from the database that we already inserted into database....please help me... thanks. How to add text editor in windows form by vb.net sandeep yadav sandeepyadav.bca@gmail.com How Can I Implement With Jquery.. plz suggest me other free Editor bcoz this editor is conflict with my other Ajax control. jhjh jhjh kkjk kkkm Nice application .. You are genious. Good example, Hi suresh can you provide code for Downloading functionality as you provide in your article just like( Download sample code attached) Really Good Post. Thanks Hi Suresh, Thanks for really nice post. But i have problem here : I can enter the richtext into the database but how can we display the text as a label in rich text format. Kindly help me .... How to add text editor in windows form by vb.net sandeepyadav.bca@gmail.com I have a problem: error is: A potentially dangerous Request.Form value was detected from the client (adminmaster_ftxtname="<font color="#000080..."). how do i use that downloaded dll ..do i have to install it or where do i place it please help Hi,suresh if i am copy text from anywhere and on right click paste it i want on that paste validate that textbox for enter digit only MVC 3.0 the FreeTextBoxControls showing disabled I am saving the text as a html file, the text appears properly in html but the image doesn't get diplayed, i.e because it get saved in temp folder of the system, I want to save it to my path. Hi Suresh, may i know why we are using the free rich text control when we have it is asp. ajaxcontroltoolkit
http://www.aspdotnet-suresh.com/2011/05/richtextbox-sample-in-aspnet-or-how-to.html?showComment=1327054830972
CC-MAIN-2013-48
en
refinedweb
22 August 2008 02:00 [Source: ICIS news]ASAHI SHIMBUN, Front page?xml:namespace> Land prices falling in prime locations While Business & Industry No new updates Front page British PM defends Games visit British Prime Minister Gordon Brown said that he looked forward to celebrating the Olympic spirit in NDRC to focus on balanced growth Business & Industry No new updates Front page Chinese soldiers kill 140 Tibetans, Dalai Lama says The Dalai Lama accused Chinese soldiers yesterday of firing on a crowd in Russia, US spar over Abkhazia The Business & Industry FITEL pressured to raise cash On Wednesday, the company held a press conference at the request of the NCC to clarify why it had not paid NT$40 million in fees and royalties Prosecutors target former Far Eastern Air executives The Civil Administration confirmed that the airline owes a total of approximately NT$160 million in airport landing fees and other charges Front page Housing market gets a shot in the arm The government yesterday announced a package of measures to bolster the struggling construction industry and the slumping housing market. Moon's fate hanging in the balance Whether or not the parliament will approve the arrest of Moon Kook-hyun, head of the minor Renewal of Korea Party, is drawing keen attention. Business & Industry Housing stimulus short of expectations Amid rising credit risk, the government yesterday announced supply-focused housing policy measures. Experts say that without a major revamp in tax and home financing rules, the package of measures may not have significant effects on the slowing economy. Portal regulations spark controversy The Korean government and ruling party are scurrying to regulate Web portals in the wake of the recent candlelight vigils against imports of NEW STRAITS TIMES, Front page Transport firms dragging their feet A FEW years ago if you ask a car owner if he or she would take a bus or train to work, their answer would be a resounding no. Back then, public transport service was for the low income group. Rail operators promise action The increase in the number of public transport users has led to capacity problems, but transport operators are taking steps to address these. Business & IndustryBusiness & Industry IJM Plantations confident of good results Twelve months ago, the crude palm oil price was RM2,000 per tonne and the company had a good year, so at RM2,400, IJM is still good, says a company official Overseas ambition Bursa Malaysia-bound Vastalux Energy aims to become a preferred engineering company for oil and gas and petrochemicals in the region BUSINESS TIMES, Front page Key barometer signals deeper US gloom In a sign of further Two new faces in Business & Industry Stanchart launches transparent home loan pegged to Sibor STANDARD Chartered Bank has upped the ante in the mortgage market here with a new transparent home loan pegged to the SEVENTEEN of Front page Bombs rock South A reporter was killed and about twenty others, including a police superintendent, were wounded when two bombs went off at a restaurant in the southern Passport protest About 1,000 supporters of People's Business & Industry No new updates Front page No new updates
http://www.icis.com/Articles/2008/08/22/9150867/in-fridays-asia-papers.html
CC-MAIN-2014-49
en
refinedweb
14 May 2012 17:33 [Source: ICIS news] LONDON (ICIS)--Zaklady Azoty Tarnow (ZAT) is looking to add another 50,000 tonnes/year of capacity to its caprolactam (capro) operations in ?xml:namespace> The option of both adding the extra capacity and going ahead with a joint venture in partnership with the other Polish capro producer, Zaklady Azotowe Pulawy (ZAP), to construct an 80,000-120,000 tonne/year capro plant in either The expansion in In March, ZAT launched its latest capro expansion initiative, a new 15,000 tonne/year installation at its production
http://www.icis.com/Articles/2012/05/14/9559649/polands-zat-looks-at-additional-50000-tonnesyear-of.html
CC-MAIN-2014-49
en
refinedweb
#include <gphoto2/gphoto2_port.h> The libgphoto2_port library was written to provide libgphoto2(3) with a generic way of accessing ports. In this function, libgphoto2_port is the successor of the libgpio library. Currently, libgphoto2_port supports serial (RS-232) and USB connections, the latter requiring libusb to be installed. The autogenerated API docs will be added here in the future. libgphoto2(3), The gPhoto2 Manual, [1]gphoto website, automatically generated API docs, [2]libusb website
http://www.makelinux.net/man/3/L/libgphoto2_port
CC-MAIN-2014-49
en
refinedweb
Controlling Chromium in Haskell Introduction The chromium browser has a built-in WebSockets server which can be used to control it. I first heard about this through an issue with my WebSockets project. Since I am currently rewriting the WebSockets library to have it use io-streams, I wanted to give this a try and made it into a blogpost. As a sidenote – the port is going great, io-streams is a very nice library and I managed to solve a whole lot of issues in the library (mostly exception handling stuff). Note that this code uses a yet unreleased version of the WebSockets library – you can get it from the github repo though, if you want to: check out the io-streams branch. This file is written in literate Haskell so we will have a few boilerplate declarations and imports first: > {-# LANGUAGE OverloadedStrings #-} > module Main > ( main > ) where > import Control.Applicative ((<$>)) > import Control.Monad (forever, mzero) > import Control.Monad.Trans (liftIO) > import Data.Aeson (FromJSON (..), ToJSON (..), (.:), (.=)) > import qualified Data.Aeson as A > import qualified Data.Map as M > import Data.Maybe (fromMaybe) > import qualified Data.Text.IO as T > import qualified Data.Vector as V > import qualified Network.HTTP.Conduit as Http > import qualified Network.URI as Uri > import qualified Network.WebSockets as WS Locating the WebSockets server To enable the WebSockets server, chrome must be launched with the --remote-debugging-port flag enabled: chromium --remote-debugging-port=9160 Now, in order to connect to the built-in WebSockets server, we have to know its URI and this requires some extra code. We will first use cURL to demonstrate this: $ curl localhost:9160/json [ { "description": "", ... "webSocketDebuggerUrl": "ws://localhost:9160/devtools/page/8937C189-5CED-8E34-E26E-A389641FE8FF" } ] That webSocketDebuggerUrl is the one we want. Let us write some Haskell code to automate obtaining it. We create a datatype to hold this info. Currently, we are only interested in a single field: > data ChromiumPageInfo = ChromiumPageInfo > { chromiumDebuggerUrl :: String > } deriving (Show) We will use aeson to parse the JSON. We need a FromJSON instance for our datatype: > instance FromJSON ChromiumPageInfo where > parseJSON (A.Object obj) = > ChromiumPageInfo <$> obj .: "webSocketDebuggerUrl" > parseJSON _ = mzero The http-conduit library can be used to do what we just did using curl: > getChromiumPageInfo :: Int -> IO [ChromiumPageInfo] > getChromiumPageInfo port = do > response <- Http.withManager $ \manager -> Http.httpLbs request manager > case A.decode (Http.responseBody response) of > Nothing -> error "getChromiumPageInfo: Parse error" > Just ci -> return ci > where > request = Http.def > { Http. , Http.port = port > , Http. } One remaining issue is that the JSON contains the WebSockets URL as a single string, and the WebSockets library expects a (host, port, path) triple. Luckily for us, the standard network library has a Network.URI module which makes this task pretty simple: > parseUri :: String -> (String, Int, String) > parseUri uri = fromMaybe (error "parseUri: Invalid URI") $ do > u <- Uri.parseURI uri > auth <- Uri.uriAuthority u > let port = case Uri.uriPort auth of (':' : str) -> read str; _ -> 80 > return (Uri.uriRegName auth, port, Uri.uriPath u) Once we are connected to Chromium, we will be sending commands to it. A simple Haskell datatype can be used to model these commands: > data Command = Command > { commandId :: Int > , commandMethod :: String > , commandParams :: [(String, String)] > } deriving (Show) We use the aeson library again here, to convert these commands into JSON data: > instance ToJSON Command where > toJSON cmd = A.object > [ "id" .= commandId cmd > , "method" .= commandMethod cmd > , "params" .= M.fromList (commandParams cmd) > ] What is left is a simple main function to tie it all together. > main :: IO () > main = do > (ci : _) <- getChromiumPageInfo 9160 > let (host, port, path) = parseUri (chromiumDebuggerUrl ci) > WS.runClient host port path $ \conn -> do > -- Send an example command > WS.sendTextData conn $ A.encode $ Command > { commandId = 1 > , , commandParams = [("url", "")] > } > > -- Print output to the screen > forever $ do > msg <- WS.receiveData conn > liftIO $ T.putStrLn msg Conclusion This is a very simple example of what you can do with Haskell and Chromium, but I think there are some pretty interesting opportunities to be found here. For example, I wonder if it would be possible to create a simple Selenium-like framework for web application testing in Haskell. Thanks to Gilles J. for a quick proofread and Ilya Grigorik for this inspiring blogpost!
http://jaspervdj.be/posts/2013-09-01-controlling-chromium-in-haskell.html
CC-MAIN-2014-49
en
refinedweb
US7512537B2 - NLP tool to dynamically create movies/animated scenes - Google PatentsNLP tool to dynamically create movies/animated scenes Download PDF Info - Publication number - US7512537B2US7512537B2 US11/086,880 US8688005A US7512537B2 US 7512537 B2 US7512537 B2 US 7512537B2 US 8688005 A US8688005 A US 8688005A US 7512537 B2 US7512537 B2 US 7512537B2 - Authority - US - United States - Prior art keywords - input - natural language - sentence - user - animation - 24 - 230000000875 corresponding Effects 0.000 claims description 7 - 230000036651 mood Effects 0.000 claims description 3 - 238000001514 detection method Methods 0.000 claims 1 - 230000002079 cooperative Effects 0.000 abstract description 2 - 238000003058 natural language processing Methods 0.000 description 27 - 238000010586 diagrams Methods 0.000 description 12 - 210000002356 Skeleton Anatomy 0.000 description 7 - 239000000203 mixtures Substances 0.000 description 7 - 210000003128 Head Anatomy 0.000 description 6 - 238000009877 rendering Methods 0.000 description 6 - 238000006243 chemical reactions Methods 0.000 description 5 - 238000004891 communication Methods 0.000 description 5 - 230000003068 static Effects 0.000 description 5 - 230000036536 Cave Effects 0.000 description 3 - 238000005516 engineering processes Methods 0.000 description 3 - 239000011435 rock Substances 0.000 description 3 - 230000002104 routine Effects 0.000 description 3 - 241000282326 Felis catus Species 0.000 description 2 - 210000000245 Forearm Anatomy 0.000 description 2 - 239000000976 inks Substances 0.000 description 2 - 230000002093 peripheral Effects 0.000 description 2 - 210000000988 Bone and Bones Anatomy 0.000 description 1 - 281000056986 Peoples Daily companies 0.000 description 1 - 281000171460 Rambus companies 0.000 description 1 - 241000220317 Rosa Species 0.000 description 1 - 230000001058 adult Effects 0.000 description 1 - 230000004075 alteration Effects 0.000 description 1 - 230000001427 coherent Effects0000035622 drinking Effects 0.000 description 1 - 235000021271 drinking Nutrition 0.000 description 1 - 239000004744 fabrics Substances 0.000 description 1 - 230000004634 feeding behavior Effects 0.000 description 1 - 239000000835 fibers Substances 0.000 description 1 - 281999990745 international associations companies 0.000 description 1 - 239000003550 marker Substances 0.000 description 1 - 230000005055 memory storage Effects 0.000 description 1 - 230000004048 modification Effects 0.000 description 1 - 238000006011 modification reactions Methods 0.000 description 1 - 230000003287 optical Effects 0.000 description 1 - 238000005457 optimization Methods 0.000 description 1 - 239000006072 pastes Substances 0.000 description 1 - 230000002123 temporal effects Effects 0.000 description 1 - 230000000007 visual effect Effects 0.000 description 1 - 239000011901 water Substances 0.000 - G—PHYSICS - G06—COMPUTING; CALCULATING; COUNTING - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL - G06T13/0012—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects Abstract The subject invention provides a unique system and method that facilitates integrating natural language input and graphics in a cooperative manner. In particular, as natural language input is entered by a user, an illustrated or animated scene can be generated to correspond to such input. The natural language input can be in sentence form. Upon detection of an end-of-sentence indicator, the input can be processed using NLP techniques and the images or templates representing at least one of the actor, action, background and/or object specified in the input can be selected and rendered. Thus, the user can nearly immediately visualize an illustration of his/her input. The input can be typed, written, or spoken—whereby speech recognition can be employed to convert the speech to text. New graphics can be created as well to allow the customization and expansion of the invention according to the user's preferences. Description The subject invention relates generally to animation and in particular, to generating an illustrated or animated scene that corresponds to natural language input in real time. Throughout the last several years, computer users have incorporated both desktop and mobile computers in their lives due in large part to the efficiency and convenience that these types of devices often provide. Despite the many advances of computer technology and its presence in so many different aspects of people's daily routines, some computer-related tasks continue to lack full performance optimization due to assumed obstacles and barriers; and thus remain inefficient and cumbersome to perform. The integration of natural language processing and graphics generation is one such example. In practical terms, imagine a user has a movie idea with a very weak or vague story line that he would like to propose to his supervisor. To support his relatively shallow text, he would like to add illustrations to help his supervisor visualize the idea. Using conventional systems and techniques, the user must scavenge through a multitude of image sources to find any images that, at best, remotely convey his text and/or the meaning of the text. Unfortunately, this task can be painstakingly slow, impracticable, and even hinder user productivity and performance. In educational scenarios, students are often tasked with creative writing assignments as a means to learn vocabulary and proper word usage and grammar, to improve writing skills and to foster creativity. However, many learning tools currently available to students inadequately satisfy these needs. In many cases, for instance, the student is limited to words or text only—no pictures can be incorporated. When including pictures is an option, too much time is often required to find the appropriate one or else only a limited variety of pictures is provided to the student. Thus, there remains a need for a more flexible system or tool that facilitates user efficiency and provide a novel approach to associating animations with natural language processing. In particular, the system and method provide for generating a scene or animation based at least in part upon text entered in a natural language form. This can be accomplished in part by selecting one or more images and templates according to the user's input and then applying the appropriate images to the appropriate templates to generate the animation. As a result, a user can readily view an illustration of their text as it is provided or entered (in natural language). Thus, graphics and/or animation can be rendered dynamically, thereby relieving the user of the task of searching through any number of databases for the relevant image(s) to correspond to his/her text. The subject invention can be accomplished in part by analyzing the user's natural language input and then rendering the most appropriate graphical illustration corresponding to the user's input. According to one aspect of the invention, natural language input can be analyzed one sentence at a time, for example, in order to generate a potentially different illustration for each sentence. The generation of each illustration can depend on the input. Through the use of natural language processing, various types of information can be extracted and identified from the input such as the “actor”, “action”, “location” or background, object, as well as other functional roles pertaining to the input including, but not limited to mood, color, dimension, or size. This information can be characterized in XML format, for example, which facilitates identifying and selecting the most suitable graphical image for each respective piece of information. Using the XML-based information, the appropriate image(s) can be accessed from a graphics library and assembled to create a scene that is representative of the input. It should be appreciated that other languages or formats in addition to XML can be utilized as well to carry out the subject invention and such are contemplated to fall within the scope of the invention. The graphics library can include a plurality of default actors, actions, objects, and backgrounds. For example, the default actors can include a man, a woman, a dog, a cat, etc. . . . Thus, when the user inputs “the woman jumped”, a generic image of a woman (female actor) jumping can be generated and visualized almost immediately after it is entered by the user. The number of times an action is performed can be determined by the form of the input. For instance, “the woman jumped” can indicate a rendering of the woman performing a single jump, whereas “the woman was jumping” can indicate a rendering of the woman jumping more than once or continuously until the next input is received and processed. In addition to the default set of graphics initially provided to the user, the graphics library can also be customized by each user. That is, the user can readily create his or her particular actors, actions, backgrounds, and objects as well as replace any existing ones. Consequently, the user can personalize the graphic environment and use the desired vocabulary such as slang terms, made-up terms, technical terms, and/or uncommon dictionary terms. Moreover, an environment is created that can provide an immediate visualization of a scene to the user based on any text entered by the user. Furthermore, as a plurality of text is entered, the scenes can accumulate and be stored in the order in which they were generated to yield a series of scenes. The scenes can be assembled and replayed such as in a movie format. Sound or speech can be added to individual scenes, actors, actions, objects, and/or backgrounds or to the overall series as well. The natural language input can be entered in the form of speech whereby various speech-to-text recognition techniques can be employed to translate the speech into a form suitable for natural language processing. Due to the extensibility of the invention, it can be utilized in a variety of personal, educational, and commercial applications across different age groups and languages. new graphics to correspond to actors, actions, objects, and backgrounds. For example, dimensions, position, and the like can be automatically optimized and accommodated for depending on the content of the input. terms “template” and “skeleton” are also used throughout the description of the subject invention. “Template” generally refers to a placeholder for an image. This placeholder can (eventually) move, turn, change size, etc. Templates can be used in different ways. For example, a template can be used as a placeholder of an object (e.g., a ball) that follows a parabolic trajectory when kicked (or otherwise acted upon) by an actor. In addition, a template can be used for an actor's head placeholder. Similarly, one can imagine a 2-D actor's body created with templates where each template contains a part of the body (right forearm, right shoulder, left forearm, etc.). Inside a template, one could also imagine having an animation (series of images) instead of a static image. For instance, a user may want to use an actor's head based on a series of images of a head that blink the eyes and move the mouth instead of a static head. Skeletons are generally more related to 3-D motion files (but the same could also be applied to 2-D). For instance, in the exemplary screen captures which follow (e.g., FIGS. 8-10 and 13-17, infra), a series of images for any action can be pre-rendered (i.e., the woman kick, the man kick, the dragon kick, the man drink, etc). For example, one motion file for “kick” can be applied to all or almost all the actor meshes. The motion file makes a skeleton do the action (e.g., kick) and then this same motion file can be applied to the different actors' meshes. This works because all of the bones of the actors' skeletons are named the same. For instance, the same motion file for the “kick” action can be applied to the mesh of a woman and the mesh of a man. The subject invention links the arrangement of graphical images with natural language processing. Thus, as natural language input is processed, appropriate images can be instantly rendered according to the natural language input. In practice, essentially any kind of natural language input can be illustrated. For example, as a user types in a story, a series of illustrated or animated scenes can be generated to go along with the story. Both children and adults can utilize this kind of tool to facilitate learning, story telling, communication, comprehension, and writing skills. For purposes of discussion, the subject invention will now be described from the perspective of a user writing a story; though it should be appreciated that many other applications are possible and within the scope of the invention. From at least the perspective of story writing, the invention mitigates the drudgery and the distraction of (manually) searching through art or other image stores for just the right picture by automatically selecting the most appropriate pictures and rendering them on the fly as the story is created and continues to unfold. Referring now to FIG. 1, there is shown a high-level block diagram of a natural language illustration system 100 in accordance with an aspect of the subject invention. The system includes a language processor component 110 that can receive input such as text or audio input from a user. The input can be provided by the user in natural language form such as “the dog jumped over the fence” rather than in a computer coded form. Furthermore, the language processor component 110 can support and analyze a language in natural language format. For example, aside from English, other languages can be processed as well. Once received, the language processor component 110 can analyze the input and extract any pertinent information that identifies the types or names of images to render and the manner in which to render them. This information can be communicated to an animation engine 120 which selects and renders the appropriate images to match to the user's input. The animation engine 120 can pull at least one image and/or at least one template from one or more databases to provide an adequate visual depiction of the user's input. The images can include color or animation and can be scaled to an appropriate dimension with respect to the other images included in the overall scene. Thus, by simply entering natural language input into the system 100, an animation or illustrated scene based at least in part upon that input can be created and readily viewed by the user. Referring now to FIG. 2, a natural language-to-illustration conversion system 200 is depicted in accordance with an aspect of the invention. In general, the system 200 can create a new scene that the user can replay for each statement entered into the system 200. The system 200 includes a natural language processing (NLP) component 210 that processes the statement into a form usable by an animation engine 220—which then dynamically illustrates and/or animates the statement for the user's view. In particular, an NLP module 230 can be called upon receipt of the new statement (input) when an indication of the end of a sentence or end-of-sentence indicator (e.g., hitting “Enter”, a hard return”, a period) is detected. The NLP component 210 can understand the basic semantic structure, or logical form, of a given sentence—that is, WHO (subject-actor) did WHAT ACTION (verb-action) to WHAT (object) WHERE (location or background)—based on conventional NLP guidelines. For example, the logical form of the sentence “On the beach the man kicked a ball.” is: The logical form information can also include different types of attributes for each actor, action, location, or object. Such attributes include but are not limited to dimension, size, color, and mood or expression. Once the sentence is converted to its logical form, the NLP module 230 can read the logical form information and eventually translate the logical form information into core terms. A core term is essentially a word that is associated with a graphic and that may have a plurality of synonyms that could be used in its place. Because natural language is infinite, it is practically impossible to manually associate every word with a graphic. Thus, to reduce the burden of rendering a graphic for every word, some words can be considered synonyms of other words already associated with graphics. For instance, consider the words dwelling, home, house, cabin, and cottage. Each of these words stands for related objects and hence could be illustrated in the same way. Therefore, it may be unnecessary to generate different graphics for each word. Instead, they can all be assigned the core term HOUSE and associated with the same corresponding graphic. So in the sentence—“My cat ran through the cottage.”—the NLP module 230 can identify that the core term for “cottage” is HOUSE and then use the core term HOUSE in the logical form rather than the input word (cottage). In addition to resolving synonym usage, there are many linguistic issues that must be resolved before information that is necessary and sufficient to represent the meaning of a sentence can be extracted for use by the graphics component. Anaphora resolution, negation, ellipsis, and syntactic variation are a few representative examples. When writing a story or any type of prose, it is inevitable that the user will employ pronouns to mitigate unnecessary repetition and to create a natural, coherent, and cohesive piece of text. For example, in the two samples (A and B) of text below, “man” is repeated in each sentence in sample A, whereas pronouns are used appropriately in sample B: The pronoun “He” in the second sentence and “him” in the third sentence refer to a man in the first sentence. Pronouns cannot be passed to the graphic component 220 without first associating them with the appropriate graphics. The problem of resolving the referent of a given pronoun (“anaphora resolution”) can be dealt with by the NLP component 210. The NLP component 210 can understand who “he” and “him” are and communicate this information to the graphics component 220, which otherwise would be at a loss to associate a graphic with the pronoun. The NLP component 210 includes the referents of pronouns (and core terms thereof), rather than the pronouns themselves, in the logical form. Hence, in the subject system 200, users can employ pronouns at their leisure to mitigate the redundancy of particular nouns as their stories develop. Other linguistic issues such as negation and ellipsis can also arise in the process of understanding text which can make it more challenging to communicate only interesting and relevant information to the graphics component 220. Take, for example, the following sentence which exemplifies both issues: The man was not jumping on the beach, but the woman was. Notice that only man is explicitly associated with jumping in this sentence. However, because the verb is negated, the NLP component 210 must be careful not to pass the information that man is the actor and jump is the action in this sentence on to the graphics component 220. Moreover, even though there is no explicit verb jump following woman, the NLP component 210 must extract the information that woman is the actor of interest and jump is the action of interest. In the end, the graphics of a woman, not a man, jumping on the beach should be generated to coincide with the author's (user's) intentions. Finally, syntactic variation can be resolved by the NLP component 210 as well. Sentences like C and D below are different syntactically, but the difference is normalized by the NLP component 210 so the system 200, in general, needs to do no extra work to generate the same graphics for either sentence (C and D): Thus, the NLP component 210 addresses and resolves synonym usage and a variety of linguistic issues before composing the logical form of a sentence. Overall, the fine-grained analysis of the NLP component 210 minimizes the work needed to be done by the graphics component 220 while providing the user with flexibility of expression. Once the logical form information is determined, it can be translated into output 240 in an XML format. It should be appreciated that in one approach, the XML formatted output includes only core terms rather than any input words; however other approaches can include any relevant input words in the output 240. The NLP component 210 communicates the output 240 to the animation engine 220 which includes an XML parser module 250 and a scene generator module 260. The output 240 from the NLP component 210 becomes the input for the XML parser module 250 that can read the actor, action, object, and/or location information. The XML parser module 250 calls the scene generator module 260 to access the relevant actor, action, object, and/or location from a graphics library 270. The scene generator module 260 arranges the respective images from the graphics library 270 to generate a scene 280 which may be in 2-D or 3-D space and either static or animated. The graphics library 270 can include a plurality of images for a variety of actors, actions, objects, and locations. Alternatively or in addition, the graphics library 270 can also include a plurality of templates/skeletons for actors, actions, locations, and/or objects to which one or more images can be applied to result in a dynamic graphical rendering of the user's sentence. In addition to using existing graphics or images in the graphics library 270, graphics or images can be rendered dynamically (on-the-fly) using 3-D meshes, motion files and/or texture files. The user can also customize the graphics library 270 by creating new images or modify existing ones. This can be accomplished in part by employing pen and ink technology such as on a tablet PC, importing (e.g., copy-paste; drag-drop) 2-D images such as photos or video stills, and/or adding sound. When a new image is made, the user assigns it a name which gets added to the NLP component's term dictionary or database as a core item or term. Thus, imagine a new actor is created and named “Gus”. Whenever the user enters a sentence and includes “Gus” therein, the image of Gus as created by the user is accessed from the graphics library 270 and is rendered in the scene 280. As demonstrated in the diagram 300 of FIG. 3, suppose that a user 310 has entered the following statement 320: A man kicked a ball in the cave. The marker of the end of an input (e.g., period, hard “return”, etc.) can be detected which signals the natural language processing of the text input to begin (330). The resulting output 340 in XML format can identify MAN as the actor, KICK as the action, BALL as the object, and CAVE as the background (location). This output 340 can then be passed to an animation engine 350 and in particular to an XML parser 360 that construes the output 340 and communicates it to a scene generator 370. The scene generator 370 retrieves a MAN graphic, a KICK graphic, a BALL graphic, and a CAVE graphic from one or more graphics stores 380 and renders the graphics as indicated by the XML format into a scene 390. The scene 390 can be animated depending on whether any of the graphics are animated and/or appear in 3-D. When the user is finished entering statements, the generated scenes can be combined for replay like a movie or slide show presentation. Each set of scenes (each movie) can be saved and replayed again at a later time. Audio can also be added to each scene or to the overall presentation. Furthermore, the graphics and “words” of each sentence can maintain a cooperative relationship; thus, each scene can maintain a link to its respective sentence. For instance, if the user wants to reorder at least one of the sentences, doing so also reorders the corresponding scene to ultimately produce a new movie. The new movie can be saved as well. As previously mentioned, the graphics library or database can be customized by the user, thereby allowing the user to alter or add new image items thereto. A block diagram of an exemplary image generation system 400 is depicted in FIG. 4 in accordance with an aspect of the subject invention. The system 400 includes an image generation component 410 that creates new image items such as actors, actions, locations and/or objects based on input from the user. The user can sketch or draw the new item or make use of one or more templates to facilitate the creation process. For example, to create a new male actor that resembles the user's friend TOM, the user can retrieve a male actor template (skeleton). The template can be of a “man” graphic capable of performing animation with the face portion left blank for the user to complete—either with a drawing or with a digital image of TOM's face. Once completed to the user's satisfaction, this particular graphic can be named TOM and stored in the appropriate image database 420. When the user enters a statement that includes the new actor's name, the NLP component (e.g., FIG. 2, 210) associates the name TOM with an identical new core term, TOM, and sends that information to the graphics component (e.g., FIG. 2, 220). Anytime the user includes TOM in a sentence, the TOM graphic can be rendered in the resulting scene. Pre-existing images can be altered by way of an image editor component 430. As with new graphics, pre-existing graphics can be replaced by changing color, texture, fabric patterns, sound, dimension, and the like. Replaced graphics can be stored as new graphics with new names or under their original names depending on user preferences. Any images saved to the image database(s) 420 can be accessed by a scene generator 440. Turning now to FIGS. 5-18, there is shown an exemplary sequence of screen captures demonstrating the employment of a dynamic natural language-to-illustration conversion system by a user as described in FIGS. 1-3, supra. FIG. 5 depicts an exemplary user interface 500 of the system when the system is launched. When assistance is needed before or during use of the system, a HELP screen 600 such as the one shown in FIG. 6 can be displayed to the user. The HELP screen 600 can include any or all of the terms for which graphics are associated (e.g., exist in the graphics library) as well as other tips or information to aid the user. Now suppose that the user is ready to begin writing his story. As shown in FIG. 7, a first sentence 710 (sentence 1) is entered: Once upon a time a dragon flew over a beach. Note that as indicated in the figure, a period is not yet entered and no scene is rendered in the viewing window 720. After entering the period (end-of-sentence indicator), an animated scene of the dragon flying over a beach can be seen in the viewing window 810 as demonstrated in the screen capture 800 in FIG. 8. As a guide, the sentence being illustrated can be viewed in the window as well 810. To further assist the user in determining or learning how a scene is generated, the user can select a debug control 820 to view the actor, action, background, and/or object extracted from the user's sentence. According to the first sentence, dragon is the ACTOR, fly is the ACTION which is performed continuously in this case, and beach is the BACKGROUND or location. In FIG. 9, the scene from the previous sentence is still viewable in the window 910 as the user enters sentence 2—Suddenly, it fell down—in the appropriate field. As soon as the period is entered, the user can see an animation of the dragon falling down in the window 1010 ( FIG. 10). Such action can occur continuously or a set number of times as defined by the user and/or by the particular term used (e.g., “fell”=1 time; “falling”=continuous). Now imagine that the user would like to add a new actor graphic. To do so, the user can select a “sketch” or other similar control 1020. Other navigation related controls 1030 may be present as well to assist the user in viewing previous scenes or to see all the scenes in order from beginning to end (publish 1040). When the sketch control 1020 is selected, a new window 1100 can be opened to display a plurality of image templates as well as other controls that facilitate the creation of the custom image. In this instance, the user is creating a custom boy (male) actor named Littleguy. Because the user is making use of the boy actor template, the user only is asked to fill in the head portion of the actor. The rest of the body appears and behaves in a default manner. As shown in FIG. 12, it appears that the user has imported (e.g., cut-pasted, drag-drop) a photo of a person's head and face and then manually added further detail to the hair to complete Littleguy's custom appearance. Once the user clicks “OK”, the image of Littleguy is saved to the graphics library or database. Continuing on with the story that began in FIG. 7, the user is entering his third sentence (sentence 3)—Littleguy kicked the dragon—in the appropriate field in FIG. 13. As soon as the period is entered, the user can see animation of Littleguy kicking the dragon in FIG. 14. The user continues with sentence 4—The stunned dragon rose and ran away as shown in FIG. 15 and animated in FIG. 16. In FIG. 17, the user enters his last sentence—Littleguy was thirsty and drank his water in the forest. Once again, after entering the period, the animation of Littleguy drinking in the forest is shown ( FIG. 18). As previously mentioned, new actor images can be created using a sketch feature or pad. In some cases, an appropriate template can be selected and employed to assist in the creation of new images. Alternatively, new images (or graphics) can be created without the use of a template (e.g., from scratch). For example, FIG. 19 illustrates an exemplary user interface for drawing new object images. In this case, a user has drawn an image named “rock”. When the term “rock” is entered by the user such as “The man kicked the rock”, a scene as illustrated in FIG. 20 can result. FIG. 21, there is a flow diagram of an exemplary method 2100 that facilitates linking graphics with natural language input to instantly render a scene. The method 2100 involves receiving natural language input from a user at 2110. The input can be in sentence form that is typed, written (e.g., tablet PC), or spoken (e.g., speech-to-text conversion) by the user. More importantly, the input does not need to be coded or arranged in any particular order, but rather can follow natural language patterns for any language supported by the method 2100. At 2120, the natural language input can be processed such as by using natural language processing techniques to yield a logical form of the input. The logical form can then be translated into output that is usable and understandable by a graphics selection engine. XML format is one such example. At 2130, the XML formatted output can be employed to select and render the appropriate graphics (from a graphics library or database) to generate a scene corresponding to the user's initial input. Referring now to FIG. 22, there is shown a flow diagram of an exemplary method 2200 that facilitates generating scenes which correspond to and illustrate a user's natural language input. The method 2200 involves receiving the user's input in sentence form to generate one scene per sentence such as one sentence at a time (at 2210). At 2220, an end-of-sentence indicator such as a period or hard return (e.g., hitting “Enter) can be detected. Once detected, the input can be analyzed to determine the actor, action, object, and/or background (output) specified in the input at 2230. At 2240, the output can be communicated to an animation engine, which can select a graphic for each of the actor, action, object, and/or background specified in the input (at 2250). At 2260, the selected graphics can be arranged and rendered to illustrate the user's natural language input. The previous can be repeated (beginning with 2210) at 2270 for each new input (e.g., sentence) received from the user. When no additional input is desired, the method can proceed to the method 2300 in FIG. 23. The method 2300 involves replaying the series of illustrated scenes in the order in which they were created as a cohesive movie-like presentation (2310). Each scene can include animation and color and appear in 2-D or 3-D space. At 2320, audio can be added to the presentation and/or to individual scenes of the presentation. In addition, should the user wish to change some portion of the story, at least one scene can be modified or reordered by moving the corresponding text to its new position or by changing the appropriate words in any one sentence. The presentation can be “reassembled” and then saved for later access or replay at 2340. Turning now to FIG. 24, there is illustrated a flow diagram of an exemplary graphics creation method 2400 that facilitates expanding and customizing the graphics library from which graphics can be selected to generate scenes as described hereinabove. The method 2400 involves selecting a type of graphic template at 2410, if available. For example, when the user wants to create a new male actor that resembles a friend, the user can select a male actor template or skeleton. At 2420, the relevant new portions of the male actor can be created by the user. This can be accomplished in part by dragging and dropping or importing images such as photos or drawings created elsewhere onto the respective portion of the template. The user can also draw directly on the template in the appropriate locations such as by using pen and ink technology (tablet PC). In practice, the male actor template may only require the user to add in the head and face portions. Other templates may allow for body portions to be modified or created from scratch. At 2430, the new graphic can be named and saved to the appropriate graphics database or library. Once the graphic is named, the graphic will be rendered whenever that particular name or synonyms thereof are recognized as being any one of an actor, action, object, and/or background specified in the user's natural language input. In order to provide additional context for various aspects of the subject invention, FIG. 25 and the following discussion are intended to provide a brief, general description of a suitable operating environment 2510 25. 25, an exemplary environment 2510 for implementing various aspects of the invention includes a computer 2512. The computer 2512 includes a processing unit 2514, a system memory 2516, and a system bus 2518. The system bus 2518 couples system components including, but not limited to, the system memory 2516 to the processing unit 2514. The processing unit 2514 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 2514. The system bus 25 2516 includes volatile memory 2520 and nonvolatile memory 2522. The basic input/output system (BIOS), containing the basic routines to transfer information between elements within the computer 2512, such as during start-up, is stored in nonvolatile memory 2522. By way of illustration, and not limitation, nonvolatile memory 2522 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory 2520 includes random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), static DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus (DRDRAM). Computer 2512 also includes removable/nonremovable, volatile/nonvolatile computer storage media. FIG. 25 illustrates, for example, a disk storage 2524. Disk storage 2524 includes, but is not limited to, devices like a magnetic disk drive, floppy disk drive, tape drive, Jaz drive, Zip drive, LS-100 drive, flash memory card, or memory stick. In addition, disk storage 2524 to the system bus 2518, a removable or non-removable interface is typically used such as interface 2526. It is to be appreciated that FIG. 25 describes software that acts as an intermediary between users and the basic computer resources described in suitable operating environment 2510. Such software includes an operating system 2528. Operating system 2528, which can be stored on disk storage 2524, acts to control and allocate resources of the computer system 2512. System applications 2530 take advantage of the management of resources by operating system 2528 through program modules 2532 and program data 2534 stored either in system memory 2516 or on disk storage 2524. It is to be appreciated that the subject invention can be implemented with various operating systems or combinations of operating systems. A user enters commands or information into the computer 2512 through input device(s) 2536. Input devices 2514 through the system bus 2518 via interface port(s) 2538. Interface port(s) 2538 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). Output device(s) 2540 use some of the same type of ports as input device(s) 2536. Thus, for example, a USB port may be used to provide input to computer 2512 and to output information from computer 2512 to an output device 2540. Output adapter 2542 is provided to illustrate that there are some output devices 2540 like monitors, speakers, and printers among other output devices 2540 that require special adapters. The output adapters 2542 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 2540 and the system bus 2518. It should be noted that other devices and/or systems of devices provide both input and output capabilities such as remote computer(s) 2544. Computer 2512 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 2544. The remote computer(s) 2544 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device or other common network node and the like, and typically includes many or all of the elements described relative to computer 2512. For purposes of brevity, only a memory storage device 2546 is illustrated with remote computer(s) 2544. Remote computer(s) 2544 is logically connected to computer 2512 through a network interface 2548 and then physically connected via communication connection 2550. Network interface 25) 2550 refers to the hardware/software employed to connect the network interface 2548 to the bus 2518. While communication connection 2550 is shown for illustrative clarity inside computer 2512, it can also be external to computer 2512. The hardware/software necessary for connection to the network interface 25) 1. A natural language illustration system comprising: a language processor component that analyzes natural language input that comprises at least one action from a user to determine a logical form of the input and translates the logical form into an output, the output comprising at least one of an actor, action, object, and background, wherein the natural language input comprises a plurality of sentences; and an animation engine that selects at least one image, applies at least one template to the at least one image and generates for each sentence an animation dynamically based on the at least one image and the at least one template, wherein each animation is linked to the sentence for which the animation was generated and conveys the meaning of the sentence, the animation engine arranges the animations in a same order as the sentences are provided in the natural language input and playing the animations in order to simulate a movie, wherein upon the user changing the order of the sentences the animation engine changes the order of the animations to match the changed order of the sentences without re-analyzing the natural language input and without generating new animations. 2. The system of claim 1, wherein the language processor component is a natural language processor component that comprises a natural language processor module that processes the natural language input to determine the logical form of the input. 3. The system of claim 1, wherein the output is in XML format. 4. The system of claim 3, wherein the animation engine comprises an XML parser component that receives XML formatted output from the language processor component and determines which images to retrieve and render based at least in part on the XML formatted output. 5. The system of claim 3, further comprising a scene generator that arranges the images in a manner consistent with a meaning of the natural language input. 6. The system of claim 1, further comprising a graphics library that comprises a plurality of images corresponding to a multitude of at least the following: actors, actions, backgrounds, and objects. 7. The system of claim 1, wherein the animation is generated in at least one of 2-D or 3-D space. 8. The system of claim 1, wherein the output further comprises at least one of size, dimension, age, mood, and color. 9. The system of claim 1, wherein the input comprises natural language text in sentence form. 10. The system of claim 9, wherein the animation engine generates one scene at a time after each sentence is entered. 11. The system of claim 1, wherein the language processor component begins analysis of the input upon detection of an end-of-sentence indicator in the input. 12. The system of claim 1, further comprising an image generation component that creates and stores new images that are individually associated with a particular term as determined by a user. 13. The system of claim 1, wherein the language processor component assigns a core term to a plurality of synonyms to mitigate needing a different image for each synonym and communicates the core term to the animation engine instead of the synonym specified in the input. 14. The system of claim 1, wherein the animation engine renders the scene after the natural language input is received by the language processor component. 15. A method that facilitates generating illustrations in an instant manner from natural language input comprising: employing a processor to execute the illustrations generating methodology, the illustrations generating methodology comprising: analyzing natural language input to determine a logical form of the input and translate the logical form into a formatted output, the output comprising at least one of an actor, action, object, and background, wherein the natural language input comprises a plurality of sentences; selecting at least one image; applying at least one template to the at least one image; generating for each sentence an animation dynamically based on the at least one image and the at least one template, wherein each animation is linked to the sentence for which the animation was generated and conveys the meaning of the sentence;. 16. The method of claim 15, wherein processing the natural language input begins after detecting an end-of sentence indicator. 17. The method of claim 15, further comprising creating at least one new image and adding it to a graphics library from which the images are selected based on the logical form information. 18. A natural language illustration system comprising: means for analyzing natural language input to determine a logical form of the input and translate the logical form into a formatted output, the output comprising at least one of an actor, action, object, and background, wherein the natural language input comprises a plurality of sentences; means for selecting at least one image; means for applying at least one template to the at least one image; means for generating for each sentence an animation dynamically based on the at least one image and the at least one template, wherein each animation is linked to the sentence for which the animation was generated and conveys the meaning of the sentence; and means. Priority Applications (1) Applications Claiming Priority (1) Publications (2) Family ID=37036294 Family Applications (1) Country Status (1) Cited By (19) Families Citing this family (20) Citations (13) - 2005 - 2005-03-22 US US11/086,880 patent/US7512537B2/en not_active Expired - Fee Related
https://patents.google.com/patent/US7512537
CC-MAIN-2021-39
en
refinedweb
In this user guide, we will learn to create a web server with BME680 environmental sensor, which is used to measure ambient temperature, barometric pressure, relative humidity, and gas (VOC) or Indoor air quality (IAQ). We will learn how to display sensor values on a web page using ESP32 and ESP8266 NodeMCU and MicroPython firmware. This web server will act as a weather station as it will show temperature, humidity, pressure and IAQ readings on the web page. We will use MicroPython firmware to build a responsive EP32/ESP8266 web server that can be accessed through any device which has a web browser, and the device should be connected to your local area network. That means the device should be connected to the same network to which the ESP32/ESP8266 board is connected. > We will cover the following content in this MicroPython tutorial: - How to create a BM680 web server using ESP32/ESP8266 and MicroPython - Display Temperature, Humidity, Gas, and Pressure readings on a web page with ESP32 and ESP8266 We have similar guide to display BME680 sensor readings on MicroPython shell:. BME680 Web Server Code: Display Temperature, Humidity, Gas, and Pressure In this part of the user guide, we will learn how to display readings obtained from a BME680 on a web server using ESP32/ESP8266 NodeMCU and MicroPython firmware. Follow the steps in order to successfully display sensor readings on a local webserver. - We will be required to make three files in our IDE. - BME680.py - boot.py - main.py Out of these three, we have already uploaded the BME680.py file to ESP32/ESP8266. This was the file that contains the BME680 library. - Similarly, we will also upload a boot.py file. This contains all the major configurations like the setting up of Pins, I2C GPIOs, importing BM680 library, and importing other relevant libraries. - Finally, we will upload the main.py file which contains our final code to configure the web server which runs after boot.py. boot.py File Copy the following code in your boot.py file and upload it your ESP board. Note: Make sure to change the Wi-Fi name and password to the one you are currently using. try: import usocket as socket except: import socket from time import sleep from machine import Pin, I2C import network import esp esp.osdebug(None) import gc gc.collect() from bme680 import * # ESP32 - Pin assignment i2c = I2C(scl=Pin(22), sda=Pin(21)) # ESP8266 - Pin assignment #i2c = I2C(scl=Pin(5), sda=Pin(4)) ssid = 'Enter your WiFi Name' password = 'Enter your WiFi Password' station = network.WLAN(network.STA_IF) station.active(True) station.connect(ssid, password) while station.isconnected() == False: pass print('Connection successful') print(station.ifconfig()) Importing MicroPython Libraries In this code, we first import all the modules and from the modules, we import the necessary classes. After that, we define the I2C pins for ESP32 and ESP8266 board. We have set the default I2C GPIO pins for both. Next, we will connect to the network. First, we will use sockets and the socket API of Python to develop our web server. Hence, we will import the socket library first. try: import usocket as socket except: import socket Now, we will import the sleep class from the time module to include delays. We will also import I2C, network, BME680, and Pin libraries. As we have to connect our ESP32/ESP8266 board with Wi-Fi that is why we will also import the network library. from time import sleep from machine import Pin, I2C import network from bme680 import * Vendor OS debugging messages are turned off through the following lines. import esp esp.osdebug(None) In these lines, we are calling the garbage collector. Object memories which are no longer used by the system are reclaimed through this. import gc gc.collect() Defining ESP32/ESP8266 GPIO Pins I2C. The third parameter specifies the maximum frequency for SCL to be used. # ESP32 - Pin assignment i2c = I2C(scl=Pin(22), sda=Pin(21)) # ESP8266 - Pin assignment #i2c = I2C(scl=Pin(5), sda=Pin(4)) Connect to WiFi Network Make sure you enter your Wi-Fi name in SSID and your wi-fi password as well so that your esp board connects with your local network. ssid = 'Enter your SSID name' password = 'Enter your SSID password here' In these lines, we are setting our ESP32/ESP8266 development boards as a WI-FI station. station = network.WLAN(network.STA_IF) #Connecting to the network In order to activate our Wi-Fi station, we write the following line: station.active(True) These next lines, authenticate our Wi-Fi and password and only proceeds with the program code when the ESP board is properly connected. station.connect(ssid, password) while station.isconnected() == False: pass BME680 main.py Now write the following code in a new file and save it as main.py. Make sure all your three files (bme680.py, boot.py, main.py) are uploaded to ESP32/ESP8266 and you can see them under the device option in uPyCraft IDE. def web_page(): bme = BME680_I2C(i2c=i2c) temp = str(round(bme.temperature, 2)) + ' C' hum = str(round(bme.humidity, 2)) + ' %' pres = str(round(bme.pressure, 2)) + ' hPa' gas = str(round(bme.gas/1000, 2)) + ' KOhms' html = """>""") response = web_page() conn.send('HTTP/1.1 200 OK\n') conn.send('Content-Type: text/html\n') conn.send('Connection: close\n\n') conn.sendall(response) conn.close() except OSError as e: conn.close() print('Connection closed') Create Web Page to Display BM280 Sensor Readings We start off our program code by defining a web_page() function. In order to build our web page, we have to create an HTML document that sets up the web page. This HTML variable is returned through the web_page function which we will discuss later. def web_page(): Inside the web_page() function, we create an object of the BME680 method named bme and specify the I2C communication protocol to read data from a sensor. You can also specify SPI pins if you are using SPI to get BME680 sensor readings. bme = BME680_I2C(i2c=i2c) After that, get the sensor reading by using an object “bme” on temperature, humidity, pressure, and gas methods. Also save these values in temp, hum, pres, and gas string variables. temp =str(round(bme.temperature, 2)) + ' C' hum = str(round(bme.humidity, 2)) + ' %' pres = str(round(bme.pressure, 2)) + ' hPa' gas = str(round(bme.gas/1000, 2)) + ' KOhms HTML and CSS File In this HTML document, we use cards, paragraphs, links, icons, headings and title tags to create a web page. This web page displays temperature, humidity, gas (VOC) or Indoor air quality (IAQ) and pressure readings of BME680 sensor. > This meta-tag http-equiv provides attributes to HTTP header. The http-equiv attribute takes many values or information to simulate header response. In this example, we use the http-equiv attribute to refresh the content of the web page after every specified time interval. Users aren’t required to refresh the web page to get updated sensor values. This line forces the HTML page to refresh itself after every 10 seconds. <meta http- We have explained rest o the HTML and CSS part of in our previously published articles. You can read these articles here: - ESP32/ESP8266 MicroPython Web Server – Control Outputs - MicroPython: DHT11/DHT22 Web Server with ESP32/ESP8266 - MicroPython: DS18B20 Web Server with ESP32/ESP8266(Weather Station) BME680 Web Server with MicroPython Socket API In these lines of code, we create a socket object called ‘s’. This is formed by creating a socket and stating its type. s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) By using the bind() method, we will bind the socket with an address. We are passing two parameters. The first one is an empty string ‘ ’, which specifies our ESP32/ESP8266 board and the second parameter is the port which we have stated as ‘80’. s.bind(('', 80)) Now we are creating a socket used for listening. We have passed ‘5’ as the parameter which means that 5 is the maximum number of queued connections. s.listen(5) Next, we use an infinite loop through which we will check when a user connects to the web server. In order to accept the connection, the accept() method is used. A new socket object is saved in the conn variable and it also saves the address there. while True: try: if gc.mem_free() < 102000: gc.collect() conn, addr = s.accept() conn.settimeout(3.0) print('Got a connection from %s' % str(addr)) By using send() and recv() methods, we exchange the data between the user and the web server. A request variable is created which saves it in the newly created socket. request = conn.recv(1024) conn.settimeout(None) request = str(request) Prints the whatever data is present in the request variable. print('Content = %s' % request) The variable ‘response’ contains the HTML text response = web_page() By using send() and sendall() methods, we will be sending the response to the socket user. conn.send('HTTP/1.1 200 OK\n') conn.send('Content-Type: text/html\n') conn.send('Connection: close\n\n') conn.sendall(response) This line closes the connection with the socket which was created. conn.close() except OSError as e: conn.close() print('Connection closed') In summary, we created a web server with ESP32/ESP8266 which could handle HTTP requests from a web client. Whenever the ESP32 or ESP8266 board received a request on its IP address, the program code creates a socket which in turn responds to the client request with an HTML page to display the current readings on the BME680 sensor regarding temperature, humidity, and pressure. BME680 MicroPython Web Server Demo To test the MicroPython BME680 web server with ESP32 and ESP8266, upload all the files to ESP32/ESP8266. After uploading MicroPython scripts, click on Enable/Reset button of ESP32: Sometimes later, your ESP board will make a connection with your WiFi router and shows a “successful connection” message and also prints the IP address on the MicroPython shell as follows: Now, open your web browser either on your laptop or mobile and type the IP address which we have found in the last step. As soon as you type the IP address on your web browser and hit enter. The ESP32/ESP8266 web server will receive an HTTP request. The web page function will be called. You will see the web page with the latest BME680 sensor values in your web browser: On Mobile, you will see web page with BME680 sensor readings like this: Video Demo: More MicroPython tutorials: - MicroPython: DS18B20 Web Server with ESP32/ESP8266(Weather Station) -: PWM with ESP32 and ESP8266
https://microcontrollerslab.com/micropython-bme680-web-server-esp32-esp8266/
CC-MAIN-2021-39
en
refinedweb
mbed LPC1114FN28 Data logger 1) Summary This is a redesign version. Original version is below link. Modification points are; 1) Change hardware construction way from bread board style to soldering on PCB 2) Stand alone operation -> separation mbed debug mode and operation mode 3) Energy save mode -> use wake up function from sleep mode (LPC1114 runs with around 20mA/Operation and around 2mA/Sleep) 4) Show all data on screen -> use 20 characters x 4 line LCD 2) Picture 3) Hardware Circuit 4) Software function 4-1) Wake up function I'm using following liblary. Thanks Mr. Erik Olieman. In the beginning, I could NOT use the library because LPC1114 needs some additional effort both software and hardware configuration. Below comments are only for LPC1114. (1) Please use dp24(P0_1), PwmOut as default setting. I have tried another pin, dp2(P0_9) but I couldn't success. If you assign default dp24, you don't need any declaration and others procedure. (2) Define external interrupt pin for trigger to wake up. I use dp25(P0_2) because it is easy to connect both pin dp24 and dp25. (3) Hardware configuration -> connect dp24 and dp25 (4) Software point of view, you can see a sample program. A sample program using WakeUp library #include "mbed.h" #include "WakeUp.h" #include "WakeInterruptIn.h" // Important!!: connect dp24 and dp25 WakeInterruptIn event(dp25); // wake-up from deepsleep mode by this interrupt DigitalOut myled(dp14); // LED for Debug int main() { WakeUp::calibrate(); while (true) { myled = 1; wait(2); myled = 0; WakeUp::set(2); // Wait with sleep wait(0.001); // looks important for works well } } 5) Software Import programLPC1114_data_logger Data logger: Sensors -> Barometer & temperature (BMP180), Humidity & temp. (RHT03), Sunshine (Cds): Display -> 20 chracters x 4 lines: Strage -> EEPROM (AT24C1024): Special functions -> Enter sleep mode to save current, reading the logging data via serial line Please log in to post comments.
https://os.mbed.com/users/kenjiArai/notebook/mbed-lpc1114fn28-data-logger/
CC-MAIN-2021-39
en
refinedweb
posix_trace_attr_destroy, posix_trace_attr_init - destroy and initialize the trace stream attributes object (TRACING) [OB TRC] #include <trace.h>#include XBD: - [EINVAL] - The value of attr is invalid. The posix_trace_attr_init() function shall fail if: - [ENOMEM] - Insufficient memory exists to initialize the trace attributes object. None. None. None. The posix_trace_attr_destroy() and posix_trace_attr_init() functions may be removed in a future version. posix_trace_create, posix_trace_get_attr, uname XBD <trace.h> First released in Issue 6. Derived from IEEE Std 1003.1q-2000. IEEE PASC Interpretation 1003.1 #123 is applied. The posix_trace_attr_destroy() and posix_trace_attr_init() functions are marked obsolescent. return to top of pagereturn to top of page
https://pubs.opengroup.org/onlinepubs/9699919799/functions/posix_trace_attr_init.html
CC-MAIN-2021-39
en
refinedweb
The Spring Ecosystem There are a two stable, mature stacks for building web applications in the Java ecosystem, and considering the popularity and strong adoption, the Spring Framework is certainly the primary solution. Spring offers a quite powerful way to build a web app, with support for dependency injection, transaction management, polyglot persistence, application security, first-hand REST API support, an MVC framework and a lot more. Traditionally, Spring applications have always required significant configuration and, for that reason, can sometimes build up a lot of complexity during development. That’s where Spring Boot comes in. The Spring Boot project aims to make building web application with Spring much faster and easier. The guiding principle of Boot is convention over configuration. Let’s have a look at some of the important features in Boot: - starter modules for simplifying dependency configuration - auto-configuration whenever possible - embedded, built-in Tomcat, Jetty or Undertow - stand-alone Spring applications - production-ready features such as metrics, health checks, and externalized configuration - no requirement for XML configuration In the following sections, we’re going to take a closer look at the necessary steps to create a Boot application and highlight some of the features in the new framework in more detail. Spring Boot Starters Simply put, starters are dependency descriptors that reference a list of libraries. To create a Spring Boot application, you first need to configure the spring-boot-starter-parent artifact in the parent section of the pom.xml: <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.5.3.RELEASE</version> <relativePath /> </parent> This way, you only need to specify the dependency version once for the parent. The value is then used to determine versions for most other dependencies – such as Spring Boot starters, Spring projects or common third-party libraries. The advantage of this approach is that it eliminates potential errors related to incompatible library versions. When you need to update the Boot version, you only need to change a single, central version, and everything else gets implicitly updated. Also note that there are more than 30 Spring Boot starters available, and the community is building more every day. A good starting point is creating a basic web application. To get started, you can simply add the web starter to your pom: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> If you want to enable Spring Data JPA for database access, you can add the JPA starter: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> Notice how we’re no longer specifying the version for either of these dependencies. Before we dive into some of the functionality in the framework, let’s have a look at another way we can bootstrap a project quickly. Spring Boot Initializr Spring Boot is all about simplicity and speed, and that starts with bootstrapping a new application. You can achieve that by using the Spring Boot Initializr page to download a pre-configured Spring Boot project, which you can then import into your IDE. The Initializr lets you select whether you want to create a Maven or Gradle project, the Boot version you want to use and of course the dependencies for the project: You can also select the “Switch to the full version” option, you can configure a lot more advanced options as well. Spring Boot Auto-Configuration Spring applications usually require a fair amount of configuration to enable features such as Spring MVC, Spring Security or Spring JPA. This configuration can take the form of XML but also Java classes annotated with @Configuration. Spring Boot aims to simplify this process by providing a sensible default configuration, based on the dependencies on the classpath and loaded automatically behind the scenes. This auto-configuration contains @Configuration annotated classes, intended to be non-invasive and only take effect if you have not defined them explicitly yourself. The approach is driven by the @Conditional annotation – which determines what auto-configured beans are enabled based on the dependencies on the classpath, existing beans, resources or System properties. It’s important to understand that, as soon as you define your configuration beans, then these will take precedence over the auto-configured ones. Coming back to our example, based on the starters added in the previous section, Spring Boot will create an MVC configuration and a JPA configuration. To work with Spring Data JPA, we also need to set up a database. Luckily, Boot provides auto-configuration for three types of in-memory databases: H2, HSQL, and Apache Derby. All you need to do is add one of the dependencies to the project, and an in-memory database will be ready for use: <dependency> <groupId>com.h2database</groupId> <artifactId>h2</artifactId> </dependency> The framework also auto-configures Hibernate as the default JPA provider. If you want to replace part of the auto-configuration for H2, the defaults are smart enough to gradually step back and allow you to do that while still preserving the beans you’re not explicitly defining yourself. For example, if you want to add initial data to the database, you can create files with standard names such as schema.sql, data.sql or import.sql to be picked up automatically by Spring Boot auto-configuration, or you can define your DataSource bean to load a custom named SQL script manually: @Configuration public class PersistenceConfig { @Bean public DataSource dataSource() { EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder(); EmbeddedDatabase db = builder.setType(EmbeddedDatabaseType.H2) .addScript("mySchema.sql") .addScript("myData.sql") .build(); return db; } } This has the effect of overriding the auto-configured DataSource bean, but not the rest of the default beans that make up the configuration of the persistence layer. Before moving on, note that it’s also possible to define an entirely new custom auto-configuration that can then be reused in other projects as well. The Entry Point in a Boot Application The entry point for a Spring Boot application is the main class annotated with @SpringBootApplication: @SpringBootApplication public class Application { public static void main(String[] args){ SpringApplication.run(Application.class, args); } } This is all we need to have a running Boot application. The shortcut @SpringBootApplication annotation is equivalent to using @Configuration, @EnableAutoConfiguration, and @ComponentScan and will pick up all config classes in or bellow the package where the class is defined. Embedded Web Server Out of the box, Spring Boot launches an embedded web server when you run your application. If you use a Maven build, this will create a JAR that contains all the dependencies and the web server. This way, you can run the application by using only the JAR file, without the need for any extra setup or web server configuration. By default, Spring Boot uses an embedded Apache Tomcat 7 server. You can change the version by specifying the tomcat.version property in your pom.xml: <properties> <tomcat.version>8.0.43</tomcat.version> </properties> Not surprisingly, the other supported embedded servers are Jetty and Undertow. To use either of these, you first need to exclude the Tomcat starter: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> <exclusions> <exclusion> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-tomcat</artifactId> </exclusion> </exclusions> </dependency> Then, add the Jetty or the Undertow starters: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-jetty</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-undertow</artifactId> </dependency> Advanced Externalized Configuration Another super convenient feature in Boot is the ability to easily configure the behavior of an application via external properties files, YAML files, environment variables and command-line arguments. These properties have standard names that will be automatically picked up by Boot and evaluated in a set order. The advantage of this feature is that we get to run the same deployable-unit/application in different environments. For example, you can use the application.properties file to configure an application’s port, context path, and logging level: server.port=8081 server.contextPath=/springbootapp logging.level.org.springframework.web: DEBUG This can be a significant simplification in more traditional environments but is a must in virtualized and container environments such as Docker. Of course, ready-to-go deployable units are a great first step, but the confidence you have in your deployment process is very much dependent on both the tooling you have around that process but also the practices within your organization. Metrics Beyond project setup improvements and operational features, Boot also brings in some highly useful functional features, such as internal metrics and health checks – all enabled via actuators. To start using the actuators in the framework, you need to add only a single dependency: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> The relevant information is available via endpoints that can be accessed out-of-the-box: /metrics and /health. We also get access to other endpoints such as: /info which displays application information and /trace that shows the last few HTTP requests coming into the system. Here are just some of the types of metrics we get access to by default: - system-level metrics – total system memory, free system memory, class load information, system uptime - DataSource metrics – for each DataSource defined in your application, you can check the number of active connections and the current usage of the connection pool - cache metrics – for each specified cache, you can view the size of the cache and the hit and miss ratio - Tomcat session metrics – the number of active and maximum sessions You can also measure and track your own metrics, customize the default endpoints as well as add your own, entirely new endpoint. Now, tracking and exposing metrics is quite useful until you get to production, but of course, once you do get to production, you need a more mature solution that’s able to go beyond simply displaying current metrics. That’s where Retrace is a natural next step to help you drill down into the details of the application runtime, but also keep track of this data over time. Health Checks One of the primary and most useful endpoints is, not surprisingly, /health. This will expose different information depending on the accessing user and on whether the enclosing application is secured. By default, when accessed without authentication, the endpoint will only indicate whether the application is up or down. But, beyond the simple up or down status, the state of different components in the system can be displayed as well – such as the disk or database or other configured components like a mail server. The point where /health goes beyond just useful is with the option to create your custom health indicator. Let’s roll out a simple enhancement to the endpoint: @Component public class HealthCheck implements HealthIndicator { @Override public Health health() { int errorCode = check(); // perform some specific health check if (errorCode != 0) { return Health.down() .withDetail("Error Code", errorCode).build(); } return Health.up().build(); } public int check() { // Your logic to check health return 0; } } As you can see, this allows you to use your internal system checks and make those a part of /health. For example, a standard check here would be to do a quick persistence-level read operation to ensure everything’s running and responding as expected. Similarly to metrics, as you move towards production, you’ll definitely need a proper monitoring solution to keep track of the state of the application. Within Retrace, the People Metrics feature is a simple way you can define and watch these custom metrics. A powerful step forward from just publishing metrics or health info on request is the more advanced Key Transactions feature in Retrace – which can be configured to actively monitor specific operations in the system and notify you when the metrics associated with that operation become problematic. Example Application After setting up the project, you can simply start creating controllers or customizing the configuration. Let’s create a simple application that manages a list of employees. First, let’s add an Employee entity and repository based on Spring Data: @Entity public class Employee { @Id @GeneratedValue(strategy=GenerationType.IDENTITY) private long id; private String name; // standard constructor, getters, setters } public interface EmployeeRepository extends JpaRepository<Employee, Long>{ } Let’s now create a controller to manipulate employee entities: @RestController public class EmployeeController { private EmployeeRepository employeeRepository; public EmployeeController(EmployeeRepository employeeRepository){ this.employeeRepository = employeeRepository; } @PostMapping("/employees") @ResponseStatus(HttpStatus.CREATED) public void addEmployee(@RequestBody Employee employee){ employeeRepository.save(employee); } @GetMapping("/employees") public List<Employee> getEmployees(){ return employeeRepository.findAll(); } } You also need to create the mySchema.sql and myData.sql files: create table employee(id int identity primary key, name varchar(30)); insert into employee(name) values ('ana'); To avoid Spring Boot recreating the employee table and removing the data, you need to set the ddl-auto Hibernate property to update: spring.jpa.hibernate.ddl-auto=update Testing the Application Spring Boot also provides excellent support for testing; all included in the test starter: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> </dependency> This starter automatically adds commonly used dependencies for testing in Spring such as Spring Test, JUnit, Hamcrest, and Mockito. As a result, you can create a test for the controller mappings, by using the @SpringBootTest annotation with the configuration classes as parameters. Let’s add a JUnit test that creates an Employee record, then retrieves all the employees in the database and verifies that both the original record added and the one just created are present: @RunWith(SpringRunner.class) @SpringBootTest(classes = Application.class) @WebAppConfiguration public class EmployeeControllerTest { private static final String CONTENT_TYPE = "application/json;charset=UTF-8"; private MockMvc mockMvc; @Autowired private WebApplicationContext webApplicationContext; @Before public void setup() throws Exception { this.mockMvc = MockMvcBuilders .webAppContextSetup(webApplicationContext) .build(); } @Test public void whenCreateEmployee_thenOk() throws Exception { String employeeJson = "{\"name\":\"john\"}"; this.mockMvc.perform(post("/employees") .contentType(CONTENT_TYPE) .content(employeeJson)) .andExpect(status().isCreated()); this.mockMvc.perform(get("/employees")) .andExpect(status().isOk()) .andExpect(content().contentType(CONTENT_TYPE)) .andExpect(jsonPath("$", hasSize(2))) .andExpect(jsonPath("$[0].name", is("ana"))) .andExpect(jsonPath("$[1].name", is("john"))); } } Simply put, @SpringBootTest allows us to run integration tests with Spring Boot. It uses the SpringBootContextLoader as the default ContextLoader and automatically searches for a @SpringBootConfiguration class if no specific classes or nested configuration are defined. We also get a lot of additional and interesting support for testing: - @DataJpaTest annotation for running integration tests on the persistence layer - @WebMvcTest which configures the Spring MVC infrastructure for a test - @MockBean which can provide a mock implementation for a required dependency - @TestPropertySource used to set locations of property files specific to the test Conclusions Ever since Spring sidelined XML configuration and introduced its Java support, the core team has had simplicity and speed of development as primary goals. Boot was the next natural step in that direction, and it has certainly achieved this goal. The adoption of Boot has been astounding over the last couple of years, and a 2.0 release will only accelerate that trend going forward. And a large part of that success is the positive reaction of the community to the production-grade features that we explored here. Features that were traditionally built from the ground up by individual teams are now simply available by including a Boot starter. That is not only very useful, but also very cool. The full source code of all the examples in the article is available here, as a ready to run Boot project. -
https://stackify.com/spring-boot-level-up/
CC-MAIN-2021-39
en
refinedweb
DBMI Library (driver) - open update cursor. More... #include <stdlib.h> #include <grass/dbmi.h> #include "macros.h" #include "dbstubs.h" Go to the source code of this file. DBMI Library (driver) - open update cursor. (C) 1999-2008 by the GRASS Development Team This program is free software under the GNU General Public License (>=v2). Read the file COPYING that comes with GRASS for details. Definition in file d_openupdate.c. Open update cursor. Definition at line 26 of file d_openupdate.c.
https://grass.osgeo.org/programming7/d__openupdate_8c.html
CC-MAIN-2021-39
en
refinedweb
React Best Practices and Useful Functions Lately React has been becoming the new tool used by developers to create everything from a single page application to mobile applications. But since I started going deeper into React I have seen all this “cool” node modules that are extremely badly developed. They follow no rules, the components are way too big. They use state for pretty much everything, and they don’t leverage dumb components. Anyone with enough experience understands how much of a hassle this is to maintain and how much load it takes on the browser if you render every component every time. In this article I will walk you through React best practices, both on how to setup React and how to make it extremely fast. Please note I will keep updating this article as new practices emerge. Before you start reading please note React is a functional programming (FP) library. If you don’t know what FP is, please read this Stack Exchange response. Use ES6 (transpiled with Babel) ES6 will make your life a lot easier. It makes JS look and feel more modern. One great example with ES6 are Generators and Promises. Remember when there was a time that you had to do a bunch of nested calls to be able to do an asynchronous call. Well now I am glad to welcome you too Synchronous Asynchronous JS, (yea it’s as cool as it sounds). One great example of this are generators: Where this: Turns into this: Use Webpack The decision to use Webpack is simple: Hot reloading, minified files, node modules :), and you can split your applications into small pieces and lazy load them. If you are planing on building a large scale applcation I recommend reading this article to understand how lazy loading works. Use JSX If you come from a web development background JSX will feel very natural. But if your background is not in web develop don’t worry too much, JSX is very easy to learn. Note that if don’t you don’t use JSX the application will be harder to maintain. Always look at your bundle size One tip to making your bundle way smaller is to import directly from the node module root path. Do this: import Foo from ‘foo/Foo’ instead of: Import {Foo} from ‘foo’ Keep your components small (Very Small) Rule of thumb is that if your render method has more than 10 lines is probably way too big. The whole idea of using React is code re-usability so if you just throw everything in one file you are losing the beauty of React. What needs to have its own component? When thinking React you need to think about code reusability and code structure. You would not create a component for a simple input element that only has one line of code. A component is a mix of “HTML” elements that the user perceives as one. I know that this sounds a little bit strange but lets see an example. Take a look at this login screen: What is the structure behind it. You have a form that contains two inputs a button and a link. Lets see this in code: Whats wrong here? Repetition. The inputs contain the same structure, why not make that a component. Now that is beautiful. I will not get into much detail here but if you want to continue reading go to Thinking React. What about State? Best practice in React is to minimize your state. One thing to keep in mind is to avoid synchronizing state between a child and parent. In the above example we have a form in that form the state is passed down as a props from the view and every time the user updates the password and username the state is updated in the view and not the form. Use ShouldComponentUpdate for performance optimization React is a templating language that renders EVERY TIME the props or the state of the component changes. So imagine having to render the entire page every time there in an action. That takes a big load on the browser. That’s where ShouldComponentUpdate comes in, whenever React is rendering the view it checks to see if shouldComponentUpdate is returning false/true. So whenever you have a component that’s static do yourself a favor and return false. Or if is not static check to see if the props/state has changed. If you want to read more on performance optimization read my article on React Perf Think about inmutability If you are coming from Scala or other high performance languages inmutability is a concept that you are probably really familiar with. But if you are not familiar with the concept think of immutability like having twins. They are very much alike and they look the same but they are not equal. For example: What just happened? Object2 was created as a reference of object1 that means that in every sense of the word object2 is another way of referencing object1. When I created object3 I created a new object that has the same structure as object1. The Object.assign function takes a new object and then clones the structure of object1 therefore creating a new reference so when you compare object1 to object3 they are different. Why is this significant? Think of performance optimization, I mentioned above that React renders everytime the state of a component changes. When using the ShouldComponentUpdate function instead of doing a deep check to see if all the attributes are different you can simply compare the objects. If you want to know more keep reading this article. Use Smart and Dumb Components There is not much to say here other than you don’t need to have a state in every object. Ideally you will have a smart parent view and all the children are dumb components that just receive props and don’t have any logic in it. You can create a dumb component by doing something like this: Dumb components are also easier to debug because it enforces the top down methodology that React is all about. Use PropTypes PropTypes help you set data validation for components. This is very useful when debugging and when working with multiple developers. Anyone working with a large team should read this article. Always bind the functions in the constructor method Whenever working with components that uses state try to bind components in the constructor method. Keep in mind that you can use ES7 now and you can bind the functions using something like this now (Instead of binding in the constructor): someFunction = () => {} Use Redux/Flux When dealing with data you want to use either Flux or Redux. Flux/Redux allows you to handle data easily and takes the pain away from handling front end cache. I personally use Redux because it forces you to have a more controlled file structure. Keep in mind that sometimes it is very useful to use Redux/Flux but you might not need to keep the entire state of your application in one plain object. Read more about it here. Use normalizr Now that we are talking about data, I am going to take a moment and introduce you to the holy grail of dealing with complex data structures. Normalizr structures your nested json objects to simple structures that you can modify on the fly. File structure I am going to make a blunt statement here and say that I have only seen 2 file structures with React/Redux that makes things easy to work with. First structure: Second structure: Use Containers (Depracated — 2017 Update Next Chapter) The reason you want to use containers that pass down the data is because you want to avoid having to connect every view to a store when dealing with Flux/Redux. The best way to do this is to create two containers. One containing all secure views (views that need authentication) and one containing all insecure views. The best way to create a parent container is to clone the children and pass down the desired props. Example: Use Templates instead of Containers While working with containers and cloning the props down to the views I found a more efficient way to do this. The way I recomend it now is instead of using containers is to create a BaseTemplate that is extended by an AuthenticatedTemplate and a NotAuthenticatedBaseTemplate. In does two templates you will add all the functionality and the state that is shared across all the none-authenticated/authenticated views. In the views instead of extending React.Component you extend the template. This way you avoid cloning any objects and you can filter the props that are send down the component tree. Avoid Refs Refs will only make your code harder to maintain. Plus when you use refs you are manipulating the virtual Dom directly. Which means that the component will have to re-render the whole Dom tree. Use Prop validation PropTypes will make your life a lot better when working with a large team. They allow you to seemly debug your components. They will also make your debugging a lot easier. In a way you are setting standard requirements for a specific component. Other comments I want to emphasize that that you should split all of your components into individual files. Use a router: There is no much to say here other than if you want to create a single page app you need a router. I personally use React Router. If you are using flux remember to unbind the store listening to change events. You don’t want to create memory leaks. If you want to change the title of your application dynamically you can do something like this: This repo is a great example of React/Redux authentication. Whats new in 2017 Get ready for a major rewrite. The creators of react are now rebuilding reacts core. It has better performance, better animations, and more APIs you can leverage to build large applications. You can read more here. Useful helper functions The following function is an Object comparison function. Usage: check if state or props have change in shouldComponentUpdate Create reducer dynamically Usage: Create constants: Render if: Usage: render component if somethingTrue Change state value dynamically: My webpack.config.js Keep reading about React High Performance Applications here. If you liked this article, please click that green 👏below so others can enjoy it. Also please ask questions or leave notes with any useful practices or useful functions you know. Follow me on twitter @nesbtesh
https://nesbtesh.medium.com/react-best-practices-a76fd0fbef21
CC-MAIN-2021-39
en
refinedweb
Post Syndicated from Bruce Schneier original Tag Archives: humor APT Horoscope Post Syndicated from Bruce Schneier original This. Nihilistic Password Security Questions Post Syndicated from Bruce Schneier original Posted three years ago, but definitely appropriate for the times. Friday Squid Blogging: T-Shirt Post Syndicated from Bruce Schneier original As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered. Read my blog posting guidelines here. Resetting Your GE Smart Light Bulb Post Syndicated from Bruce Schneier original If you need to reset the software in your GE smart light bulb — firmware version 2.8 or later — just follow these easy instructions:. Welcome to the future! Introducing Furball — Rapid Content Delivery Post Syndicated from Yev original When we first introduced Catblaze back in 2016, people called us crazy. Back then we thought we were onto something after our CAT scans filtered through our 200 petabytes of data and saw that over 50% of the material was cat pictures. Well, we couldn’t have been more right. We’re now backing up well over 750 petabytes of data and our Catblaze service accounts for almost one-third of it. Similarly to how we keep iterating on Backblaze Cloud Backup, we knew we had to keep working on Catblaze as well. Introducing the Furball With that many cat photos being uploaded to us, we saw the need to introduce a rapid cat delivery system to our Catblaze offering, which can concatenate your new cat content with your existing cat content in the cloud! We took a look at our B2 Fireball and realized that we could create a similar system that was integrated with our Restore by Mail service to deliver your cat content currently backed up to Catblaze. Introducing: the Furball! How it Works You’ve uploaded all of your cat content to Catblaze, and you feel great. But oh no! Disaster has struck when your frisky feline flipped Fresca all over your computer. Now you need some way to get all of those feline files back. Fret not! Just log in to Catblaze, navigate to the Furball page, enter your address, et voila — all of your cat content will be coughed up to the Furball and sent directly back to you! One thing to keep in mind is that Backblaze typically has 11 nines of durability, Catblaze and the Furball program are down to only 9 lives of durability, but don’t let that worry you. Furball Pricing You might be thinking that the Furball is priceless, but we’re pleased to announce that it won’t actually cost a paw and a leg! We recently increased our Restore by Mail capabilities and Furball pricing is similar at just $189 per Furball for up to 8 terabytes of frisky feline fun! *Please note that the Furball ships as soon as we can actually get the cat contents inside the box. This might sound easy but herding cats has proven tricky in the past. Also, please make sure you send us clean data —- otherwise it takes us a while to scrub it. As the old saying goes, “litterbox in, litterbox out.” Availability and Pricing Catblaze is available now for just $6/month per computer for an unlimited amount of cat-related content. We’ll also let you upload other content as well, but we know it’s not as important. Just cough up $189 and the Furball is yours — sent overnight by PetEx! Building on the success of our Restore Return Refund program, you can return your Furball to us within 30 days and we’ll refund you the money! You can try Catblaze for free by visiting: though you might find that it says Backblaze once installed. We regret this typo. The post Introducing Furball — Rapid Content Delivery appeared first on Backblaze Blog | Cloud Storage & Cloud Backup. The Maltese MacBook Post Syndicated from Roderick Bauer original — Editor It was a Wednesday and it would have been just like any other Wednesday except Apple was making its big fall product announcements. Just my luck, I had to work in the San Francisco store, which meant that I was the genius who got to answer all the questions. I had just finished helping a customer who claimed that Siri was sounding increasingly impatient answering his questions when I looked up and saw her walk in the door. Her blonde hair was streaked with amethyst highlights and she was wearing a black leather tutu and polished kneehigh Victorian boots. Brightly colored tattoos of Asian characters ran up both of her forearms and her neck. Despite all that, she wouldn’t particularly stand out in San Francisco, but her cobalt-blue eyes held me and wouldn’t let me go. She rapidly reduced the distance between the door and where I stood behind the counter at the back of the store. She plopped a Surface Pro computer on the counter in front of me. “I lost my data,” she said. I knew I’d seen her before, but I couldn’t place where. “That’s a Windows computer,” I said. She leaned over the counter towards me. Her eyes were even brighter and bluer close up. “Tell me something I don’t know, genius,” she replied. Then I remembered where I’d seen her. She was on Press: Here a while back talking about her new startup. She was head of software engineering for a Google spinoff. Angels all over the valley were fighting to throw money at her project. I had been sitting in my boxers eating cold pizza and watching her talk on TV about AI for Blockchain ML. She was way out of my league. “I was in Valletta on a business trip using my MacBook Pro,” she said. “I was reading Verlaine on the beach when a wave came in and soaked Reggie. ‘Reggie’ is my MacBook Pro. Before I knew it, it was all over.” Her eyes misted up. “You know that there isn’t an Apple store in Malta, don’t you?” she said. “We have a reseller there,” I replied. “But they aren’t geniuses, are they?” she countered. “No, they’re not.” She had me there. “I had no choice but to buy this Surface Pro at a Windows shop on Strait Street to get me through the conference. It’s OK, but it’s not Reggie. I came in today to get everything made right. You can do that for me, can’t you?” I looked down at the Surface Pro. We weren’t supposed to work on other makes of computers. It was strictly forbidden in the Genius Training Student Workbook. Alarms were going off in my head telling me to be careful: this dame meant nothing but trouble. “Well?” she said. I made the mistake of looking at her and lingering just a little too long. Her eyes were both shy and probing at the same time. I felt myself falling head over heels into their inky-blue depths. I shook it off and gradually crawled back to consciousness. I told myself that if a customer’s computer needs help, it doesn’t make any difference what you think of the computer, or which brand it is. She’s your customer, and you’re supposed to do something about it. That’s the way it works. Damn the Genius Training Student Workbook. “OK,” I said. “Let’s take care of this.” I asked her whether she had files on the Surface Pro she needed to save. She told me that she used Backblaze Cloud Backup on both the new Surface Pro and her old MacBook Pro. My instincts had been right. This lady was smart. “That will make it much easier,” I told her. “We’ll just download the backed up files for both your old Macbook Pro and your Surface Pro from Backblaze and put them on a new MacBook Pro. We’ll be done in just a few minutes. You know about Backblaze’s Inherit Backup State, right? It lets you move your account to a new computer, restore all your files from your backups to the computer, and start backing up again without having to upload all your files again to the cloud. “What do you think?” she asked. I assumed she meant that she already knew all about Inherit Backup State, so I went ahead and configured her new computer. I was right. It took me just a little while to get her new MacBook Pro set up and the backed up files restored from the Backblaze cloud. Before I knew it, I was done. “Thanks” she said. “You’ve saved my life.” Saved her life? My head was spinning. She turned to leave. I wanted to stop her before she left. I wanted to tell her about my ideas for an AI-based intelligent customer support agent. Maybe she’d be impressed. But she was already on her way towards the door. I thought she was gone forever but she stopped just before the door. She flipped her hair back over her shoulder as she turned to look at me. “You really are a genius.” She smiled and walked out of the store and out of my life. My eyes lingered on the swinging door as she crossed the street and disappeared into the anonymous mass of humanity. I thought to myself: she’ll be back. She’ll be back to get a charger, or a Thunderbolt to USB-C adaptor, or Magsafe to USB-C, or Thunderbolt 3 to Thunderbolt 2, or USB-C to Lightning, or USB-A to USB-C, or DisplayPort to Mini DisplayPort, or HDMI to DisplayPort, or vice versa. Yes, she’ll be back. I panicked. Maybe she’ll take the big fall for Windows and I’ll never see her again. What if that happened? Then I realized I was just being a sap. Snap out of it! I’ll wait for her no matter what happens. She deserves that. The post The Maltese MacBook appeared first on Backblaze Blog | Cloud Storage & Cloud Backup. xkcd on Voting Computers Post Syndicated from Bruce Schneier original OMG The Stupid It Burns SUPER game night 3: GAMES MADE QUICK??? 2.0 Post_1<<_3<<_8<<_11<<_12<<?? XKCD’s Smartphone Security System Post Syndicated from Bruce Schneier original Security Vulnerabilities in Star Wars Post Syndicated from Bruce Schneier original A fun video describing some of the many Empire security vulnerabilities in the first Star Wars movie. Happy New Year, everyone. "Santa Claus is Coming to Town" Parody Post Syndicated from Bruce Schneier original Wondermark on Security Post Syndicated from Bruce Schneier original More notes on US-CERTs IOCs Post Syndicated from Robert Graham original Yet another Russian attack against the power grid, and yet more bad IOCs from the DHS US-CERT. IOCs are “indicators of compromise“, things you can look for in order to order to see if you, too, have been hacked by the same perpetrators. There are several types of IOCs, ranging from the highly specific to the uselessly generic. A uselessly generic IOC would be like trying to identify bank robbers by the fact that their getaway car was “white” in color. It’s worth documenting, so that if the police ever show up in a suspected cabin in the woods, they can note that there’s a “white” car parked in front. But if you work bank security, that doesn’t mean you should be on the lookout for “white” cars. That would be silly. This is what happens with US-CERT’s IOCs. They list some potentially useful things, but they also list a lot of junk that waste’s people’s times, with little ability to distinguish between the useful and the useless. An example: a few months ago was the GRIZZLEYBEAR report published by US-CERT. Among other things, it listed IP addresses used by hackers. There was no description which would be useful IP addresses to watch for, and which would be useless. Some of these IP addresses were useful, pointing to servers the group has been using a long time as command-and-control servers. Other IP addresses are more dubious, such as Tor exit nodes. You aren’t concerned about any specific Tor exit IP address, because it changes randomly, so has no relationship to the attackers. Instead, if you cared about those Tor IP addresses, what you should be looking for is a dynamically updated list of Tor nodes updated daily. And finally, they listed IP addresses of Yahoo, because attackers passed data through Yahoo servers. No, it wasn’t because those Yahoo servers had been compromised, it’s just that everyone passes things though them, like email. A Vermont power-plant blindly dumped all those IP addresses into their sensors. As a consequence, the next morning when an employee checked their Yahoo email, the sensors triggered. This resulted in national headlines about the Russians hacking the Vermont power grid. Today, the US-CERT made similar mistakes with CRASHOVERRIDE. They took a report from Dragos Security, then mutilated it. Dragos’s own IOCs focused on things like hostile strings and file hashes of the hostile files. They also included filenames, but similar to the reason you’d noticed a white car — because it happened, not because you should be on the lookout for it. In context, there’s nothing wrong with noting the file name. But the US-CERT pulled the filenames out of context. One of those filenames was, humorously, “svchost.exe”. It’s the name of an essential Windows service. Every Windows computer is running multiple copies of “svchost.exe”. It’s like saying “be on the lookout for Windows”. Yes, it’s true that viruses use the same filenames as essential Windows files like “svchost.exe”. That’s, generally, something you should be aware of. But that CRASHOVERRIDE did this is wholly meaningless. What Dragos Security was actually reporting was that a “svchost.exe” with the file hash of 79ca89711cdaedb16b0ccccfdcfbd6aa7e57120a was the virus — it’s the hash that’s the important IOC. Pulling the filename out of context is just silly. Luckily, the DHS also provides some of the raw information provided by Dragos. But even then, there’s problems: they provide it in formatted form, for HTML, PDF, or Excel documents. This corrupts the original data so that it’s no longer machine readable. For example, from their webpage, they have the following: import “pe” import “hash” Among the problems are the fact that the quote marks have been altered, probably by Word’s “smart quotes” feature. In other cases, I’ve seen PDF documents get confused by the number 0 and the letter O, as if the raw data had been scanned in from a printed document and OCRed. If this were a “threat intel” company, we’d call this snake oil. The US-CERT is using Dragos Security’s reports to promote itself, but ultimate providing negative value, mutilating the content. This, ultimately, causes a lot of harm. The press trusted their content. So does the network of downstream entities, like municipal power grids. There are tens of thousands of such consumers of these reports, often with less expertise than even US-CERT. There are sprinklings of smart people in these organizations, I meet them at hacker cons, and am fascinated by their stories. But institutionally, they are dumbed down the same level as these US-CERT reports, with the smart people marginalized. There are two solutions to this problem. The first is that when the stupidity of what you do causes everyone to laugh at you, stop doing it. The second is to value technical expertise, empowering those who know what they are doing. Examples of what not to do are giving power to people like Obama’s cyberczar, Michael Daniels, who once claimed his lack of technical knowledge was a bonus, because it allowed him to see the strategic picture instead of getting distracted by details. “Only a year? It’s felt like forever”: a twelve-month retrospective Post Syndicated from Alex Bate original This weekend saw my first anniversary at Raspberry Pi, and this blog marks my 100th post written for the company. It would have been easy to let one milestone or the other slide had they not come along hand in hand, begging for some sort of acknowledgement. The day Liz decided to keep me So here it is! Joining the crew Prior to my position in the Comms team as Social Media Editor, my employment history was largely made up of retail sales roles and, before that, bit parts in theatrical backstage crews. I never thought I would work for the Raspberry Pi Foundation, despite its firm position on my Top Five Awesome Places I’d Love to Work list. How could I work for a tech company when my knowledge of tech stretched as far as dismantling my Game Boy when I was a kid to see how the insides worked, or being the one friend everyone went to when their phone didn’t do what it was meant to do? I never thought about the other side of the Foundation coin, or how I could find my place within the hidden workings that turned the cogs that brought everything together. … when suddenly, as if out of nowhere, a new job with a dream company. #raspberrypi #positive #change #dosomething 12 Likes, 1 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “… when suddenly, as if out of nowhere, a new job with a dream company. #raspberrypi #positive…” A little luck, a well-written though humorous resumé, and a meeting with Liz and Helen later, I found myself the newest member of the growing team at Pi Towers. Ticking items off the Bucket List I thought it would be fun to point out some of the chances I’ve had over the last twelve months and explain how they fit within the world of Raspberry Pi. After all, we’re about more than just a $35 credit card-sized computer. We’re a charitable Foundation made up of some wonderful and exciting projects, people, and goals. High altitude ballooning (HAB) Skycademy offers educators in the UK the chance to come to Pi Towers Cambridge to learn how to plan a balloon launch, build a payload with onboard Raspberry Pi and Camera Module, and provide teachers with the skills needed to take their students on an adventure to near space, with photographic evidence to prove it. All the screens you need to hunt balloons. . We have our landing point and are now rushing to Therford to find the payload in a field. . #HAB #RasppberryPi 332 Likes, 5 Comments – Raspberry Pi (@raspberrypifoundation) on Instagram: “All the screens you need to hunt balloons. . We have our landing point and are now rushing to…” I was fortunate enough to join Sky Captain James, along with Dan Fisher, Dave Akerman, and Steve Randell on a test launch back in August last year. Testing out new kit that James had still been tinkering with that morning, we headed to a field in Elsworth, near Cambridge, and provided Facebook Live footage of the process from payload build to launch…to the moment when our balloon landed in an RAF shooting range some hours later. “Can we have our balloon back, please, mister?” Having enjoyed watching Blue Peter presenters send up a HAB when I was a child, I marked off the event on my bucket list with a bold tick, and I continue to show off the photographs from our Raspberry Pi as it reached near space. Spend the day launching/chasing a high-altitude balloon. Look how high it went!!! #HAB #ballooning #space #wellspacekinda #ish #photography #uk #highaltitude 13 Likes, 2 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “Spend the day launching/chasing a high-altitude balloon. Look how high it went!!! #HAB #ballooning…” You can find more information on Skycademy here, plus more detail about our test launch day in Dan’s blog post here. Dear Raspberry Pi Friends… My desk is slowly filling with stuff: notes, mementoes, and trinkets that find their way to me from members of the community, both established and new to the life of Pi. There are thank you notes, updates, and more from people I’ve chatted to online as they explore their way around the world of Pi. *heart melts* By plugging myself into social media on a daily basis, I often find hidden treasures that go unnoticed due to the high volume of tags we receive on Facebook, Twitter, Instagram, and so on. Kids jumping off chairs in delight as they complete their first Scratch project, newcomers to the Raspberry Pi shedding a tear as they make an LED blink on their kitchen table, and seasoned makers turning their hobby into something positive to aid others. It’s wonderful to join in the excitement of people discovering a new skill and exploring the community of Raspberry Pi makers: I’ve been known to shed a tear as a result. Meeting educators at Bett, chatting to teen makers at makerspaces, and sharing a cupcake or three at the birthday party have been incredible opportunities to get to know you all. You’re all brilliant. The Queens of Robots, both shoddy and otherwise Last year we welcomed the Queen of Shoddy Robots, Simone Giertz to Pi Towers, where we chatted about making, charity, and space while wandering the colleges of Cambridge and hanging out with flat Tim Peake. Queen of Robots @simonegiertz came to visit #PiTowers today. We hung out with cardboard @astro_timpeake and ate chelsea buns at @fitzbillies #Cambridge. . We also had a great talk about the educational projects of the #RaspberryPi team, #AstroPi and how not enough people realise we’re a #charity. . If you’d like to learn more about the Raspberry Pi Foundation and the work we do with #teachers and #education, check out our website –. . How was your day? Get up to anything fun? 597 Likes, 3 Comments – Raspberry Pi (@raspberrypifoundation) on Instagram: “Queen of Robots @simonegiertz came to visit #PiTowers today. We hung out with cardboard…” And last month, the wonderful Estefannie ‘Explains it All’ de La Garza came to hang out, make things, and discuss our educational projects. Estefannie on Twitter Ahhhh!!! I still can’t believe I got to hang out and make stuff at the @Raspberry_Pi towers!! Thank you thank you!! Meeting such wonderful, exciting, and innovative YouTubers was a fantastic inspiration to work on my own projects and to try to do more to help others discover ways to connect with tech through their own interests. Those ‘wow’ moments Every Raspberry Pi project I see on a daily basis is awesome. The moment someone takes an idea and does something with it is, in my book, always worthy of awe and appreciation. Whether it be the aforementioned flashing LED, or sending Raspberry Pis to the International Space Station, if you have turned your idea into reality, I applaud you. Some of my favourite projects over the last twelve months have not only made me say “Wow!”, they’ve also inspired me to want to do more with myself, my time, and my growing maker skill. Museum in a Box on Twitter Great to meet @alexjrassic today and nerd out about @Raspberry_Pi and weather balloons and @Space_Station and all things #edtech ⛅🛰⛅🛰 🤖🤖 Projects such as Museum in a Box, a wonderful hands-on learning aid that brings the world to the hands of children across the globe, honestly made me tear up as I placed a miniaturised 3D-printed Virginia Woolf onto a wooden box and gasped as she started to speak to me. Jill Ogle’s Let’s Robot project had me in awe as Twitch-controlled Pi robots tackled mazes, attempted to cut birthday cake, or swung to slap Jill in the face over webcam. Jillian Ogle on Twitter @SryAbtYourCats @tekn0rebel @Beam Lol speaking of faces… Every day I discover new, wonderful builds that both make me wish I’d thought of them first, and leave me wondering how they manage to make them work in the first place. Space We have Raspberry Pis in space. SPACE. Actually space. Raspberry Pi on Twitter New post: Mission accomplished for the European @astro_pi challenge and @esa @Thom_astro is on his way home Twelve months later, this still blows my mind. And let’s not forget… - The chance to visit both the Houses of Parliment and St James’s Palace - Going to a Doctor Who pre-screening and meeting Peter Capaldi, thanks to Clare Sutcliffe There’s no need to smile when you’re #DoctorWho. 13 Likes, 2 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “There’s no need to smile when you’re #DoctorWho.” We’re here. Where are you? . . . . . #raspberrypi #vidconeu #vidcon #pizero #zerow #travel #explore #adventure #youtube 1,944 Likes, 30 Comments – Raspberry Pi (@raspberrypifoundation) on Instagram: “We’re here. Where are you? . . . . . #raspberrypi #vidconeu #vidcon #pizero #zerow #travel #explore…” - Making a GIF Cam and other builds, and sharing them with you all via the blog Made a Gif Cam using a Raspberry Pi, Pi camera, button and a couple LEDs. . When you press the button, it takes 8 images and stitches them into a gif file. The files then appear on my MacBook. . Check out our Twitter feed (Raspberry_Pi) for examples! . Next step is to fit it inside a better camera body. . #DigitalMaking #Photography #Making #Camera #Gif #MakersGonnaMake #LED #Creating #PhotosofInstagram #RaspberryPi 19 Likes, 1 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “Made a Gif Cam using a Raspberry Pi, Pi camera, button and a couple LEDs. . When you press the…” The next twelve months Despite Eben jokingly firing me near-weekly across Twitter, or Philip giving me the ‘Dad glare’ when I pull wires and buttons out of a box under my desk to start yet another project, I don’t plan on going anywhere. Over the next twelve months, I hope to continue discovering awesome Pi builds, expanding on my own skills, and curating some wonderful projects for you via the Raspberry Pi blog, the Raspberry Pi Weekly newsletter, my submissions to The MagPi Magazine, and the occasional video interview or two. It’s been a pleasure. Thank you for joining me on the ride! The post “Only a year? It’s felt like forever”: a twelve-month retrospective appeared first on Raspberry Pi. John Oliver is wrong about Net Neutrality Post Syndicated from Robert Graham original Tune in now to catch @lastweetonight with @iamjohnoliver on why we need net neutrality and Title II. — EFF (@EFF) May 8, 2017 The command-line, for cybersec Post. An SQL Injection Attack Is a Legal Company Name in the UK Post Syndicated from Bruce Schneier original Someone just registered their company name as ; DROP TABLE “COMPANIES”;– LTD. Reddit thread. Obligatory xkcd comic. Election-Day Humor Post Syndicated from Bruce Schneier original This was written in 2004, but still holds true today.
https://noise.getoto.net/tag/humor/
CC-MAIN-2021-39
en
refinedweb
- Terminology - Object Identity and Type - Reference Counting and Garbage Collection - References and Copies - First-Class Objects - Built-in Types for Representing Data - Built-in Types for Representing Program Structure - Built-in Types for Interpreter Internals - Object Behavior and Special Methods Built-in Types for Representing Program Structure In Python, functions, classes, and modules are all objects that can be manipulated as data. Table 3.9 shows types that are used to represent various elements of a program itself. Table 3.9 Built-in Python Types for Program Structure Note that object and type appear twice in Table 3.9 because classes and types are both callable as a function. Callable Types Callable types represent objects that support the function call operation. There are several flavors of objects with this property, including user-defined functions, built-in functions, instance methods, and classes. User-Defined Functions User-defined functions are callable objects created at the module level by using the def statement or with the lambda operator. Here’s an example: def foo(x,y): return x + y bar = lambda x,y: x + y A user-defined function f has the following attributes: In older versions of Python 2, many of the preceding attributes had names such as func_code, func_defaults, and so on. The attribute names listed are compatible with Python 2.6 and Python 3. Methods Methods are functions that are defined inside a class definition. There are three common types of methods—instance methods, class methods, and static methods: class Foo(object): def instance_method(self,arg): statements @classmethod def class_method(cls,arg): statements @staticmethod def static_method(arg): statements An instance method is a method that operates on an instance belonging to a given class. The instance is passed to the method as the first argument, which is called self by convention. A class method operates on the class itself as an object. The class object is passed to a class method in the first argument, cls. A static method is a just a function that happens to be packaged inside a class. It does not receive an instance or a class object as a first argument. Both instance and class methods are represented by a special object of type types.MethodType. However, understanding this special type requires a careful understanding of how object attribute lookup (.) works. The process of looking something up on an object (.) is always a separate operation from that of making a function call. When you invoke a method, both operations occur, but as distinct steps. This example illustrates the process of invoking f.instance_method(arg) on an instance of Foo in the preceding listing: f = Foo() # Create an instance meth = f.instance_method # Lookup the method and notice the lack of () meth(37) # Now call the method In this example, meth is known as a bound method. A bound method is a callable object that wraps both a function (the method) and an associated instance. When you call a bound method, the instance is passed to the method as the first parameter (self). Thus, meth in the example can be viewed as a method call that is primed and ready to go but which has not been invoked using the function call operator (). Method lookup can also occur on the class itself. For example: umeth = Foo.instance_method # Lookup instance_method on Foo umeth(f,37) # Call it, but explicitly supply self In this example, umeth is known as an unbound method. An unbound method is a callable object that wraps the method function, but which expects an instance of the proper type to be passed as the first argument. In the example, we have passed f, a an instance of Foo, as the first argument. If you pass the wrong kind of object, you get a TypeError. For example: >>> umeth("hello",5) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: descriptor 'instance_method' requires a 'Foo' object but received a 'str' >>> For user-defined classes, bound and unbound methods are both represented as an object of type types.MethodType, which is nothing more than a thin wrapper around an ordinary function object. The following attributes are defined for method objects: One subtle feature of Python 3 is that unbound methods are no longer wrapped by a types.MethodType object. If you access Foo.instance_method as shown in earlier examples, you simply obtain the raw function object that implements the method. Moreover, you’ll find that there is no longer any type checking on the self parameter. Built-in Functions and Methods The object types.BuiltinFunctionType is used to represent functions and methods implemented in C and C++. The following attributes are available for built-in methods: For built-in functions such as len(), __self__ is set to None, indicating that the function isn’t bound to any specific object. For built-in methods such as x.append, where x is a list object, __self__ is set to x. Classes and Instances as Callables Class objects and instances also operate as callable objects. A class object is created by the class statement and is called as a function in order to create new instances. In this case, the arguments to the function are passed to the __init__() method of the class in order to initialize the newly created instance. An instance can emulate a function if it defines a special method, __call__(). If this method is defined for an instance, x, then x(args) invokes the method x.__call__(args). Classes, Types, and Instances When you define a class, the class definition normally produces an object of type type. Here’s an example: >>> class Foo(object): ... pass ... >>> type(Foo) <type 'type'> The following table shows commonly used attributes of a type object t: When an object instance is created, the type of the instance is the class that defined it. Here’s an example: >>> f = Foo() >>> type(f) <class '__main__.Foo'> The following table shows special attributes of an instance i: The __dict__ attribute is normally where all of the data associated with an instance is stored. When you make assignments such as i.attr = value, the value is stored here. However, if a user-defined class uses __slots__, a more efficient internal representation is used and instances will not have a __dict__ attribute. More details on objects and the organization of the Python object system can be found in Chapter 7. Modules The module type is a container that holds objects loaded with the import statement. When the statement import foo appears in a program, for example, the name foo is assigned to the corresponding module object. Modules define a namespace that’s implemented using a dictionary accessible in the attribute __dict__. Whenever an attribute of a module is referenced (using the dot operator), it’s translated into a dictionary lookup. For example, m.x is equivalent to m.__dict__["x"]. Likewise, assignment to an attribute such as m.x = y is equivalent to m.__dict__["x"] = y. The following attributes are available:
https://www.informit.com/articles/article.aspx?p=1357182&seqNum=7
CC-MAIN-2021-39
en
refinedweb
Heads up! To view this whole video, sign in with your Courses account or enroll in your free 7-day trial. Sign In Enroll CSS in Django5:12 with Lacey Williams Henschel Add some app-specific static content to your site. This is CSS or other static files that are only needed by one particular app, not the whole project. Understand how to include app-specific content in your project, and why you might want to do so. Managing Static Files: The Django documentation goes into more detail about how Django serves static content in different environments. If you'd like more detail than what we've gone into here, I recommend checking out the docs. Always read the docs! You can also use template tags to get at other data in your project, 0:00 including your static files like your CSS. 0:05 In Django, you need to store your static files in a special way. 0:08 It's best to store your project-wide static files in a directory called assets. 0:14 In our case, the setup looks like this. 0:19 To add CSS that's specific to an app however, 0:22 you need to follow a slightly different pattern. 0:26 If we wanted to add CSS to our courses app, 0:29 we'd need to add a directory called static underneath our courses app. 0:33 That directory contains its own directory also called courses. 0:38 Static slash courses then contains directories for your CSS, 0:43 your JavaScript files, and any images you're using in your app. 0:47 This is so the Django can always find the files you're referring to. 0:53 Adding a directory with the same name as your app inside your static folder 0:58 feels a little weird, but it helps you avoid namespace collisions later on. 1:03 For example, if your project had an app called admin, and 1:09 another called main site, and both of those apps had static 1:13 files associated with them, they might have files with the same names. 1:17 Styles.css is a really common way to name a basic CSS file. 1:21 When Django tries to find styles.css in the static folder for 1:28 your admin app, if you have the extra directory with the name of your app, 1:33 then Django goes to the CSS directory in static/admin, 1:38 and that extra admin helps Django know it's in the right place. 1:42 This is called namespacing. 1:47 So let's try it out. 1:49 Make sure to reload your workspace, 1:52 because the new workspace has some new CSS files. 1:54 We added some CSS to our courses app. 1:58 Let's walk through how that's set up before we get started. 2:01 First, we created a new directory in the course's app called static. 2:04 Next, we created a directory within the static directory 2:10 with the same name as our app, courses. 2:13 Finally, we created a directory called CSS inside the course's directory. 2:17 This is where we store the CSS file that is specific to this app. 2:22 Now, we just need to add this new CSS to our templates, but we have a problem,. 2:27 This is app specific CSS, so we should not add it to layout.html. 2:32 We don't want this CSS to be used on other pages in this project 2:38 that aren't in the course's app. 2:43 So, we should add a new block tag to layout.html. 2:45 So we'll add that here, right below the existing style sheet. 2:51 We just open up a new block tag, call it static, and immediately close it. 2:55 We add the static block after the original style sheet 3:01 because we don't want to have to add the project wide CSS to each template. 3:05 The static tag is for app specific CSS in other templates. 3:10 Now, let's open up course list.html in the course's app and 3:15 add our app specific CSS to this file. 3:20 So first, right here above the block title, we will add our block static tag. 3:24 And I like to immediately close it, so I don't have to remember to do that later. 3:32 And we can also save ourselves a little bit of typing by copying the link 3:36 to the existing style sheet and pasting it here. 3:41 We just have to remember to change this path. 3:45 Now here, we're going to be a little more specific about the path to the CSS file. 3:48 So we'll say courses/css/courses.css. 3:55 This is so Django will definitely know where to look to find this particular 4:01 CSS file. 4:05 We also need to add the load tag to the top of this page. 4:06 So, load static from static files. 4:10 This lets Django know that we intend to load some new static content. 4:15 Now, we just need to start our server. 4:19 So we will change directories into learning site and 4:22 then use python manage.py 4:26 ruserver on our port to take a look at our site. 4:30 So here we can see that the titles of the courses are navy blue, 4:39 instead of the maroon color that you see in the header. 4:43 This is because this particular page is using our app-specific CSS. 4:47 If we open the CSS file, we see that the CSS that we changed 4:52 is the color of these headers, which is now navy blue. 4:56 So that's the static tag in a nutshell. 5:01 In the next video, we'll get into some useful built-in filters, and 5:04 start building our own custom template filters. 5:08
https://teamtreehouse.com/library/customizing-django-templates/template-tags-and-filters/css-in-django
CC-MAIN-2020-40
en
refinedweb
[SOLVED]Send Serial ASCII string to Web Server I have a lopy with expansion board I have connected pin 4 of the lopy to pin 3 (tx) of a serial cable aswell as ground to ground. I am using cmd echo hello > COM9 from windows shell, but I get as a hex string as an output. b'\x00\x80\x80\x80\x00\x00\x80\x00\x80\x80\x80\x00\x80\x80\x80\x00\x00\x80\x00' import machine uart1 = machine.UART(1, baudrate=9600, bits=8, parity=None, stop=1) while True: print(uart1.readall()) time.sleep(1) Is this expected output? My goal is my family has a weigh station and want to have the weight displayed in their office which is 10m away from weigh machine. The weigh station out puts ASCII string via RS-232 connection as follows <STX> <SIGN> <WEIGHT(7)> <STATUS> <EXT> STX: Start of transmission character (ASCII 02) ETX: End of transmission character (ASCII 03) SIGN: The sign of the weight reading (space for positive, dash (-) for negative) WEIGHT(7): A seven character string containing the current weight including decimal point, then first character isa space. Leading zero blanking applies. STATUS: Provides information on the weight reading. The characters G?N?U?O?M?E represent Gross/ Net/ Underload/ Overload/ Motion/ Error. If the output is in HEX how can I represent this to POST in usable data? Can I use ubinascii conversion? Any help would be great. I managed to get this to work thanks to your help, used a sparkfun MAX3232 to step down the RS232 to UART and ASCII strings appear as expected and was able to upload them to a web server. @jcaron damn that makes sense, so I would need some kind of step down can this be done by an ardiuno I have one lying around. @jcaron I ommit that he say about RS232! @jimpower RS232 is not an UART. And 5V levels can kill your lopy board it is 3V3 device. If you need to connect it to your PC use TTL converter e.g. USB->UART. You then can have virtual COM on your PC. Isn’t there an issue with levels? I.e. +/- 5V for regular serial port compared to 0/3V for the LoPy or something similar? @jimpower Then only some wiring problem must be. You have connected P4which is default RX for UART1 with TX of your serial converter For test connect also RX from converter to TX ( P3). And look if all is ok. And if possible test Lopy without expansion board. @jimpower no this looks like garbage what is your putty com settings? UPDATE can you clarify this I used putty and got this output from set /p x="hello world" <nul >.\COM9 above command can be run from windows CMD after setting com settings by modecommand you can not run it from putty it send this as text Thanks for your reply, I used putty and got this output from set /p x="hello world" <nul >\.\COM9 b'fr3rfr\x193r\x19rfr3r\x19r3rfrfr3rfr3rgr3rfrgrfr3r3rfrg\x00\xa9::\n*\x00yFrgrgrggr3rfrfrgrfrfrf3r3rfr\x19r\trMr\x12r\x12rHrKr\trcr\x0cr\x06r\x06r\x08rgr3rgr3rfr\x19r' Does this look accurate? @jimpower said in Send Serial ASCII string to Web Server: \x00\x80\x80\x80\x00\x00\x80\x00\x80\x80\x80\x00\x80\x80\x80\x00\x00\x80\x00' this is not hellosentence first look if you have same port configuration on PC type modeinto command line and look like is your port really configured it can be different than showing in device manager ;-) to see how to configure port, type mode /? e.g. mode COM1 BAUD=9600 PARITY=n DATA=8 STOP=1 rts=off cts=off and to send data from command without line break and spaces use set /p x="hello world" <nul >\\.\COM1 but better use any com port software like putty, comport monitor from Arduino IDE ...
https://forum.pycom.io/topic/2594/solved-send-serial-ascii-string-to-web-server
CC-MAIN-2020-40
en
refinedweb
surprisingly I didn't find a straight-forward description on how to draw a circle with matplotlib.pyplot (please no pylab) taking as input center (x,y) and radius r. I tried some variants of this: import matplotlib.pyplot as plt circle=plt.Circle((0,0),2) # here must be something like circle.plot() or not? plt.show() ... but still didn't get it working. You need to add it to an axes. A Circle is a subclass of an Artist, and an axes has an add_artist method. Here's an example of doing this: import matplotlib.pyplot as plt circle1 = plt.Circle((0, 0), 0.2, color='r') circle2 = plt.Circle((0.5, 0.5), 0.2, color='blue') circle3 = plt.Circle((1, 1), 0.2, color='g', clip_on=False) fig, ax = plt.subplots() # note we must use plt.subplots, not plt.subplot # (or if you have an existing figure) # fig = plt.gcf() # ax = fig.gca() ax.add_artist(circle1) ax.add_artist(circle2) ax.add_artist(circle3) fig.savefig('plotcircles.png') This results in the following figure: The first circle is at the origin, but by default clip_on is True, so the circle is clipped when ever it extends beyond the axes. The third (green) circle shows what happens when you don't clip the Artist. It extends beyond the axes (but not beyond the figure, ie the figure size is not automatically adjusted to plot all of your artists). The units for x, y and radius correspond to data units by default. In this case, I didn't plot anything on my axes ( fig.gca() returns the current axes), and since the limits have never been set, they defaults to an x and y range from 0 to 1. Here's a continuation of the example, showing how units matter: circle1 = plt.Circle((0, 0), 2, color='r') # now make a circle with no fill, which is good for hi-lighting key results circle2 = plt.Circle((5, 5), 0.5, color='b', fill=False) circle3 = plt.Circle((10, 10), 2, color='g', clip_on=False) ax = plt.gca() ax.cla() # clear things for fresh plot # change default range so that new circles will work ax.set_xlim((0, 10)) ax.set_ylim((0, 10)) # some data ax.plot(range(11), 'o', color='black') # key data point that we are encircling ax.plot((5), (5), 'o', color='y') ax.add_artist(circle1) ax.add_artist(circle2) ax.add_artist(circle3) fig.savefig('plotcircles2.png') which results in: You can see how I set the fill of the 2nd circle to False, which is useful for encircling key results (like my yellow data point). import matplotlib.pyplot as plt circle1=plt.Circle((0,0),.2,color='r') plt.gcf().gca().add_artist(circle1) A quick condensed version of the accepted answer, to quickly plug a circle into an existing plot. Refer to the accepted answer and other answers to understand the details. By the way: gcf()means Get Current Figure gca()means Get Current Axis
https://pythonpedia.com/en/knowledge-base/9215658/plot-a-circle-with-pyplot
CC-MAIN-2020-40
en
refinedweb
javacv NullpointerException loading image. I'm trying to execute this simple java class to test opencv and javacv correct installation: import static com.googlecode.javacv.cpp.opencv_core.*; import static com.googlecode.javacv.cpp.opencv_highgui.*; import com.googlecode.javacv.CanvasFrame; public class demo1 { public static void main(String[] args) { //Load image img1 as IplImage final IplImage image = cvLoadImage("img1.png"); System.out.println("Image loaded"); //create canvas frame named 'Demo' final CanvasFrame canvas = new CanvasFrame("Demo"); //Show image in canvas frame canvas.showImage(image); //This will close canvas frame on exit canvas.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE); } } I receive this console output: Image loaded Exception in thread "main" java.lang.NullPointerException at com.googlecode.javacv.CanvasFrame.showImage(CanvasFrame.java:366) at com.googlecode.javacv.CanvasFrame.showImage(CanvasFrame.java:363) at demo1.main(demo1.java:14) I installed OpenCV-2.4.6.0 in my win7 64bit pc extracted it in C:\opencv, set the environment variable PATH as C:\opencv\build\x64\vc11\bin, i've got the Java SDK, downloaded javacv (source and cppjars) and added all opencv and javacv jar files in the Eclipse project. The img1.png is in the same folder of demo1.java What is the problem? Thank you for help. try System.out.println("Image loaded" + image); // and see,that it wasn't loaded. try an absolute path, like "c:/my/folder/img1.png" It worked, thank you very much! can you say me something about this error? I think is the same problem but the image should be passed like configuration argument and i tried a path like "c:/my/folder/img1.png" but still not works. OK, i fixed the other one too.
https://answers.opencv.org/question/21511/javacv-nullpointerexception-loading-image/
CC-MAIN-2020-40
en
refinedweb
Understanding the Windows Server Essentials SDK Published: December 17, 2010 Updated: October 17, 2013 Applies To: Windows Home Server 2011, Windows Server 2012 Essentials, Windows Small Business Server 2011 Essentials, Windows Storage Server 2008 R2 Essentials The Windows Server Essentials Software Development Kit (SDK) helps you understand the concepts of creating add-ins that extend the functionality of features in Windows Server 2012 R2 Essentials. These features are also included in the Windows Server Essentials role in Windows Server 2012 R2. Target Platforms for the Windows Server Essentials SDK The Windows Server 2012 R2 Essentials SDK targets two platforms: the stand-alone Windows Server 2012 R2 Essentials, and the Windows Server Essentials Experience role in Windows Server 2012 R2. - Windows 2012 R2 Essentials is built upon both on-premises and online technologies to deliver a best-in-class solution for small businesses. Out of the box, Windows 2012 R2 Essentials provides small businesses with simple file sharing, remote access, and an end-to-end view of the health of the network. Windows 2012 R2 Essentials also contains application add-in management and a product-wide extensibility model so that new services, such as e-mail, collaboration, and communication can be seamlessly and easily added to the solution. - Windows Server Essentials Experience is a role built into Windows Server 2012 R2. The Windows Server Essentials role allows you to create a server that duplicates all of the functionality of Windows 2012 R2 Essentials. Generally, this role is used by a 3rd party hosting service to lease out a virtual server to a small business customer, who needs only one server for their entire business. Windows Server Essentials SDK The SDK provides the following content: - How-to information that helps you understand how to build add-ins - Templates that provide a strong base for building add-ins - Samples that provide examples of complete add-ins, including an end-to-end sample that demonstrates how all of the extensible features of the servers can be connected - API references that help you understand how to use the elements to extend and manage Windows Server 2012 Essentials Initialization and the GAC (Global Asembly Cache) Some assemblies in previous versions, as well as potentially some dependencies of the SDK assemblies, are not put in the GAC. This can cause a problem for applications that are running in directories other than the directory that the assemblies are running in. To solve this issue, an application must initialize the Windows Server Essentials environment. This will allow the application to correctly resolve any dependencies on non-GACed assemblies. The call to do this is as follows. This call must be made before any other calls into Windows Server Essentials SDK binaries are made. Additionally, it must be made in a method that does not, itself, call into other Windows Server Essentials SDK assemblies. If your initialization routine also initializes other Windows Server Essentials SDK components, move their initialization into a separate method. For example: Extending features The following table lists the features can be extended by using the concepts described in the SDK: Managing features You can manage features by using a set of API elements that are included in the SDK. The following namespaces can be used to manage features: Creating and deploying an add-in package After you build your add in, you can package it, so that the operating system will recognize its file name extension and content, which enables the add-in to be deployed. To distribute your add-in, you must create an add-in package that is a cabinet (CAB) file, which contains Windows Installer packages, an Addin.xml file for metadata, and an optional .rtf or text file for an End-User License Agreement. The CAB file must use a .wssx extension. For more information about packaging an add-in, see Creating and Deploying an Add-In Package. Connecting extensible features The following steps describe how the extensible features work together to manage user information: - In the Dashboard, an add-in is used to display user data and tasks in a Users tab. The add-in does not provide business logic or state information, but it does enable the administrator to manage the Users feature through a user interface. - The Dashboard add-in communicates with the object model of the Users Provider. The object model in the provider layer enables the Users feature to be managed by using or not using a user interface. The Users feature can be managed directly by using API elements. - The object model makes communication with the Users Provider easier. For example, requests through the object model can initiate communication with Active Directory Domain Services or other services to provision and manage users on the native sub-system. The provider can also communicate with online services and can communicate with other providers that contain user specific data for a specific workload or application. - A user alerts file is created that lists the events or alerts that the Alerts Provider will identify. The Alerts Provider uses this information to raise user specific alerts if they are encountered on the system. Using templates and samples To help you develop an add-in, templates are provided for the following features: - Dashboard - Health and alerts - Provider Framework - Remote Web Access - Web API To help you understand how to develop an add-in, code examples are provided in the how-to sections and stand-alone samples that use the provided templates are provided. Samples are provided for the following features: - Dashboard and Dashboard Homepage - Dashboard home page - Launchpad - Provider - Hosted Email Add-in Using the Windows 2008 R2 SDK You can also take advantage of the Microsoft Windows SDK for Windows Server 2008 and the .NET Framework 4. For more information, see Windows SDK (). Development Environment To develop and build add-ins, you should have the following: - Development computer that should be running a 64-bit version of Windows Vista, Windows 7, or Windows 8 - Visual Studio 2010 or Visual Studio 2012 - The downloaded and installed SDK - A server that is running Windows 2012 Essentials - A client computer for connecting to the server Getting the Current Version Information The manner in which version information is obtained depends on whether the code is being executed on a server or client. - Server Installation - On a server installation, the server name and SKU can be obtained by calling the GetProductInfo Windows API function. This function updates a pointer to a DWORD with the product type information. The following table lists the possible values in the context of this topic. - Client Installation - On a client installation, the caller must examine the Windows Registry key located at HKLM\SOFTWARE\Microsoft\WindowsServer. This key contains a value called ServerSku, which is a DWORD and represents the SKU. The following table lists the possible values in the context of this topic. Identifying the Server Name On the client, developers can view the HKLM\SOFTWARE\Microsoft\Windows Server registry key.That key contains a registry value called ServerName that identifies the server name.
http://msdn.microsoft.com/en-us/library/gg513958.aspx
CC-MAIN-2013-48
en
refinedweb
Spring Security 3 Formats: save 15%! save 37%! - Make your web applications impenetrable. - Implement authentication and authorization of users. - Integrate Spring Security 3 with common external security providers. - Packed full with concrete, simple, and concise examples. Book DetailsLanguage : English Paperback : 396 pages [ 235mm x 191mm ] Release Date : May 2010 ISBN : 1847199747 ISBN 13 : 9781847199744 Author(s) : Peter Mularien Topics and Technologies : All Books, Application Development, Security and Testing, Java, Open Source Table of Contents Preface Chapter 1: Anatomy of an Unsafe Application Chapter 2: Getting Started with Spring Security Chapter 3: Enhancing the User Experience Chapter 4: Securing Credential Storage Chapter 5: Fine-Grained Access Control Chapter 6: Advanced Configuration and Extension Chapter 7: Access Control Lists Chapter 8: Opening up to OpenID Chapter 9: LDAP Directory Services Chapter 10: Single Sign On with Central Authentication Service Chapter 11: Client Certificate Authentication Chapter 12: Spring Security Extensions Chapter 13: Migration to Spring Security 3 Appendix: Additional Reference Material Index - Chapter 1: Anatomy of an Unsafe Application - Security audit - About the sample application - The JBCP pets application architecture - Application technology - Reviewing the audit results - Authentication - Authorization - Database Credential Security - Sensitive Information - Transport-Level Protection - Using Spring Security 3 to address security concerns - Why Spring Security? - Summary - Chapter 2: Getting Started with Spring Security - Core security concepts - Authentication - Authorization - Securing our application in three easy steps - Implementing a Spring Security XML configuration file - Adding the Spring DelegatingFilterProxy to your web.xml file - Adding the Spring Security XML configuration file reference to web.xml - Mind the gaps! - Common problems - Security is complicated: The architecture of secured web requests - How requests are processed? - What does auto-config do behind the scenes? - How users are authenticated? - What is spring_security_login and how did we get here? - Where do the user's credentials get validated? - When good authentication goes bad? - How requests are authorized? - Configuration of access decision aggregation - Access configuration using spring expression language - Summary - Chapter 3: Enhancing the User Experience - Customizing the login page - Implementing a custom login page - Implementing the login controller - Adding the login JSP - Configuring Spring Security to use our Spring MVC login page - Understanding logout functionality - Adding a Log Out link to the site header - How logout works - Changing the logout URL - Logout configuration directives - Implementing the remember me option - How remember me works - Is remember me secure? - Authorization rules differentiating remembered and fully authenticated sessions - Building an IP-aware remember me service - Customizing the remember me signature - Implementing password change management - Extending the in-memory credential store to support password change - Extending InMemoryDaoImpl with InMemoryChangePasswordDaoImpl - Configuring Spring Security to use InMemoryChangePasswordDaoImpl - Building a change password page - Adding a change password handler to AccountController - Exercise notes - Summary - Chapter 4: Securing Credential Storage - Database-backed authentication with Spring Security - Configuring a database-resident authentication store - Creating the default Spring Security schema - Configuring the HSQL embedded database - Configuring JdbcDaoImpl authentication store - Adding user definitions to the schema - How database-backed authentication works - Implementing a custom JDBC UserDetailsService - Creating a custom JDBC UserDetailsService class - Adding a Spring Bean declaration for the custom UserDetailsService - Out of the box JDBC-based user management - Advanced configuration of JdbcDaoImpl - Configuring group-based authorization - Configuring JdbcDaoImpl to use groups - Modifying the initial load SQL script - Modifying the embedded database creation declaration - Using a legacy or custom schema with database-resident authentication - Determining the correct JDBC SQL queries - Configuring the JdbcDaoImpl to use customSQL queries - Configuring secure passwords - Configuring password encoding - Configuring the PasswordEncoder - Configuring the AuthenticationProvider - Writing the database bootstrap password encoder - Configuring the bootstrap password encoder - Would you like some salt with that password? - Configuring a salted password - Declaring the SaltSource Spring bean - Wiring the PasswordEncoder to the SaltSource - Augmenting DatabasePasswordSecurerBean - Enhancing the change password functionality - Configuring a custom salt source - Extending the database schema - Tweaking configuration of the CustomJdbcDaoImpl UserDetails service - Overriding the baseline UserDetails implementation - Extending the functionality of CustomJdbcDaoImpl - Moving remember me to the database - Configuring database-resident remember me tokens - Adding SQL to create the remember me schema - Adding new SQL script to the embedded database declaration - Configuring remember me services to persist to the database - Are database-backed persistent tokens more secure? - Securing your site with SSL - Setting up Apache Tomcat for SSL - Generating a server key store - Configuring Tomcat's SSL Connector - Automatically securing portions of the site - Secure port mapping - Summary - Chapter 5: Fine-Grained Access Control - Re-thinking application functionality and security - Planning for application security - Planning user roles - Planning page-level security - Methods of Fine-Grained authorization - Using Spring Security Tag Library to conditionally render content - Conditional rendering based on URL access rules - Conditional rendering based on Spring EL Expressions - Conditionally rendering the Spring Security 2 way - Using controller logic to conditionally render content - Adding conditional display of the Log In link - Populating model data based on user credentials - What is the best way to configure in-page authorization? - Securing the business tier - The basics of securing business methods - Adding @PreAuthorize method annotation - Instructing Spring Security to use method annotations - Validating method security - Several flavors of method security - JSR-250 compliant standardized rules - Method security using Spring's @Secured annotation - Method security rules using Aspect Oriented Programming - Comparing method authorization types - How does method security work? - Advanced method security - Method security rules using bean decorators - Method security rules incorporating method parameters - How method parameter binding works - Securing method data through Role-based filtering - Adding Role-based data filtering with @PostFilter - Pre-filtering collections with method @PreFilter - Why use a @PreFilter at all? - A fair warning about method security - Summary - Chapter 6: Advanced Configuration and Extension - Writing a custom security filter - IP filtering at the servlet filter level - Writing our custom servlet filter - Configuring the IP servlet filter - Adding the IP servlet filter to the Spring Security filter chain - Writing a custom AuthenticationProvider - Implementing simple single sign-on with an AuthenticationProvider - Customizing the authentication token - Writing the request header processing servlet filter - Writing the request header AuthenticationProvider - Combining AuthenticationProviders - Simulating single sign-on with request headers - Considerations when writing a custom AuthenticationProvider - Session management and concurrency - Configuring session fixation protection - Understanding session fixation attacks - Preventing session fixation attacks with Spring Security - Simulating a session fixation attack - Comparing session-fixation-protection options - Enhancing user protection with concurrent session control - Configuring concurrent session control - Understanding concurrent session control - Testing concurrent session control - Configuring expired session redirect - Other benefits of concurrent session control - Displaying a count of active users - Displaying information about all users - Understanding and configuring exception handling - Configuring "Access Denied" handling - Configuring an "Access Denied" destination URL - Adding controller handling of AccessDeniedException - Writing the Access Denied page - What causes an AccessDeniedException - The importance of the AuthenticationEntryPoint - Configuring Spring Security infrastructure beans manually - A high level overview of Spring Security bean dependencies - Reconfiguring the web application - Configuring a minimal Spring Security environment - Configuring a minimal servlet filter set - Configuring a minimal supporting object set - Advanced Spring Security bean-based configuration - Adjusting factors related to session lifecycle - Manual configuration of other common services - Declaring remaining missing filters - LogoutFilter - ExceptionTranslationFilter - Explicit configuration of the SpEL expression evaluator and Voter - Bean-based configuration of method security - Wrapping up explicit configuration - Which type of configuration should I choose? - Authentication event handling - Configuring an authentication event listener - Declaring required bean dependencies - Building a custom application event listener - Out of the box ApplicationListeners - Multitudes of application events - Building a custom implementation of an SpEL expression handler - Summary - Chapter 7: Access Control Lists - Using Access Control Lists for business object security - Access Control Lists in Spring Security - Basic configuration of Spring Security ACL support - Defining a simple target scenario - Adding ACL tables to the HSQL database - Configuring the Access Decision Manager - Configuring supporting ACL beans - Creating a simple ACL entry - Advanced ACL topics - How permissions work - Custom ACL permission declaration - ACL-Enabling your JSPs with the Spring Security JSP tag library - Spring Expression Language support for ACLs - Mutable ACLs and authorization - Configuring a Spring transaction manager - Interacting with the JdbcMutableAclService - Ehcache ACL caching - Configuring Ehcache ACL caching - How Spring ACL uses Ehcache - Considerations for a typical ACL deployment - About ACL scalability and performance modelling - Do not discount custom development costs - Should I use Spring Security ACL? - Summary - Chapter 8: Opening up to OpenID - The promising world of OpenID - Enabling OpenID authentication with Spring Security - Writing an OpenID login form - Configuring OpenID support in Spring Security - Adding OpenID users - The OpenID user registration problem - How OpenID identifiers are resolved - Implementing user registration with OpenID - Adding the OpenID registration option - Differentiating between a login and registration request - Configuring a custom authentication failure handler - Adding the OpenID registration functionality to the controller - Attribute Exchange - Enabling AX in Spring Security OpenID - Real-world AX support and limitations - Google OpenID support - Is OpenID secure? - Summary - Chapter 9: LDAP Directory Services - Understanding LDAP - LDAP - Common LDAP attribute names - Running an embedded LDAP server - Configuring basic LDAP integration - Configuring an LDAP server reference - Enabling the LDAP AuthenticationProvider - Troubleshooting embedded LDAP - Understanding how Spring LDAP authentication works - Authenticating user credentials - Determining user role membership - Mapping additional attributes of UserDetails - Advanced LDAP configuration - Sample JBCP LDAP users - Password comparison versus Bind authentication - Configuring basic password comparison - LDAP password encoding and storage - The drawbacks of a Password Comparison Authenticator - Configuring the UserDetailsContextMapper - Implicit configuration of a UserDetailsContextMapper - Viewing additional user details - Using an alternate password attribute - Using LDAP as a UserDetailsService - Notes about remember me with an LDAP UserDetailsService - Configuration for an In-Memory remember me service - Integrating with an external LDAP server - Explicit LDAP bean configuration - Configuring an external LDAP server reference - Configuring an LdapAuthenticationProvider - Integrating with Microsoft Active Directory via LDAP - Delegating role discovery to a UserDetailsService - Summary - Chapter 10: Single Sign On with Central Authentication Service - Introducing Central Authentication Service - High level CAS authentication flow - Spring Security and CAS - CAS installation and configuration - Configuring basic CAS integration - Adding the CasAuthenticationEntryPoint - Enabling CAS ticket verification - Proving authenticity with the CasAuthenticationProvider - Advanced CAS configuration - Retrieval of attributes from CAS assertion - How CAS internal authentication works - Configuring CAS to connect to our embedded LDAP server - Getting UserDetails from a CAS assertion - Examining the CAS assertion - Mapping LDAP attributes to CAS attributes - Finally, returning the attributes in the CAS assertion - Alternative Ticket authentication using SAML 1.1 - How is Attribute Retrieval useful? - Additional CAS capabilities - Summary - Chapter 11: Client Certificate Authentication - How Client Certificate authentication works - Setting up a Client Certificate authentication infrastructure - Understanding the purpose of a public key infrastructure - Creating a client certificate key pair - Configuring the Tomcat trust store - Importing the certificate key pair into a browser - Using Firefox - Using Internet Explorer - Wrapping up testing - Troubleshooting Client Certificate authentication - Configuring Client Certificate authentication in Spring Security - Configuring Client Certificate authentication using the security namespace - How Spring Security uses certificate information - How Spring Security certificate authentication works - Other loose ends - Supporting Dual-Mode authentication - Configuring Client Certificate authentication using Spring Beans - Additional capabilities of bean-based configuration - Considerations when implementing Client Certificate authentication - Summary - Chapter 12: Spring Security Extensions - Spring Security Extensions - A primer on Kerberos and SPNEGO authentication - Kerberos authentication in Spring Security - Overall Kerberos Spring Security authentication flow - Getting prepared - Assumptions for our examples - Creating a keytab file - Configuring Kerberos-related Spring beans - Wiring SPNEGO beans to the security namespace - Adding the Application Server machine to a Kerberos realm - Special considerations for Firefox users - Troubleshooting - Verifying connectivity with standard tools - Enabling Java GSS-API debugging - Other troubleshooting steps - Configuring LDAP UserDetailsService with Kerberos - Using form login with Kerberos - Summary - Chapter 13: Migration to Spring Security 3 - Migrating from Spring Security 2 - Enhancements in Spring Security 3 - Changes to configuration in Spring Security 3 - Rearranged AuthenticationManager configuration - New configuration syntax for session management options - Changes to custom filter configuration - Changes to CustomAfterInvocationProvider - Minor configuration changes - Changes to packages and classes - Summary - Appendix: Additional Reference Material - Getting started with JBCP Pets sample code - Available application events - Spring Security virtual URLs - Method security explicit bean configuration - Logical filter names migration reference This is an excellent book, well written, up-to-date, complete, with relevant examples and code. Spring Security 3 deserves a place in your library regardless of your level of involvement in developing web applications. The book Spring Security 3 will be a good resource for you if you are looking for a security solutions based on Spring Security Dec 2012 Errata type: Typo | Page number: 28 and 36 <filterclass> should be: <filter-class> Sample chapters You can view our sample chapters and prefaces of this title on PacktLib or download sample chapters in PDF format. What you will learn from this book -. Approach The book starts by teaching the basic fundamentals of Spring Security 3 such as setup and configuration. Later it looks at more advanced topics showing the reader how to solve complex real world security issues. Who this book is for This book is for Java developers who build web projects and applications. The book assumes basic familiarity with Java, XML and the Spring Framework. Newcomers to Spring Security will still be able to utilize all aspects of this book.
http://www.packtpub.com/spring-security-3/book?tag=vf/tcl-abr1/0610
CC-MAIN-2013-48
en
refinedweb
This article has been excerpted from book "The Complete Visual C# Programmer's Guide" from the Authors of C# Corner.Another very useful class, FileSystemWatcher, acts as a watchdog for file system changes and raises an event when a change occurs. You must specify a directory to be monitored. The class can monitor changes to subdirectories and files within the specified directory. If you have Windows 2000, you can even monitor a remote system for changes. (Only remote machines running Windows NT or Windows 2000 are supported at present.) The option to monitor files with specific extensions can be set using the Filter property of the FileSystemWatcher class. You can also fine-tune FileSystemWatcher to monitor any change in file Attributes, LastAccess, LastWrite, Security, and Size data. The FileSystemWatcher class raises the events described in Table 6.3. Table 6.3: FileSystemWatcher events Listing 6.6 illustrates the use of the FileSystemWatcher class to capture changes to files and directories and report them on the console screen.Listing 6.6: Using the FileSystemWatcher Classusing System;using System.IO;public class FileWatcher{ public static void Main(string[] args) { // If a directory is not specified, exit program. if (args.Length != 1) { // Display the proper way to call the program. Console.WriteLine("Usage: FileWatcher.exe <directory>"); return; } try { // Create a new FileSystemWatcher and set its properties. FileSystemWatcher watcher = new FileSystemWatcher(); watcher.Path = args[0]; // Watch both files and subdirectories. watcher.IncludeSubdirectories = true; // Watch for all changes specified in the NotifyFilters //enumeration. watcher.NotifyFilter = NotifyFilters.Attributes | NotifyFilters.CreationTime | NotifyFilters.DirectoryName | NotifyFilters.FileName | NotifyFilters.LastAccess | NotifyFilters.LastWrite | NotifyFilters.Security | NotifyFilters.Size; // Watch all files. watcher.Filter = "*.*"; // Add event handlers. watcher.Changed += new FileSystemEventHandler(OnChanged); watcher.Created += new FileSystemEventHandler(OnChanged); watcher.Deleted += new FileSystemEventHandler(OnChanged); watcher.Renamed += new RenamedEventHandler(OnRenamed); //Start monitoring. watcher.EnableRaisingEvents = true; //Do some changes now to the directory. //Create a DirectoryInfo object. DirectoryInfo d1 = new DirectoryInfo(args[0]); //Create a new subdirectory. d1.CreateSubdirectory("mydir"); //Create some subdirectories. d1.CreateSubdirectory("mydir1\\mydir2\\mydir3"); //Move the subdirectory "mydir3 " to "mydir\mydir3" Directory.Move(d1.FullName + "", d1.FullName + "\\mydir\\mydir3"); //Check if subdirectory "mydir1" exists. if (Directory.Exists(d1.FullName + "\\mydir1")) { //Delete the directory "mydir1" //I have also passed 'true' to allow recursive deletion of //any subdirectories or files in the directory "mydir1" Directory.Delete(d1.FullName + "\\mydir1", true); } //Get an array of all directories in the given path. DirectoryInfo[] d2 = d1.GetDirectories(); //Iterate over all directories in the d2 array. foreach (DirectoryInfo d in d2) { if (d.Name == "mydir") { //If "mydir" directory is found then delete it recursively. Directory.Delete(d.FullName, true); } } // Wait for user to quit program. Console.WriteLine("Press \'q\' to quit the sample."); Console.WriteLine(); //Make an infinite loop till 'q' is pressed. while (Console.Read() != 'q') ; } catch (IOException e) { Console.WriteLine("A Exception Occurred :" + e); } catch (Exception oe) { Console.WriteLine("An Exception Occurred :" + oe); } } // Define the event handlers. public static void OnChanged(object source, FileSystemEventArgs e) { // Specify what is done when a file is changed. Console.WriteLine("{0}, with path {1} has been {2}", e.Name, e.FullPath, e.ChangeType); } public static void OnRenamed(object source, RenamedEventArgs e) { // Specify what is done when a file is renamed. Console.WriteLine(" {0} renamed to {1}", e.OldFullPath, e.FullPath); }}The code starts with the parameter obtained from the user in the form of a path to a directory. Then it creates a FileSystemWatcher instance to monitor all changes in files and directories. Once the FileSystemWatcher is enabled, we are ready to make some changes. In this example, a file and few directories are created and later deleted. Figure 6.2 contains the output displayed on the console screen. Figure 6.2: Output of the FileWatcher class As you can see, the FileSystemWatcher captured every change made. If you start Windows Explorer and browse the directory you have provided as the parameter, you will find that every change you make in Explorer is reflected in the console. Since input is obtained from the user, chances are high that invalid arguments could raise exceptions; therefore, all the file manipulation code is placed within the try-catch block. IOException Class The IOException class is the base class for exceptions under the System.IO namespace. All other exceptions under this namespace-DirectoryNotFoundException, EndOfStreamException, FileNotFoundException, and FileLoadException-derive from the IOException class. I/O operations deal with various resources like hard disk, floppy disk, and network sockets, which have a greater chance of failing. Therefore, you should always encapsulate your I/O code within the trycatch block to intercept any exceptions that might occur. ConclusionHope this article would have helped you in understanding FileSystemWatcher and IOException Class in C#. See other articles on the website on .NET and C#. PageSetupDialog in C# ComboBox in C#
http://www.c-sharpcorner.com/uploadfile/puranindia/filesystemwatcher-in-C-Sharp/
CC-MAIN-2013-48
en
refinedweb
On Sun, Jan 08, 2006 at 01:06:18PM -0000, Brian Hulley wrote: > 5) We can get all the advantages of automatic namespace management the OOP > programmers take for granted, in functional programming, by using value > spaces as the analogue of objects, and can thereby get rid of complicated > import/export directives There is nothing complicated in Haskell's module system. It's very simple, explicit, independent from the type system and therefore easy to understand. "Verbose" or "low-level" would be better accusations. It seems that you want to introduce some kind of C++'s Koenig's lookup to Haskell. Is it your inspiration? For me, C++ doesn't seem to be a source of good ideas for programming language design ;-) Best regards Tomasz -- I am searching for programmers who are good at least in (Haskell || ML) && (Linux || FreeBSD || math) for work in Warsaw, Poland
http://www.haskell.org/pipermail/haskell-cafe/2006-January/013977.html
CC-MAIN-2013-48
en
refinedweb
19 November 2010 11:27 [Source: ICIS news] LONDON (ICIS)--La Seda de Barcelona has reached an agreement to sell its polyethylene terephthalate (PET) production plant at San Roque, ?xml:namespace> La Seda de Barcelona, which did not disclose the financial details of the deal, said that the sale process would be completed within a few weeks. The sale of the plant, which has a production capacity of 175,000 tonnes/year, is part of La Seda’s continuing effort to carry out a wide-ranging restructuring plan involving the divestment of non-strategic assets. With the acquisition, Cepsa Quimica would extend its presence in the polyester value chain, La Seda said. The San Roque plant was expected to be operational again in early 2011, La Seda added. La Seda has also been looking to sell two more of its PET assets in Europe following the 59% divestment of its facility at Sines, For more on PET
http://www.icis.com/Articles/2010/11/19/9412007/la-seda-de-barcelona-to-sell-spain-pet-plant-to-cepsa-quimica.html
CC-MAIN-2013-48
en
refinedweb
Correlation energies within the range-separated RPA¶ One of the less attractive features of calculating the electronic correlation energy within the random-phase approximation (RPA) is having to describe the \(1/r\) divergence of the Coulomb interaction. Describing this divergence in a plane-wave basis set requires in turn a large basis set for the response function \(\chi^{KS}(\omega)\), and this soon becomes very demanding on computational resources. The scheme proposed in Ref. 1 tries to avoid this problem by considering the RPA energy with an effective Coulomb interaction \(v^{LR}\), i.e. where: The error function \(\text{erf}(x)\) quickly goes to zero at the origin and tends to 1 at large \(x\). Thus \(v^{LR}\) is identical to the Coulomb interaction in the long-range (LR) limit, but goes smoothly to zero at small distance. The transition between long and short-range (SR) behaviour is governed by the range-separation parameter \(r_c\), which is chosen by the user. In the limit of very small \(r_c\) the full RPA is restored. The remaining problem is how to restore the SR part of the Coulomb interaction. The solution of Ref. 1 is to use a local-density approximation and the homogeneous electron gas, and write where The quantities \(\varepsilon_c^{RPA}(n)\) and \(\varepsilon_c^{LR}(n,r_c)\) are the correlation energies (normalized to the appropriate number of electrons) of the homogeneous electron gas (HEG), calculated with the full Coulomb interaction and with only the long-range part, respectively. The total correlation energy is evaluted as \(E_c^{LR-RPA} + E_c^{SR-RPA}\). Note that the quantity \(n^v(\vec{r})\) is the density of valence electrons, i.e. only the electrons which are used to construct \(\chi^{KS}\). In PAW language this quantity is the “all-electron valence density”. Of course it should be remembered that partitioning the correlation energy in this way will only yield the exact RPA result either in the limit of vanishing \(r_c\) or (trivially) for the HEG. Ref. 1 applies the scheme for a variety of systems. Here we focus on one example and calculate the correlation energy of bulk Si as a function of \(r_c\), and compare it to the full RPA result. Example 1: Correlation energy of silicon¶ The range-separated RPA calculations are performed in the same framework as the RPA, so it may also be useful to consult the tutorials Calculating RPA correlation energies. We start with a converged ground-state calculation to get the electronic wavefunctions: from ase.build import bulk from gpaw import GPAW, FermiDirac from gpaw.wavefunctions.pw import PW bulk_si = bulk('Si', a=5.42935602) calc = GPAW(mode=PW(400.0), xc='LDA', occupations=FermiDirac(width=0.01), kpts={'size': (4, 4, 4), 'gamma': True}, parallel={'domain': 1}, txt='si.gs.txt') bulk_si.set_calculator(calc) E_lda = bulk_si.get_potential_energy() calc.diagonalize_full_hamiltonian() calc.write('si.lda_wfcs.gpw', mode='all') This calculation will take about a minute on a single CPU. Now we use the following script to get the RPA correlation in the range-separated approach, using a number of different values for \(r_c\). For the values of the plane-wave cutoff and number of bands used to evaluate \(\chi^{KS}\), we use the values reported in Ref. 1: from gpaw.xc.fxc import FXCCorrelation from ase.units import Hartree from ase.parallel import paropen resultfile = paropen('range_results.dat', 'w') # Standard RPA result resultfile.write(str(0.0) + ' ' + str(-12.250) + '\n') # Suggested parameters from Bruneval, PRL 108, 256403 (2012) rc_list = [0.5, 1.0, 2.0, 3.0, 4.0] cutoff_list = [11.0, 5.0, 2.25, 0.75, 0.75] nbnd_list = [500, 200, 100, 50, 40] for rc, ec, nbnd in zip(rc_list, cutoff_list, nbnd_list): fxc = FXCCorrelation('si.lda_wfcs.gpw', xc='range_RPA', txt='si_range.' + str(rc) + '.txt', range_rc=rc) E_i = fxc.calculate(ecut=[ec * Hartree], nbands=nbnd) resultfile.write(str(rc) + ' ' + str(E_i[0]) + '\n') This script should take about 10 minutes when run on 8 CPUs. If you look in one of the output files, e.g. si_range.4.0.txt, you should find the line Short range correlation energy/unit cell = -11.7496 eV which is \(E_c^{SR-RPA}\). The code then reports the total RPA energy \(E_c^{LR-RPA} + E_c^{SR-RPA}\) at the end of the file. Below we plot these numbers, and compare to the RPA energy calculated in the standard approach (\(r_c=0\)). The same plot is reported in 1 (Fig. 1). One can see that for \(r_c<2\), there is pretty good agreement between the range-separated and standard approaches. The difference is, the range-separated approach requires less computational firepower (e.g. a cutoff of 80 eV at \(r_c=2\), compared to 400 eV for the standard approach). We end with the reminder that there is no such thing as a free lunch, and this scheme requires careful testing on a system-by-system basis. As its name suggests, \(r_c\) is a parameter; larger values allow faster convergence, but reduced accuracy. Also, the construction of the all-electron valence density on the grid can sometimes throw up problems, so you are strongly advised to check that the Density integrates to XXX electrons line in the output file delivers the expected number of valence electrons.
https://wiki.fysik.dtu.dk/gpaw/tutorials/rangerpa/rangerpa_tut.html
CC-MAIN-2020-05
en
refinedweb
There are enum objects in QC/Lean that I would like to know how to print. For instance, I would like to get the OrderEvent.Status Enum name. Here's the public OrderStatus Enum members:public enum OrderStatus : System.Enum MemberDescriptionCanceledOrder cancelled before it was filledCancelPendingOrder waiting for confirmation of cancellationFilledCompleted, Filled, In Market Order.InvalidOrder invalidated before it hit the market (e.g. insufficient capital)..NewNew order pre-submission to the order processor.NoneNo Order State YetPartiallyFilledPartially filled, In Market Order.SubmittedOrder submitted to the market I've tried the following to see the status of an order event: def OnOrderEvent(self, order_event): self.Log(str(order_event.Status)) # <-- Output: 3 What is returned is an integer, probably representing the index for a member element. I would imagine there is some way to pass that into a Enum object to get the name (e.g. self.OrderStatus[order_event.Status]). But I have had no luck finding anything like that. How might one go about accessing the Enum members given an index?
https://www.quantconnect.com/forum/discussion/2872/print-enum-name/
CC-MAIN-2020-05
en
refinedweb
Cookie in ASP.NET Core In this article we will demonstrate working with cookies in Asp.Net Core. Putting away and recovering little snippets of data in treats a typical prerequisite in many web applications. This article clarifies with a case how ASP.NET Core manages treats. You will figure out how to peruse and compose treats utilizing ASP.NET Core. You will likewise figure out how to design the treat properties, for example, termination time. Let’s Get Started: - Open Visual Studio 2015 Update 3 or Higher version. - File >> New Project. - Choose Asp.Net Core Web Application. Provide a suitable name for the project. - Click on OK. - Choose Web Application from the template. - Click OK. This will take a few seconds and then our project will be created. Now let’s go to Home Controller and add code in it. Create an action named Write Cookies and add some codes in it as shown in the code snippet"); } Use the Namespace Microsoft.AspNetCore.Http to avoid the errors. The WriteCookies() technique gets three parameters to be specific setting, settingValue and isPersistent through model official. The setting paremeter will be the esteem chosen in the dropdown list. The settingValue parameter will be the esteem entered in the textbox and isPersistent will show whether isPersistent checkbox is checked or not. Inside, the code checks the estimation of isPersistent boolean parameter. On the off chance that it is genuine a question of CookieOptions class (Microsoft.AspNetCore.Http namespace) is made. The CookieOptions class enables you to indicate the properties of the treat, for example, the accompanying: - Domain (domain associated with the cookie) - Expires (when the cookie should expire) - Path (Specify path where cookie is applicable) - Secure (Cookie is sent only on https channel) - HttpOnly (client side script can’t access the cookie) In the above illustration, you set the Expires property of the cookie to one day from now. Along these lines the cookie will be held on the customer machine for one day. At that point a cookie is composed to the reaction utilizing the Append() technique for the Cookies gathering. The Append() technique acknowledges key, esteem and CookieOptions protest. The else square of the code essentially affixes a cookie by setting its key and esteem (no CookieOptions protest is passed). A ViewBag message shows to the client that the inclination is put away effectively. Now in the Index View of Home Controller, write following mark-up as shown in the code snippet below: <h1>Specify your preferences :</h1> <form asp- <select name="setting"> <option value="fontName">Font Name</option> <option value="fontSize">Font Size</option> <option value="color">Color</option> </select> <input type="text" name="settingValue" /> <input type="checkbox" name="isPersistent" value="true" /> Is Persistent? <input type="submit" value="Write Cookie" /> </form> <h4>@ViewBag.Message</h4> <h4> <a asp- Read Cookies </a> </h4> The index view renders a form and input elements. It also outputs the Message property from ViewBag. A Read Cookies kink takes the user to a test page where preferences are applied. Now let’s add another action in HomeController as shown in the code snippet below:. Now let’s create a view with the same name as the action in the Controller. - Right Click on Home folder from the Views. - Add New Item. - Select MVC View Page. - Provide a name for the View ‘ReadCookies’. Now add following markups shown in the code snippet. Now build and run the application Provide the inputs and click on write Cookie: Now click on Read Cookies: Then run the application again and repeat the process by checking the Is Persistent checkbox. If you observe the browser's cookie cache you will find the three cookies there: Also Read: How to perform CRUD Operation in ASP.NET Core MVC with ADO.NET? - How to use ADO.NET in .NET Core 2.x for Performance Critical Applications - How to Set Connection String in Production Application for best Performance - AspNet MVC Core 2.0 Model validations with custom attribute - How to Upload File in ASP.NET Core with ViewModel? - How to read appSettings JSON from Class Library in ASP.NET Core? - 6 - 0 - 0
https://www.ttmind.com/techpost/Cookie-in-ASP-NET-Core-2-0
CC-MAIN-2020-05
en
refinedweb
Creating a printer friendly version of your MCMS postings. In Visual Studio.NET, open your MCMS application. Create a new User control and name it PrintFriendly.ascx. Drag a HyperLink control from the Web Forms toolbox. Set the Text property of the HyperLink control to "Print Friendly Version". Set the ID property of the HyperLink Control to "lnkPrintFriendly". Switch to your code view and add the following using directives using Microsoft.ContentManagement.Publishing; using Microsoft.ContentManagement.Common; In the Page_Load Event, add the following code: Posting curPosting = CmsHttpContext.Current.Posting; if(CmsHttpContext.Current.Mode == PublishingMode.Published) { lnkPrintFriendly.Visible = true; lnkPrintFriendly.NavigateUrl = "PrintFriendly.aspx?" + curPosting.Guid; lnkPrintFriendly.Target = "_Blank"; } else { lnkPrintFriendly.Visible = false; } Click Save.. In Visual Studio.NET, open your MCMS application. Create a new web form and name it PrintFriendly.aspx. Create a table that is 650 pixels wide and centered on the page. Drag a Label Control inside the table and name it "lblPostingDisplayName". This control will render the posting's Display Name property. Drag a Literal Control inside the table and name it "litPostingContent". This control will render the content of the placeholders. Your HTML Code should look something like this: <%@ Page language="c#" Codebehind="PrintFriendly.aspx.cs" AutoEventWireup="false" Inherits="CMSPrintFriendly.PrintFriendly" %> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" > <HTML> <HEAD> <title>PrintFriendly</title> <meta name="GENERATOR" Content="Microsoft Visual Studio .NET 7.1"> <meta name="CODE_LANGUAGE" Content="C#"> <meta name="vs_defaultClientScript" content="JavaScript"> <meta name="vs_targetSchema" content=""> </HEAD> <body> <form id="frmPrintFriendly" method="post" runat="server"> <P> <TABLE align="center" id="tblContent" cellSpacing="1" cellPadding="1" width="650" border="0"> <TR> <TD> <P> <asp:Label</asp:Label></P> <P><FONT face="Verdana" size="2"> <asp:Literal</asp:Literal></FONT></P> </TD> </TR> </TABLE> </P> </form> </body> </HTML> Switch to your code view and add the following using directives: using Microsoft.ContentManagement.Publishing; using Microsoft.ContentManagement.Common; In the Page_Load Event, add the following code: try { // Retrieve the Posting Display Name string strGuid = Request.QueryString[0].ToString(); Posting curPosting = (Posting)CmsHttpContext.Current.Searches.GetByGuid(strGuid); lblPostingDisplayName.Text = curPosting.DisplayName; // Retrieve placeholder data for every placeholder (seperated with HTML breaks). PlaceholderCollection colPlaceholders = curPosting.Placeholders; foreach(Placeholder pH in colPlaceholders) { litPostingContent.Text = pH.Datasource.RawContent.ToString(); litPostingContent.Text += "<br><br>"; } } catch { // Generate a generic error message if it fails litPostingContent.Text = "Error: There was a problem obtaining the content for this page."; } Rebuild your solution. Implementing the Control The last thing to do will be to implement the control. - Open an existing MCMS template for your MCMS application. - Drag the PrintFriendly User Control on to your template file. - Save this template. At this point, you can now go view a posting (or create a new one) and click on the link which will generate a printer friendly version in a new browser window. This posting is provided "AS IS" with no warranties, and confers no rights.
https://docs.microsoft.com/en-us/archive/blogs/luke/creating-a-printer-friendly-version-of-your-mcms-postings-2
CC-MAIN-2020-05
en
refinedweb
Flutter Location Plugin This plugin for Flutter handles getting location on Android and iOS. It also provides callbacks when location is changed. :sparkles: New experimental feature :sparkles: To get location updates even your app is closed, you can see this wiki post. Getting Started Android In order to use this plugin in Android, you have to add this permission in AndroidManifest.xml : <uses-permission android: Update your gradle.properties file with this: android.enableJetifier=true android.useAndroidX=true org.gradle.jvmargs=-Xmx1536M Please also make sure that you have those dependencies in your build.gradle: dependencies { classpath 'com.android.tools.build:gradle:3.3.0' classpath 'com.google.gms:google-services:4.2.0' } ... compileSdkVersion 28 iOS And to use it in iOS, you have to add this permission in Info.plist : NSLocationWhenInUseUsageDescription NSLocationAlwaysUsageDescription Warning: there is a currently a bug in iOS simulator in which you have to manually select a Location several in order for the Simulator to actually send data. Please keep that in mind when testing in iOS simulator. Example App The example app uses Google Maps Flutter Plugin, add your API Key in the AndroidManifest.xml and in AppDelegate.m to use the Google Maps plugin. Sample Code Then you just have to import the package with import 'package:location/location.dart'; Look into the example for utilisation, but a basic implementation can be done like this for a one time location : var currentLocation = LocationData; var location = new Location(); // Platform messages may fail, so we use a try/catch PlatformException. try { currentLocation = await location.getLocation(); } on PlatformException catch (e) { if (e.code == 'PERMISSION_DENIED') { error = 'Permission denied'; } currentLocation = null; } You can also get continuous callbacks when your position is changing: var location = new Location(); location.onLocationChanged().listen((LocationData currentLocation) { print(currentLocation.latitude); print(currentLocation.longitude); }); Public Method Summary In this table you can find the different functions exposed by this plugin: You should try to manage permission manually with requestPermission() to avoid error, but plugin will try handle some cases for you. Objects class LocationData { final double latitude; // Latitude, in degrees final double longitude; // Longitude, in degrees final double accuracy; // Estimated horizontal accuracy of this location, radial, in meters final double altitude; // In meters above the WGS 84 reference ellipsoid final double speed; // In meters/second final double speedAccuracy; // In meters/second, always 0 on iOS final double heading; //Heading is the horizontal direction of travel of this device, in degrees final double time; //timestamp of the LocationData } enum LocationAccuracy { POWERSAVE, // To request best accuracy possible with zero additional power consumption, LOW, // To request "city" level accuracy BALANCED, // To request "block" level accuracy HIGH, // To request the most accurate locations available NAVIGATION // To request location for navigation usage (affect only iOS) } Note: you can convert the timestamp into a DateTime with: DateTime.fromMillisecondsSinceEpoch(locationData.time.toInt()) Feedback Please feel free to give me any feedback helping support this plugin !.
https://pub.dev/documentation/location_jamel/latest/
CC-MAIN-2020-05
en
refinedweb
gpg:: ScorePage:: ScorePageToken #include <score_page.h> A data structure that is a nearly-opaque type representing a query for a ScorePage (or is empty). Summary ScorePageToken is used in various Leaderboard functions that allow paging through pages of scores. Tokens created by this function will always start at the beginning of the requested range. The client may obtain a token either from a Leaderboard, in which case it represents a query for the initial page of results for that query, or from a previously-obtained ScorePage, in which case it represents a continuation (paging) of that query. Public functions ScorePageToken ScorePageToken() ScorePageToken ScorePageToken( std::shared_ptr< const ScorePageTokenImpl > impl ) Explicit constructor. ScorePageToken ScorePageToken( const ScorePageToken & copy_from ) Copy constructor for copying an existing score page token into a new one. ScorePageToken ScorePageToken( ScorePageToken && move_from ) Constructor for moving an existing score page token into a new one. r-value-reference version. Valid bool Valid() const Returns true when the returned score page token is populated with data and is accompanied by a successful response status; false for an unpopulated user-created token or for a populated one accompanied by an unsuccessful response status. It must be true for the getter functions on this token (LeaderboardId, Start, etc.) to be usable. operator= ScorePageToken & operator=( const ScorePageToken & copy_from ) Assignment operator for assigning this score page token's value from another score page token. operator= ScorePageToken & operator=( ScorePageToken && move_from ) Assignment operator for assigning this score page token's value from another score page token. r-value-reference version. ~ScorePageToken ~ScorePageToken()
https://developers.google.com/games/services/cpp/api/class/gpg/score-page/score-page-token
CC-MAIN-2020-05
en
refinedweb
Python package for cointegration analysis. Project description CointAnalysis Python library for cointegration analysis. Features - Carry out cointegration test - Evaluate spread between cointegrated time-series - Generate cointegrated time-series artificially - Based on scikit-learn API Installation $ pip install cointanalysis What is cointegration? See Hamilton's book. How to use Let us see how the main class CointAnalysis works using two ETFs, HYG and BKLN, as examples. Since they are both connected with liabilities of low-rated companies, these prices behave quite similarly. Cointegration test The method test carries out a cointegration test. The following code gives p-value for null-hypothesis that there is no cointegration. from cointanalysis import CointAnalysis hyg = ... # Fetch historical price of high-yield bond ETF bkln = ... # Fetch historical price of bank loan ETF X = np.array([hyg, bkln]).T coint = CointAnalysis() coint.test(X) coint.pvalue_ # 0.0055 The test have rejected the null-hypothesis by p-value of 0.55%, which implies cointegration. Get spread The method fit finds the cointegration equation. coint = CointAnalysis().fit(X) coint.coef_ # np.array([-0.18 1.]) coint.mean_ # 7.00 coint.std_ # 0.15 This means that spread "-0.18 HYG + BKLN" has the mean 7.00 and standard deviation 0.15. In fact, the prices adjusted with these parameters clarifies the similarities of these ETFs: The time-series of spread is obtained by applying the method transform subsequently. The mean and the standard deviation are automatically adjusted (unless you pass parameters asking not to). spread = coint.transform(X) # returns (-0.18 * hyg + 1. * bkln - 7.00) / 0.15 spread = coint.transform(X, adjust_mean=False, adjust_std=False) # returns -0.18 * hyg + 1. * bkln The method fit_transform carries out fit and transform at once. spread = coint.fit_transform(X) The result looks like this: Acknowledgements References Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/cointanalysis/
CC-MAIN-2020-05
en
refinedweb
In general CUDA does not allow dynamic initialization of global device-side variables except for records with empty constructors as described in section E.2.3.1 of CUDA 7.5 Programming guide: device, constant and shared variables defined in namespace scope, that are of class type, cannot have a non-empty constructor or a non-empty destructor. A constructor for a class type is considered empty at a point in the translation unit, if it is either a trivial constructor or it satisfies all of the following conditions: - The constructor function has been defined. - The constructor function has no parameters, the initializer list is empty and the function body is an empty compound statement. - Its class has no virtual functions and no virtual base classes. - The default constructors of all base classes of its class can be considered empty. - For all the nonstatic data members of its class that are of class type (or array thereof), the default constructors can be considered empty. Clang is already enforcing no-initializers for shared variables, but currently allows dynamic initialization for device and constant variables. This patch applies initializer checks for all device-side variables. Empty constructors are accepted, but no code is generated for them.
https://reviews.llvm.org/D15305?id=45044
CC-MAIN-2020-05
en
refinedweb
YZ L4,962 Points Why is this wrong? CHALLENGE: Now update Hand in hands.py. I'm going to use code similar to Hand.roll(2) and I want to get back an instance of Hand with two D20s rolled in it. I should then be able to call .total on the instance to get the total of the two dice. I'll leave the implementation of all of that up to you. I don't care how you do it, I only care that it works., sides=20): super().__init__() self.sides = sides import dice class Hand(list): def roll(x): for _ in range(x): return(D20.value) @property def total(self): return sum(self) 1 Answer Chris FreemanTreehouse Moderator 58,942 Points You're off to a good start but there are issues: - need to decorate rollwith @classmethod - need to include clsas the first parameter in a classmethod - need to create an instance of Handto return. Use self = cls() - with selfas the instance, you can append dice.D20()instances within loop - need to use dice.D20()instead of D20()since only dicewas imported returnneeds to be outside of the forloop or only first pass is run before returning - return selfas the created instance. Post back if you need more help. Good luck!!
https://teamtreehouse.com/community/why-is-this-wrong-54
CC-MAIN-2020-05
en
refinedweb
Python alternatives for PHP functions Do you know a Python replacement for PHP's session_set_save_handler ? Write it! open($save_path, $session_name){ global $sess_save_path; $sess_save_path = $save_path; return(true);}function close(){ return(true);}function read($id){ global $sess_save_path; $sess_file = "$sess_save_path/sess_$id"; return (string) @file_get_contents($sess_file);}function write($id, $sess_data){ global $sess_save_path; $sess_file = "$sess_save_path/sess_$id"; if ($fp = @fopen($sess_file, "w")) { $return = fwrite($fp, $sess_data); fclose($fp); return $return; } else { return(false); }}function destroy($id){ global $sess_save_path; $sess_file = "$sess_save_path/sess_$id"; return(@unlink($sess_file));}function gc($maxlifetime){ global $sess_save_path; foreach (glob("$sess_save_path/sess_*") as $filename) { if (filemtime($filename) + $maxlifetime < time()) { @unlink($filename); } } return true;}session_set_save_handler("open", "close", "read", "write", "destroy", "gc");session_start();// proceed to use sessions normally?> As of PHP 5.0.5 the write and().
http://www.php2python.com/wiki/function.session-set-save-handler/
CC-MAIN-2020-05
en
refinedweb
Open module Popup This action will allow you to manually trigger an open popup action for the specified module(Form/TabsPro).. - Select Module. Select the module that you wish to open in popup (Form or TabsPro based on the action selected: Open Action Form Popup, Open TabsPro Popup). - QueryString Parameters. Optionally you can pass parameters through querystrings that can then be used in the module that is currently being opened in popup; their values can be referenced as tokens with ‘QueryString:’ namespace. Using the javascript API you can open Action Form in popup by calling the next javascript method dnnsf.api.actionForm.openPopupById(‘1234’, {‘param’:’valueofparam’,’param2’:’valueofparam2’},true) First parameter is required and is the module id of the Action Form The second parameter is optional and it is a JS object. After the Action Form init the values can be used by calling the QueryString token (eg. [QueryString:param]). The third parameter is optional and tells Action Form if the module should be reinitialized (refreshed). This can be used when you want to refresh the form so it can use the values from the second parameters. Show Condition and Enable Condition are refreshed as well. Default is false.
https://docs.dnnsharp.com/actions/dnn-sharp/open-module-popup.html
CC-MAIN-2020-05
en
refinedweb
. Python Testing with pytest: Simple, Rapid, Effective, and Scalable by Brian Okken Python Testing with pytest Simple, Rapid, Effective, and Scalable by Brian Okken Customer Reviews I found Python Testing with pytest to be an eminently usable introductory guidebook to the pytest testing framework. It is already paying dividends for me at my company. - Chris Shaver VP of Product, Uprising Technology Systematic software testing, especially in the Python community, is often either completely overlooked or done in an ad hoc way. Many Python programmers are completely unaware of the existence of pytest. Brian Okken takes the trouble to show that software testing with pytest is easy, natural, and even exciting. - Dmitry Zinoviev Author of "Data Science Essentials in Python" This book is the missing chapter absent from every comprehensive Python book. - Frank Ruiz Principal Site Reliability Engineer, Box, Inc. About this Title Pages: 214 Published: 2017-09-13 Release: P2.0 (2018-11-27) ISBN: 978-1-68050-240. Author Q&A Q: I’m new to Python (or I’ve just built my first app) and I know I should include testing. Will this book help get me started? A: Yes. Although the goal of the book is to teach you how to effectively and efficiently use pytest, it does it in the context of a working application. Throughout the book I discuss various testing topics that relate to my philosophy of testing. Although it isn’t a book intended to teach you all you need to know about test strategy, there is some of that in there. I don’t assume much Python and/or testing experience. Experienced folks won’t get bored, either. I’ve had people tell me that they’ve been testing for years with pytest and realized while reading the book many ways to improve their testing. Q: What is a test framework? The book refers to pytest as a testing framework. What does that mean? A: pytest is a software test framework, which means pytest is a command-line tool that automatically finds tests you’ve written, runs the tests, and reports the results. It has a library of goodies that you can use in your tests to help you test more effectively. It can be extended by writing plugins or installing third-party plugins. It can be used to test Python distributions. And it integrates easily with other tools like continuous integration and web automation. Q: What makes pytest stand out above other test frameworks? A: Here are a few of the reasons pytest stands out: - Simple tests are simple to write in pytest. - Complex tests are still simple to write. - Tests are easy to read. - You can get started in seconds. - You use assert to fail a test, not things like self.assertEqual() or self.assertLessThan(). Just assert. - You can use pytest to run tests written for unittest or nose. - It is being actively developed and maintained by a passionate and growing community. - It’s so extensible and flexible that it will easily fit into your workflow. - Because it’s installed separately from your Python version, you can use the same latest version of pytest on legacy Python 2 (2.6 and above) and Python 3 (3.3 and above). - The pytest fixture model simplifies the workflow of writing setup and teardown code. This is an incredible understatement. pytest fixtures will change the way you think about and write tests, making them more maintainable, more robust, and easier to read. After you use pytest fixtures for a while, you’ll never want to go back to writing tests without them. Q: My application is very different than the example application in the book. Will I still benefit from it? A: Yes. I chose an example application that has a lot in common with many other types of applications. It has: - A main user interface that is inconvenient to test against. - Several layers of abstraction. - Intermediate data types that are used for communication between components. - A database data store. Specifically, it’s a command-line application called `tasks` that is usable as a shared to-do list for a small team. Although the specifics of this application might not be that similar to your own project, the overall structure in the bullet points above share testing problems with many other projects. Q: Does the code work with 2.7 and 3.x? A: Yes. However, some of the example code uses the Python 3 style print function, `print(‘something’)`. To run this code in Python 2.7, you’ll need to add `from future import print_function` to the top of those files. Q: Can I test web applications with pytest? A: Yes. pytest is being used to test any type of web application from the outside with the help of Selenium, Requests, and other web-interaction libraries. For internal testing, pytest been used by with Django, Flask, Pyramid, and other frameworks. What You Need The examples in this book were written using Python 3.6 and pytest 3.2. pytest 3.2 supports Python 2.6, 2.7, and Python 3.3+. Resources Contents & Extracts - Acknowledgments - What Is pytest? - Learn pytest While Testing an Example Application - How This Book Is Organized - What You Need to Know - Example Code and Online Resources - Getting Started with pytest - Getting pytest - Running pytest - Running Only One Test - Using Options - Exercises - What’s Next - Writing Test Functions - Testing a Package - Using assert Statements - Expecting Exceptions - Marking Test Functions - Skipping Tests - Marking Tests as Expecting to Fail - Running a Subset of Tests - Parametrized Testing - Exercises - What’s Next - pytest Fixtures excerpt - Sharing Fixtures Through conftest.py - Using Fixtures for Setup and Teardown - Tracing Fixture Execution with –setup-show - Using Fixtures for Test Data - Using Multiple Fixtures - Specifying Fixture Scope - Specifying Fixtures with usefixtures - Using autouse for Fixtures That Always Get Used - Renaming Fixtures - Parametrizing Fixtures - Exercises - What’s Next - Builtin Fixtures - Using tmpdir and tmpdir_factory - Using pytestconfig - Using cache - Using capsys - Using monkeypatch - Using doctest_namespace - Using recwarn - Exercises - What’s Next - Plugins - Configuration - Understanding pytest Configuration Files - Changing the Default Command-Line Options - Registering Markers to Avoid Marker Typos - Requiring a Minimum pytest Version - Stopping pytest from Looking in the Wrong Places - Specifying Test Directory Locations - Changing Test Discovery Rules - Disallowing XPASS - Avoiding Filename Collisions - Exercises - What’s Next - Using pytest with Other Tools - pdb: Debugging Test Failures - Coverage.py: Determining How Much Code Is Tested - mock: Swapping Out Part of the System - tox: Testing Multiple Configurations - Jenkins CI: Automating Your Automated Tests - unittest: Running Legacy Tests with pytest - Exercises - What’s Next - Virtual Environments - pip - Plugin Sampler Pack - Plugins That Change the Normal Test Run Flow - Plugins That Alter or Enhance Output - Plugins for Static Analysis - Plugins for Web Development - Packaging and Distributing Python Projects - Creating an Installable Module - Creating an Installable Package - Creating a Source Distribution and Wheel - Creating a PyPI-Installable Package - xUnit Fixtures - Syntax of xUnit Fixtures - Mixing pytest Fixtures and xUnit Fixtures - Limitations of xUnit Fixtures Author Brian Okken is a lead software engineer with two decades of R&D experience developing test and measurement instruments. He hosts the Test & Code podcast and co-hosts the Python Bytes podcast.
https://pragprog.com/book/bopytest/python-testing-with-pytest
CC-MAIN-2020-05
en
refinedweb
I made a simple app to test the PythonEvaluate functionality of Rhino Compute. You can write Rhino Common code in the text field and have it evaluated and rendered in THREE.js You can create geometry and return as Brep, you can also reference geometry from viewport for further manipulation. It’s just a test, so of course very buggy and limited. In-browser python interpreter I made a simple app to test the PythonEvaluate functionality of Rhino Compute. Errr so anybody with access to the app can execute arbitrary Python code on your server? Nice work @mkarimi! We have another user that is struggling to use rhino3dm with React. If you’re willing (and it’s up to you!) could you share some tips with them? @Dancergraham we have a couple of things in place to mitigate malicious code execution in our own implementation of the Compute server. Firstly the Compute service runs as a non-administrator and secondly there are some restrictions as to what python code you can run on the server (note that the python endpoint has since been moved out of the Compute project into the IronPython plug-in). Good idea! Maybe add eval to the list of badwords and correct the typo in GetProperty ? It may also be worth checking out the bandit package on PyPi (from OpenStack) for other potential security issues with Python. It is widely used and recommended. -Graham We love pull requests! Edit: actually I’m not sure that’s a typo. I think it’s supposed to cover both GetProperty() and GetProperties(). Absolutely @will, happy to share details. Working in React importing Rhino3dm package like import * from 'rhino' just doesn’t work. It’ll be much cleaner if that worked, but there are work arounds. Here’s what I ended up doing, you basically have to make a script tag just like plain html and reference that. One way to do that in React is: const fetchJsFromCDN = (src, key) => { return new Promise((resolve, reject) => { const script = document.createElement("script"); script.src = src; script.addEventListener("load", () => { resolve(window[key]); }); script.addEventListener("error", reject); document.body.appendChild(script); }); }; Which can be called like: var rhinoProm = fetchJsFromCDN( "", ["rhino3dm"] ); var computeProm = fetchJsFromCDN( "", ["RhinoCompute"] ); var modules = await Promise.all([rhinoProm, computeProm]); var rhino3dm = modules[0] var compute = modules[1] No that you have your modules resolved. We need to attach them to our window so we can reference them elsewhere rhino3dm().then(async rh => { window.rhino3dm = rh; }); compute.authToken = compute.getAuthToken(); window.compute = compute rhino3dm and compute modules can be referenced anywhere using window.rhino3dm or window.compute I tried forking the repo but can’t find that code in the main branch of my fork …? EDIT: Found it but it tells me I must be on a branch to edit…? @Dancergraham I’m sorry, despite having written it shortly before in a post above, I forgot that the code is no longer in the open source Compute repository… @stevebaer, what do you think about adding eval to the list of banned words? I’m still trying to figure out how eval could be used in a malicious way with the other restrictions in place. I’m concerned about malicious scripts, but also don’t want to limit people. This is really just a prototype server and we do log who is calling compute if we end up with someone making a mess of things. The compute server already supports calls for solving grasshopper definitions and you could include custom scripts in those definitions. The compute server can also be taken down at any time and replaced with a fresh clone. eval Is similar to exec and returns a value. Both are generally considered to be major security risks as a way of bypassing other safeguards, but I’m no security expert so I’m just passing on what I’ve heard. A hint on how we solve these kind of problems at ShapeDiver: We implemented a manual approval process for scripts contained in Grasshopper models (C#, VB, Python). Whenever a model gets uploaded containing a script which has not been approved or denied before, we are notified by our backend system and review the script before approving or denying it. This allows us to ensure security and reliability of the backend systems shared by our customers. I’ve just seen some discussion of this from someone who knows Python very well… not sure how this fits in with your implementation…
https://discourse.mcneel.com/t/in-browser-python-interpreter/92055
CC-MAIN-2020-05
en
refinedweb
EEx v1.9.4 EEx.Engine behaviour View Source Basic EEx engine that ships with Elixir. An engine needs to implement all callbacks below. An engine may also use EEx.Engine to get the default behaviour but this is not advised. In such cases, if any of the callbacks are overridden, they must call super() to delegate to the underlying EEx.Engine. Link to this section Summary Functions Handles assigns in quoted expressions. Callbacks Invoked at the beginning of every nesting. Called at the end of every template. Invokes at the end of a nesting. Called for the dynamic/code parts of a template. Called for the text/static parts of a template. Called at the beginning of every template. Link to this section Types state()View Source Link to this section Functions handle_assign(arg)View Source Handles assigns in quoted expressions. A warning will be printed on missing assigns. Future versions will raise. This can be added to any custom engine by invoking handle_assign/1 with Macro.prewalk/2: def handle_expr(state, token, expr) do expr = Macro.prewalk(expr, &EEx.Engine.handle_assign/1) super(state, token, expr) end Link to this section Callbacks handle_begin(state)View Source Invoked at the beginning of every nesting. It must return a new state that is used only inside the nesting. Once the nesting terminates, the current state is resumed. handle_body(state)View Source Called at the end of every template. It must return Elixir's quoted expressions for the template. handle_end(state)View Source Invokes at the end of a nesting. It must return Elixir's quoted expressions for the nesting. handle_expr(state, marker, expr)View Source Called for the dynamic/code parts of a template. The marker is what follows exactly after <%. For example, <% foo %> has an empty marker, but <%= foo %> has "=" as marker. The allowed markers so far are: "" "=" "/" "|" Markers "/" and "|" are only for use in custom EEx engines and are not implemented by default. Using them without an appropriate implementation raises EEx.SyntaxError. It must return the updated state. handle_text(state, text)View Source Called for the text/static parts of a template. It must return the updated state. init(opts)View Source Called at the beginning of every template. It must return the initial state.
https://hexdocs.pm/eex/EEx.Engine.html
CC-MAIN-2020-05
en
refinedweb
The following form allows you to view linux man pages. #include <numaif.h> int get_mempolicy(int *mode, unsigned long *nodemask, unsigned long maxnode, unsigned long addr, unsigned long flags); Link with -lnuma. get_mempolicy() retrieves the NUMA policy of the calling process or of a memory address, depending on the setting of flags. A NUMA machine has different memory controllers with different dis- tances to specific CPUs. The memory policy defines from which node memory is allocated for the process. If flags is specified as 0, then information about the calling pro- cess's default policy (as set by set_mempolicy(2)) is returned. The policy returned [mode and nodemask] may be used to restore the pro- cess pol- icy multi- ple of sizeof(unsigned long). If flags specifies both MPOL_F_NODE and MPOL_F_ADDR, get_mempolicy() will return the node ID of the node on which the address addr is allo- cated. On success, get_mempolicy() returns 0; on error, -1 is returned and errno is set to indicate the error..) The get_mempolicy() system call was added to the Linux kernel in ver- sion 2.6.7. This system call is Linux-specific. For information on library support, see numa(7). getcpu(2), mbind(2), mmap(2), set_mempolicy(2), numa(3), numa(7), numactl(8) webmaster@linuxguruz.com
http://www.linuxguruz.com/man-pages/get_mempolicy/
CC-MAIN-2017-43
en
refinedweb
A creator for readers. More... #include <yarp/os/PortReaderCreator.h> A creator for readers. This is used when you want a Port to create a different reader for every input connection it receives. This is a very quick way to make a multi-threaded server that keeps track of which input is which. Inherit from this class, defining the PortReaderCreator::create method. Then pass an instance to Port::setReaderCreator. The create() method will be called every time the Port receives a new connection, and all input coming in via that connection will be channeled appropriately. Definition at line 32 of file PortReaderCreator.h. Destructor. Definition at line 10 of file PortReaderCreator.cpp. Factory for PortReader objects. Implemented in yarp::name::NameServerManager.
http://yarp.it/classyarp_1_1os_1_1PortReaderCreator.html
CC-MAIN-2017-43
en
refinedweb
Omikron: The Nomad Soul FAQ/Walkthrough by Stu_Pidd Version: FINAL | Updated: 11/30/08 | Search Guide | Bookmark Guide OMIKRON: the Nomad Soul _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ *_*_*_*_*_*_*_*_*_*_*_*_*_*_*_*_*_*_* Version 1.06 - FINAL Written by Stu Pidd Copyright(c) 2005-2008 This faq is ONLY for personal use. Currently, this faq can be found on GameFaqs.com GameSpot.com Since this is the last update, I don't care what happens, so long as it helps as many people possible, you can put it on your site, no catch. Times played through to make this faq: 6 Times played through in life: *that infinity symbol* Quickest Time: Can't remember, but about 2 hours 30 mins !!!!!!!!!!!!!!!! Version Updates: Version 1.00 - 6/6/06 - Completed FAQ and posted it Version 1.01 - 6/17/06 - Fixed numerous things throughout the FAQ - Add more to credits - fixed up table of contents - added alternate route to archives room Version 1.02 - Nov -> Dec 2006 - Added Frequently Asked Questions section - Added some new stuff to side quests - Fixed and added some little things throughout the walkthrough - Added an item list Version 1.03 - Jan 2007 - Fixed up some of the many millions of little mistakes - Added more the FAQ - Added and fixed numerous things in the walkthrough - Completed the item list (I think) - Added Bugs and Glitch section Version 1.04 - Aug 2007 - A bunch of little screw-ups fixed everywhere - Added a bit to the spells - Added the entire storyline section - fixed up some things in the FAQ - Fixed some typos (still many more though) Version 1.05 - Mar -> May 2008 - Added DarkSecret's images - I added the "Removed Things" section - Fixed up on some loose spots - Added to the FAQ Version 1.06 - Nov -> Dec 2008 *FINAL UPDATE* - Fixed MANY little errors - Added MANY little things here and there - Added a bit more to the "Basics" section - Got rid of a lot of things such as contacts (its the final update, afterall) - Fixed up the ToC !!!!!!!!!!!!!!!! -=Table of Contents=- Introduction.....................................!@#$% Basics (includes control and how to do stuff)....@#$%! Walkthrough......................................#$%!@ Complete Storyline...............................%$$%# Side Quests......................................$%!@# Character List...................................%$#@! Weapon & Spell List..............................%$#@# Items List.......................................@#$$% Glitches & Bugs..................................!@!#$ Frequently Asked Questions.......................!@#$$ Removed Things...................................#!$@@ Credits..........................................%!@#$ NOTE: Press ctrl and F and type the symbols to the right of the sections in the table of contents so you can be sent to the section automatically. ############################################ !@#$% Introduction ############################################ I have been playing Omikron for a long time now (at least 8 years) and I have beaten it countless times. I know the game at the back of my head (nearly) and I have decided to put it to use. I have looked all over the net for an faq but most of them look like they got a monkey on a typewriter to do it so now I'm going to put one up. This game's getting a little old and is really hard to find (ebay would be the best place to find it if you ask me). Although it isn't too popular and is as old as the hills, I enjoy it along with much others. Also, this is THE final update so nothing will be updated from here on. If you have further question, the gamefaqs.com and gamespot.com forums are the way to go and the people there, I'm sure, would happily answer your questions. ############################################ @#$%! Basics ############################################ Here's a list on all the essentials needed to know before playing the game. ============================================ System Requirements (PC) ============================================ Minimum Requirements: PII 233 MHz Processor Windows 95 / Windows 98 32 MB RAM 4MB Direct X 6.1 Compliant Video Card 8X CD-ROM Drive 100% DirectX 6.1 Compliant Sound Card DirectX 6.1 350 MB of Uncompressed Hard Drive Storage Keyboard and Mouse Recommended Requirements: PII 300 MHz Processor 64 MB RAM 8 MB Direct3D or 3Dfx compatible 3D Accelerator Card 16X CD ROM ============================================ Controls ============================================ NOTE: This is from the PC version booklet. Adventure/menu mode Move/cursor Arrow keys Action button Enter Cancel action/previous menu Spacebar First person view L Run Shift Open 'SNEAK' TAB Strafe/half turn Ctrl Swim faster Shift Dive (water) Ctrl Jump Spacebar Shooter mode Move Arrow keys Fire/shoot Shift or left mouse button Action/use/open Enter Jump Spacebar or right mouse button Crouch Right ctrl Change weapon Alt Look around Number pad arrow keys Fighting mode Move forwards/backwards Right/left arrow keys Strafe End/Delete Jump Up arrow key Crouch Down arrow key Punch 1 Q Punch 2 W Kick 1 A Kick 2 S NOTE: There are other 'special moves' involving various combinations of keys to be discovered. ============================================ Sneak ============================================ The sneak is practically your inventory. It holds your personal ID and attributes, slider location, map, seteks (money), rings, items, important memories and settings. The ID shows a rotating picture of yourself and information such as age, occupation, DOB etc. Attributes are your main stats. It shows your HP/health/energy, mana, fighting experience, attack, dodge, body resistance/defense and speed/agility. These can be changed with potions and various other items. Slider locations are the most difficult to understand. Sliders are vehicles and you can call them like taxis. To call one go to call slider. You'll see it come and then you can pick manual or automatic. Manual lets you drive it by yourself (hard as hell) but automatic lets you choose a bunch of locations that you need to go to or have already visited. When the slider stops you must get in with the action button. Map, seteks, rings and items are on the default screen. On the right is a list of up to 18 items that you have picked up during your adventures. Seteks are the currency used in Omikron. They're white and black triangle things and are found quite commonly. You can spend them at any shop or even give it to the poor if they ask. Rings will allow you to save and give you advice. I won't reveal too much about them as it's apart of the storyline. The map is quite useful and shows a detailed road and path map and also has where you are and other locations. There is a map for each district and only shows them as you enter them. Important memories will tell you stuff that someone has told you before. These normally tell what you must do and/or a little bit of help. You can get advise about these from save points by using (a lot of rings). The settings is exactly what it means. You can adjust the sounds, visuals, speed and other stuff but the default should be fine. You can also change the difficulty of both firearm and hand-to-hand combat, so if you want a challenge or you're stuggling, use this feature. ============================================ Fighting ============================================ Fighting is hand to hand combat with a human or monster. You can only get into fights if you are attacked. On the screen you'll see your HP on the left (blue) and the opponents (red). To win you must reduce the enemies HP. You can use the normal attacks or you can learn new ones from books found in stores or throughout the game. The damage you inflict is determined by your attack stat. Dodge is how fast you can evade enemies attacks. Body resistance is how much damage can be reduced from the enemies attacks. Speed is how fast you move altogether. Fight experience is a general description of your fighting skill. All of this (except HP and body resistance) can be improved by practice. If you are having trouble with a fighting part, there are a couple of things that you can do. 1) Start fight with maximum health (self explanatory). 2) Start fight with maximum attack. This increases the damage that you deal to you opponent. You can get attack potions from sorceries (take note that it will drop rapidly to a normal state if put right up to the top). 3) Start fight with maximum dodge. This increases both your speed and your ability to dodge attacks. You can get dodge potions from sorceries the same as fight potions and it will decrease, also. 4) Learn combos or develop a strategy. I will discuss them below. 5) Get more fighting experience. On the stats page in your sneak, it will tell you your current fight status. It can range from Initiative (crap) or right up to a Taar Expert. There is one other that is only possible with the character Kushulain called "Master of the Innervoice", which, I assume, is the best, even though you can't fight with him. This stat can be increased by simply fighting (the virtual fighter found in Kay'l's apartment is great for this) or you can find a Taar Stone that is a the sorcery shop. When fighting, you will see coloured smoke-like stuff appear as you hit the enemy or the enemy hits you. Red indicates that the person was hit and recieved a decent amount of damage. Blue means that the person blocked it but the still recieved damage (though very little). The aim, obviously, is to get the red. Different attacks can do more damage, and there are also combos that can really scrape down the HP. I'll go into more detail below. Fighting uses these controls on the PC: Jump Up Arrow Crouch Down Arrow Move Left Left Arrow Move Right Right Arrow Punch 1 Q Punch 2 W Kick 1 A Kick 2 S Attacking while crouched or in mid air (jumping) will create different attacks, some that do more damage than others. There are three heights, low, medium and high. The enemy is only generally blocking one, which gives you two more options, assuming you get to them quick enough. If they are standing, crouch down as you have a good chance to get them down. If they keep blocking you moves, try attacking at a different height. If they are nearly dead, just punch heaps or do a combo, even if they block it. Combos do lots of attacks, and, even though they block it, it still does damage. In the game, you have probably seen the enemy do some amazing attacks that you can never seem to do. These attacks are possible and are, actually, very easy to pull off, but they're not always the best tactic. You can find different combos by reading the numerous "Taar" books. In the books, they don't say "punch" or "kick", instead, they say A, B, C or D. This is what they indicate: Punch 1 A Punch 2 B Kick 1 C Kick 2 D For example, one combo is ABCD (its a pretty good one too). So you'd tap: Punch 1, Punch 2, Kick 1, Kick 2 -- in that order I won't list all the combos but some of the best ones I've come across are: ABCD ABAB CDCD If you want the coolest looking one, jump and press CD. Although these combos are impressive and can do a lot of damage, they are not always the best option. The aim of the fight is to make the enemy vulnerable, and this is done best when they are on the ground. If you can get a combo in there, by all means do, but otherwise just keep crouching and tapping D. ============================================ Shooting ============================================ Shooting once again involves your HP (on the left again). There can also be a HP bar on the other side in red, but only in 2 fights in the game. You can also use a variety of guns you have found throughout the game. All guns require ammo except the waver gun and the power rod. You can practice shooting at armories/gun shops. Simply go up to the guy at the counter and ask for a ticket. You can then use it to practice in the simulator rooms. During a shooting part, if you have medikits in your sneak and your health gets to a critical stage, you will automatically use them. Keep this in mind if you're having trouble at certain parts. There are other guns in the game that some may think "ooooh baby". These guns cost lots and so does their ammunition. Also, they're not that good at all. Your basic waver gun will do the job in all of the game's situations. If you are struggling, spend your money on medikits, not guns and ammo. Also, if you are fighting demons, only the power rod will do damage. So don't bother wasting 20 megazooker rockets just to see that it does nothing. And just so you know, the enemies in Hamastagn are NOT demons, hence the waver weapons will work (though they're pretty pointless at that point anyways). ============================================ Saving ============================================ To save you must find three floating interlinking rings. Use the action button and you'll be able to save. Every time you save it consumes one ring. If you have no rings you cannot save but that happens rarely. Also, in the save menu, there is another option where you can pay 5 rings to get an "answer" of something in your memory. For the love of god, don't bother with these; they don't tell you anything and its not worth the rings if you're struggling enough as it it. ============================================ Other ============================================ To use an elevator, walk in until you turn around and press the action button. If you need a key, find the control panel and use it on it. If you walk into an elevator and you don't turn around automatically, the elevator is not used for anything. The "use" command in the inventory allows you to use a key, give an item, eat food, drink drinks etc. The examine lets you look or read items, books, notes, maps, parchments etc. The "use on" allows you to put items together. This is used rarely (mostly for spells). If you are having trouble figuring out what to do, then try two things: USE YOUR BRAIN - Just like in real life. The puzzles in the game are different to the more traditional game puzzles and require you to think of an option that would work in reality, even if it is a bit farfetched. USE THE CHARACTERS BRAIN - Quite literally. In the sneak, there is an option called memory. This tells you things that different characters have recently told you and almost always point you in the right direction. After you run a long way, you will stop and puff for air. Before you stop, press the action button to not puff for air. If you have trouble understanding a concept of the game (eg politics, companies, weather, locations) talk to people on the street that are unoccupied. They can either be standing or sitting on a seat. ############################################ #$%!@ Walkthrough ############################################ This section is the main part of the faq. Here I will be explaining the everything to do with the storyline. This will not include sidequests but I will inform you when you can and where to go, etc. Throughout the walkthroughh you'll see notices. Here are what they are: NOTE: Just a reminder or something that you should take note of. HINT: Something is obvious to some and not to others that will help. SECRET: Something that can snag you awesome items or may be a glitch. Now just to start this off, at the main menu, go to new game and give yourself a name (it's just the name of your file). Then continue to start the game. Now for the walkthrough: ============================================ Anekbah ============================================ After the short scene you'll find yourself in an alley. Go out the only way possible (not too hard to find) and you'll come across some rings on the ground. Press enter twice when you're near them to get them. I'll explain these later on. Go on to exit the alley. Go right and you'll trigger the start credits. SECRET: Here is a little hint that is sure to help people who are sick of seeing the start credits. Instead of walking on the path/road you can do one of two things: You can hug (walk along) the wall to the right or left or you can call a slider from the top. Be sure not to go through that road by foot through the rest of the game. After the scene (or if you skipped it) you can go into your inventory and go to sliders. Pick the one that says Kay'l's apartment. A car should pop up and you can enter it with the enter button. You will then be taken straight to the apartment OR you can simply walk forwards to the left a little and you'll end up at the same place. Go through to see three elevators. Go in the middle or right one (as pointed our by Alexandros Ntzintsvelasvili) and face the controls to your right. Get into your inventory and use the apartment key. Now you should be taken up into the apartment. As you enter take a sharp left until you find a desk. Pressing the action button will open the draw to find 500 seteks (money). Now go and pick up the ring notice on the table in the center of the room. If you want some entertainment, go watch the blue hologram television thingy. After that, go to the other end of the room (little scene here) and look through the glass. You'll see a little lizard (koopie) and a T shaped thing. You'll notice a key (yes, that's a key) on it. Can't get it yet so don't bother. Now walk behind the wall and take the middle door. You'll see a light on the floor in this room. Walk up to it and a computer will drop down. Press the action button and do the first level. You'll have to fight a computer dude. It shouldn't be too hard if you know what you're doing. When (or if) you beat it, do the second and then the third. You should be at a better fighting level now. So now go through the other door in this room. Open the cupboard to find a medikit and a badge (be sure to examine the badge). Now go to the box and open it for a sleeping pill prescription. If you want go can go to the toilet in the next room and "let it all out" if you like, otherwise go through the door to the lounge room again. Go through the only door you haven't been through yet to find a small kitchen. Go to the far corner and you'll find three floating rings. You can use this to save. Now grab the newspaper on the bench and open the 2 cabinets in the central pillar to find some kloops beer, a pureed carmen and food for Koopies. NOTE: Be sure to read all the books, parchments and notices you find throughout the game as they may provide vital information and/or information that makes the game easier to understand (eg Politics). Walk out of this room to see a scene where you meet your wife. There's a long conversation and she walks into the bedroom. After the conversation you can pick up the gun on the ground. Go up to the koopie (lizard) and you'll see a hatch. Use the FOOD FOR KOOPIES on it and it'll get you that key (note that you DON'T need this but it'll snag you some extra items later on). Now go to the left to see a little computer in the wall. This is a transcan. Use this to rid of items from your inventory. Be sure to get the rings and look at the note in there. This next bit is optional. Go into the bedroom and Telis will walk out of the shower and go to the bed. Go to the opposite side of the bed to her and press the action button to see a "romantic but scary" scene. Once that is over you have nothing else to do in here. Now go back to the elevator (scene before you leave) and back to the street. Now make sure you've examined the badge you got from the cupboard and call a slider to go to the security HQ (or as I like to call the "cops"). You'll get dropped off in front of a large building. This is the HQ but don't enter yet. Turn around and go to the drug store/pharmacy across the road. Go up to the counter and get into your inventory. Use you sleeping pill prescription to get the sleeping drug. Now also in this shop you'll notice a lady that's different from everyone else. You can't talk to her but just look. I'll explain more about this later. Just leave and enter the big building from before. You'll need to use your badge on the thing on the left to get in. As you walk in you may see a scene (sometimes I don't see it). Go in the elevator and go to the -1 floor. Go through all the doors possible and you'll meet Tarek (he's pretty friendly... now) and Boog (can't do anything yet, just take note of him). You'll also see a different room behind a yellow door. Grab the flyer on the table and use the vending machine to the left. Buy a cup of koil (should have enough). Now (for the first time) go to USE ON with the cup of koil and then go to the sleeping drug to create a drugged cup of koil. You'll use this later. Now go to the floor -2. Go around and meet another guy but he's a bad ass (make's you wonder how he became a cop in the first place ;)). HINT: You can raid any of the rooms that are unlocked and are unoccupied. The places you should look are the desks, cupboards and draws (next to chair). Now go to Kay'ls room to find a save point. Look in the cupboard to find a medikit and a locked box. You can open it with the small key you got from the koopie in your apartment. Inside will be a short message. Now open the draw to find a 100 seteks and more. Now go to the blue computer on the other side and use the action button. Now go through all these documents but make sure you read the last one. You may also use the action button on the painting of the dude on the wall to learn a little. When you're done leave the room. Go to the elevator and you'll receive a message on your sneak. Now go to the -4 floor and go through the highly guarded room. Talk to the lady there to learn a little. After the conversation pick up the three items. You should get two notes and a key. Read the notes first and then leave the HQ the same way you entered. Now you can do the bar side quest (read the first side quest in the side quest section). Call a slider and go to Jenna's apartment. Go through the door and up the elevator the same way as Kay'l's. Now inside you'll find nothing unusual. Take whatever you want. In the next room you'll find that a built in cupboard won't open. If you go into the toilet you can get a key by pressing the action button. Now go up to it for a medikit and a small note (can someone tell me what the use of this note means, please). Now if you go to the first room and walk towards the thing against the wall and stand next to it, you'll notice there's a little room behind it. It took me ages to figure this out but all you have to do is use the action button on the lights to the right of it. It will open and you'll score yourself a decagun and a parchment to find out the truth. You can now leave and return to the HQ now. Go to the detention cells on -3 floor. Go through the red door and talk to the guard there. If you have your orders there he'll take you to Jenna. Follow him and talk to her. If you got the Parchment and the decagun you'll be able to ask extra question to find out she's obviously guilty. Now leave to get a message from Telis (she wants to see you at a restaurant. Return to captain Lea's office and talk to her. Say what you like but just so you know, you'll receive a decent item if you say she's innocent. She'll ask for a cup of koil so give her your drugged cup of koil and she'll fall asleep (you can give her as many non-drugged cups of koil if you stuff it up). Now raid her office to find her badge in the draw and a koopie sandwich (yes, the lizard). !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! I would like to thank Alexandros Ntzintsvelasvili for a different strategy to getting into the archives room: There is an alternative way to get into the Archives Room of the Security HQ. You don't necessarily have to drug capt. Lea. Instead, after you receive her orders, go to the Maintenance Workshop on Level -5. You'll see some Mecatechs repairing police sliders. They're kinda hostile if you talk with them and they won't allow you to explore the workshop. Notice a dozen of empty beer bottles lying around. This is a hint to help you figure out how to gain access to the rest of the workshop. ApproachCamir (red hair) and offer him a Kloops Beer (you should already have one from the kitchen of Kay'l's Apartment). He will be very glad and let you explore the workshop freely. There is a safe point and two doors. One of them leads to an office. Get inside and pick up "Mecaguard Specifications" from the desk. Read it to get some clues at what you have to do next. Check the safe on the wall for Camir's key and use it to unlock the cabinet. Inside you'll find "Omikron Police Memo - Archives" and the "OX2600 Receiver Kit". Now exit the office and use the other door. This door leads to some kind of Mecaguard storage and repair facility. As you enter to your left there are 4 big doors. Only Mecaguards can enter them. Examine them now so you know which leads where. In the right corner of the room there's some kind of small lift next to a Mecha. Enter it and push the panel to raise it. Turn left to face the Mechaguard and use the "OX2600" to install it. Now you need to find a control panel to operate it. 2 Magic rings are lying on the second floor next to some crates.Leave the Maintenance Workshop and go to Level -4 Surveillance Room. As you enter turn left and enter the door ahead. Remove the fuze from the machinery and get out. You'll see an angry operator bashing the control panel. Talk with him and he'll leave to call the Maintenance. The control panel is now accessible but it is not working. Return to the previous room and put the fuze back. Get back to the control panel and activate it. Now you're controlling the Meca that you inslalled the receiver to. Remember the 4 doors for Mechas. Enter the one that leads to the Archives (third from left). This initiates a shooting part. I can't give exact directions only some observations I've made. In the top right corner there is a radar which is quite helpful. The blue dot is you, the red is an active enemy Mech and the black dot is destroyed enemy Mech. There are some Regeneration Pods on the floor which restore your health as many times as you step on them(circle with a cross inside and a little brighter than the rest of the floortiles). Anyway, after you finish the shooting part the game returns to the player at the Surveillance Room. Now go to level -3 Archive Room. There are two doors. The door on the left leads to a vault with 1000 seteks inside. You can only enter this room if you passed the Mechaguard shooting , so I think this isa reward for choosing this path instead of drugging captain Lea. The other door leads to the Archives and itshould be open now and won't require having a Level 4 Badge. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Now go to -3 floor again and enter the other door. At the far end of the room is a mecaguard guarding a door. Use Lea's badge on the black thing to the left of it to open the door. Now walk up to the computer and press the action button. You must browse through the serial killer file to receive 2 new destinations: Anekbah morgue Qalisar strip club But first you must go to Telis at the restaurant. You can take a slider if you want but it's just around the block from the HQ if you feel like running. When you get there, go sit with her and you'll hear a conversation. After it she'll leave a talisman behind. Be sure you pick it up as you leave. Now you have to go to the supermarket. Take the slider and be sure you have 150+ health and at least a medium medikit if this is your first go. The decagun from earlier will also be handy. As you enter shoot the guy as he pops out. Go right now until you see some boxes behind a pillar. A guy will jump out so shoot him. Go to the corner to find another guy to your right. Now patrol these 2 isles until you kill everyone (be careful of the guy with a decagun). Soon 2 guys from behind boxes will shoot you. Finish them off and another guy will push some boxes to create a path. Kill him and go through. If you have a decagun or another gun, swap it now (your decagun should have 50+ ammo). Shoot the guy on the next isle and there are 2 behind the far one. Kill them and more will come out. Once all of them are dead you'll find a small medikit in one isle and a large medikit where you fought the 2 guys. Don't take the large medikit unless you are below 75 health (you have to estimate). Go through one of the isles to the back wall to see a civilian get shot. Shoot him and the the ones to the left. Grab the medium medikit and move on to a dead end. Two men are here with good octagons. Finish them off and return to get the large medikit (btw there is ammo in the corner there). Return to the dead end and walk towards the boxes to face the nastiest one yet. Kill him and a dude will tell you there's one left. Grab the medium medikit and use it if your health isn't on 200. SECRET: This is something that no one seems to know and it is obviously a glitch. If you walk towards the boxes where the last guy came out and press the action button, you'll pick up a waver gun named " ". Yes, it isn't named anything. You can't use it to make a double waver and you can't even use it as a waver. I suggest you don't get it, as you can't remove it from your inventory. I just thought you should know. Now walk through the door and get ready for a hand to hand combat. -------------------------------------------- BOSS: Thief leader HP: 150 Now for your first boss, he isn't easy. If you've been training with the simulator at your apartment, this will be ten times easier. This guy has little strength but good agility and defense. Your main attack should be crouching and using the main kick to get him on the ground. Wait about half a second and go at him again. Your main objective is to get and keep him on the ground. If you have good speed if should be easy and strength will finish him off quickly. -------------------------------------------- Pick up the potion next to the door and you can get 5 rings if you press the action button on him. Now you can leave (make sure you read what it says when you exit the shop). Now you have a choice in where you can go. You can go to the strip club or the morgue. I suggest the morgue as it is closest. Take a slider if you like but it is just to the left as you exit from the restaurant from before (on the turn). As you enter you'll have a vision. Speak to the lady at the counter and take the left door. Speak the meditech and return and take the right door. Take the first door on your right. On the right of the room is a body. Take the surgical instrument next to it (press the action button on him). Now you may grab the medikits on the shelves and then press the action button on the mutilated body. If you're on the right side it'll say there's something underneath one of her fingernails. Use the surgical instrument to get it out and take it. You may use the action button on her. Now leave and take the next door on your right to see a machine doing an "operation" on someone. Go over to the computer just to the left and go to Den. Now go pick up his sneak. As you walk of you'll get another vision. If you use the computer again and pull out number 2 you can get some rings. Now (optional) go to the machine over the far end and use the corpse sample on it. You'll receive some handy information. Now you can leave. You're next destination should be Qalisar. You can do one of three things: Go to the strip club Go to the dreamers concert (if you have examined flyer) Or go to Fuan's I suggest the concert. After that go to the strip club. Whichever way it goes you'll get dropped off at Harvey's bar (the concert). When you want to go to the strip club, go up to the balcony and run around until you see a shop with white walls (windows I think). Just to be sure, it has a diagonal hall. Inside you'll get another vision. Go down and there's a Yuki on the table to the left. Talk to the man at the counter and start to ask questions. Want you want to do in this conversation is to keep the conversation going, don't ask for drinks and don't bribe with seteks. Follow those steps and you'll find her for free. You may also find that intimidating him helps too. Go talk to her and then slowly follow her to her room. She'll babble on a bit and then go into her room to get the paper. You'll hear a scream. Now go up to the gray thing next to the door and use your waver on it. You'll go in and see someone run out (a cop?). No matter how quick yyou were, Anissa will be dead Press the action button on her twice to get her key. Then open the cupboard to find a medikit and Den's card (how'd she get it?). Now go over to the small table and press the action button to turn a switch and open a secret hatch. Use the small key on it to find the bit of paper. Read it if you like but it isn't going to make much sense. Now you can leave. If you like, there's a ring in the toilets. You can do a side quest here. Check out the HQ master key in the side quest section for details. Now go to Fuan's. It is at the end of one of the balconies and has a big sign saying "Fu-an's" and also has Khonsu posters on hit. Go talk to him and pay his fee if you can to receive Den's badge. NOTE: Use the sneak in front of Fuan to enable this conversation. Also, the fee is 400 seteks. Now that you have his badge and card, all you need now is his apartment key. You should now go back to the HQ (by slider unless you feel like running for an hour). Go to Den's office and open it with his badge. Raid the place to score Den's apartment key and a message that says "Truth hides behind the tiger...". You'll figure out what this means soon enough. Leave and you'll speak to Tarek (even if you haven't met him before). Now leave HQ and you'll receive a rather disturbing call from Telis. Now you MUST go to your apartment (pick up some medikits if you need them). When you get up the elevator you'll see a note on the floor. Examine it and then you might as well get rid of it. Make sure you save and leave through the elevator (make sure you have 150+ health). You'll end up on the roof top. Walk out and look down to see no sliders or pedestrians (scary). Now walk over to the bloody Telis three times to see her turn into the demon! -------------------------------------------- BOSS: Demon Telis HP: 150 This isn't too hard if you've got 200 health and good fighting stats. She has good agility and insane strength but lacks in defense. All you have to focus on is evading her attack and getting her low (I believe it's her weak spot). If low isn't working go all over until she's on the ground then use any attack to get her back down once she's up. Her HP will deplete quickly so it shouldn't take long especially if you have decent strength and 200 HP. -------------------------------------------- If you win, there's 5 rings for you and 1 hidden one in the corner. If you lose it's not over yet. You'll see an extra scene if you lose and you'll reincarnate into a thief. Exit through the elevator to meet an unknown person (look at their face closely and you may remember). After the conversation, leave and it's time to finally go to Den's apartment. But before you do there's something else you can do. Go to the side quest section under temple secret. So when you're done take the slider to Den's apartment. Use his key in the elevator to go up and walk into the bloody room. You may use the action button on the blood but make sure you have room in your inventory. Now go into the bedroom and pick up the wedding picture on the table. Open the cupboard to the reincarnation spell (if you didn't before) and HQ pass. Go into the shower and grab the key on the inside. You can use it to open the box back in the bedroom. You'll find a medikit, 200 seteks and a potion. Now go back to the bloody room. You'll see a picture of a tiger and a tiger statue in the room. If you read Den's note: "Truth hides behind the tiger", you'll realize you have to do something with them. Go to the left of the tiger and push (action button) him across to find a switch. You can then use this switch to move the tiger picture. Behind it is a safe. If you examine the wedding picture, you'll find the numbers: 7213. Use these on the safe to find a map and transcan tape. Go save if you like and then use the tape on the transcan (television thingy). You'll then see Den blabbering on and then there'll be a twist followed by another fight. -------------------------------------------- BOSS: Demon Tarek HP: 125 This is harder than the last demon but shouldn't be too hard. If you are the thief then you should've trained. This demon has average agility but great strength. It also lacks in defense if you ever get to him. As usual, get him to the ground and keep him that way. You may find that just going with fast punches all over is better but use your own technique. Don't worry if you get your ass kicked as its not game over yet. Just win or loose and enjoy the music. -------------------------------------------- If you loose a lady will run in and you'll turn into her. If you win then you get 5 rings. Now return to the HQ and on the left side (around the corner) is a supermarket. There is a man with an orange dress there. Use Telis' talisman on him and follow him. Go inside the building and talk to the guy at the counter. Then after that give him the corpse sample you found in the morgue. Then pick up the power rod and you can return to the HQ. NOTE: From now on if you walk into some apartment places you'll find demons. If you die against these there is no one to save you. Walk in at your own risk. You also can't go back to you're apartment or any other unless you have an apartment key. Here is a list of items you need before going into the HQ: Anissa's paper Power rod Den's map A few medikits (not small) You may want to look around the shop or some other shops and buy a few things or finish off any side quests you've missed so far as this is your last chance. When you're ready, go into the HQ and down to the bottom floor. Go into the ventilator room and take your first right into the control room. Go over to the other side (left of the window) and use the HQ key on the black thing to open it up. In this room there are 2 switches; one to stop the big propeller and one to open the window. The first switch is right in front of you and the second is on the right wall. After that go to the window and jump out into the water. Swim to find a little tunnel out and up onto land. Pick up the electric cable and leave through the door. Go back up to the control room and use the electric cable on the electric panel to stop the little propellers. After that, go down the hall and the door that just opened and you haven't been in. If you look at Den's map it will tell you to go to the right on but you might want to go to the left and straight ones first for items (altogether it's 8 rings and a medium medikit). Go down the right tunnel and follow the path to an elevator. Continue until you come to an intersection. Left leads to the "sight" of someone's office (not Gandhars). Go right and save. Get as much health as you can and go down to Gandhar's office. Do any last preparations and walk up the stairs for another fight. -------------------------------------------- BOSS: Demon Guardian HP: 125 This is probably the easiest so far (probably because if you die it's over). This demon had average speed and strength but hopeless defense. Once you get him on the ground you've practically won already. Just be sure to back out if you get on the ground and focus on keeping him on the ground and not giving him big damage. -------------------------------------------- As a reward you'll get a dodge potion (15+), a life potion and some rings. Now you can snoop around in the office! Go into his draw to find a Junpar pass and flick a switch. Btw can someone tell me how to get into the cupboard (I got in there once but I forgot how I did). Now go into the new room and use the computer thing on the left. This is where you use Anissa's paper. 4 from 2 3 from 5 Last from 1 Second last from last Use the combinations given and you'll end up in a secret room. Stock up on health and walk forwards to meet some weird dude before the final battle (not in the game though). Run around the corner (don't change your gun) and shoot all the zombie dude there. Now run until you get to a huge room with lava in the middle. Run around the right side until you see a scene. Now you have to fight Gandhar. -------------------------------------------- BOSS: Gandhar HP: 200 I bet this in less than a minute but it may take first timers a few shots. Immediately run left to the passage where you came from and hide in there for a while. Strafe out when he's visible and shoot him just below the head. When you see a meteor like thing, run and hide. When he goes down do the same. If he's shooting green crap at you, don't worry; you can out shoot him easily. Just focus your attack below his head as he swings around and it should be fine. -------------------------------------------- Once it's over you'll flee and find a dodge potion, rings and 600 seteks. Jump into the water and swim a short distance to meet up with a very familiar face (she was the person in the hood if you didn't notice). Now you may spend some money and do whatever and then you must go to Junpar. Take a slider and give the pass to the mecaguard. If you are still Kay'l you'll be shot and turn into a hospital chick. If that happens then return (you can reincarnate again if you like) and try again. Now you are up to Junpar! ============================================ Junpar ============================================ As you enter you can do the concert (see side quests) but when you want to continue with the story line take a slider to the temple. From there go in and take the left arch and then another arch to find the door on your right. You will see a blue man here that you can reincarnate with. Don't do it, as he is the strongest fighter you can get. It'd be best to save him for the end of the game. Instead, go speak to the black guy to the right of all the people praying. Follow him in and you'll be in a room. Take the candles and climb on the block. In the wall is a red engraving. Place Telis Talisman on it to move one of the statues. Go in and follow until you meet up with Jenna. I you said she was innocent (as I told you) she'll give you a regeneration potion. Now you must follow her. NOTE: There is an item in this room called insulation spray. Don't pick it up, as you'll need it later on in the game. It'll save you a bunch of time. Speak to the guy in the room and then leave. Go left and through the door on the left. Take the door on the right and talk to Jenna in her room. Now you can wander around and talk to people. I recommend you fight Yob at least once. When you're ready go see Soks (the robot) down the bottom. After the conversation, go see Krill on the exact opposite room from Jenna. Talk to him and get the detonator out of the cupboard before going back to Soks. Get the KR100s and use it on the detonator to create WCM explosives. Now leave the temple the same way you got it and take a slider to the bridge. As you walk over it you'll see a short scene. Run over to the ssave point in the right corne and save if you wish. Now there are 2 ways you can do this: you can climb over the wall on the left (action button on the bars in the wall) or you can reincarnate into the guard up ahead and then do it. I suggest reincarnating as you get a megazooka with 10 ammo and you can get yourself at an advantage point. Which ever you do, climb the wall to begin. Here is a basic map of the area: ________ WW / AP\ ## - = Walls WW| ********|## # = Road WW| |## < = Wall to climb WW|*** * |## B = Bridge WW| * |## W = Water WW|******* S |<# S = Start WW|F *P |## F = Finish WW| * *****|## A = Advantage point WW \____*___/ ## * = Small walls BB############## P = Panel BB############## Now this is a very broad map so bear with me here. If you reincarnated this is what you have to do: From the start point, proceed to the advantage point. Whip out your megazooka and aim at the two people. Shoot them and wait for a few others to come. Activate the switch quickly and proceed with caution. If you just climbed the wall then immediately shoot the people above and then the ones in front. Make your way to the advantage point killing people on the ground first and then the ones on walls. Activate the switch and continue. Now for both of them make your way to the second panel/switch killing off anyone you see. Once you've activated it you must then get the finish. The mecaguards will be in front of you so don't shoot them or they'll be after you. Run around (shoot the mecadog) and through the gate. Now jump in the water and swim towards the bridge. Dive in the water to find a rod in the ground. Use the activation button to move it. Now swim to the far side of the river and climb on the platform. Use the switch to go up the top and use the switch right next to the ridge there. Jump back in the river and swim to where you came out of the gate. Use the switch so you go over to the platforms. Jump over them until you reach the pillar holding up the bridge. Use the WCM explosives on it and run. You have five second to jump back in the water. If you don't make it you'll reincarnate into another person. Whichever way it goes you've completed your first mission and you will return the Awakened base (the temple). Go talk to Namtar (same place as before) and you'll be assigned with a new mission (if you ask). Now leave the temple and take a slider to the bookstore. Unfortunately this is hard to navigate so it would be best to use the map here. Anyways, once your in, go down the stairs and go to the man to your right. Give him a look at Telis Talisman and he'll give you the pirate box. Pick it up and return to a road. Call a slider and go to the rooftops. Make sure you have a few medikit just incase. Follow the road with no cars and through the large door. Follow the road through another door and take a right into a small tunnel. You'll find a guy there depressed. Reincarnate into him if you like and go back outside. Continue along for a while until you can go down an alley to the right. You'll see a mecaguard so go down to alley to the right. Climb the wall with the "bars" on it to get to the roof. You have one of two options here. You can jump right over the alley into the courtyard (which is sure to deplete a bunch of HP) or you can push the second box down onto the mecaguard. If you do the second then go back down to the mecaguard (down the bars again) and through the door to the courtyard. Take the elevator up (btw there are rings here) and up to see a familiar face. Now get ready for another shoot out. For this shoot out, you can refer to this map made by DarkSecret. Just use this link: This is my short summary, but the map is a thousand times better: As you start faced in the direction given, you are to head that way but about 45 degrees right. You must be on the ground and then climb a wall to get to the antenna. Before you do, be sure to take the medikit there. After you've done that you have to escape. Make note that mecaguards are enemies now but aren't too easy to kill but can deal major damage. From your starting direction you need to head that was until you reach a tall fence. After that go right and climb onto a building. Jump to a lower one and drop a little lower. Jump to another building and you should escape. You will now automatically be taken to the temple/awakened base. After the conversation go see Namtar and grab the keys to the hideout. Now go see Dakobah and he'll give you the key to his library. Go down the bottom to see Soks out of his shop. Talk to him and then go get the insulation spray from the desk. Give it to sock and he'll give you a big... thing. Now go to the room above Dakobah's and use his key on it. Grab everything you can here (keep them in your inventory) and then use the big... thing Soks gave you one the elevator. Go up to the second floor and grab everything there. Now for the hard part: you have to open Xendar's temple. All you have is all the books you've been given, a locked door and an unlocked door that's broken. Now the doors can be found down the path where Jenna and Krill's rooms are (Krill has the locked one). As you may have noticed, to unlock it you must use symbols. The symbols represent numbers. There are three symbols on each thing. You have to add to of them together to get the third. The third one you get to pick. Now if you read through one of the books you'll find the answers or the numbers between 1 and 10. But you need more. Luckily for you I have the answers. I will try and draw the symbols as best as possible: 1 __O /| 2 ____ | | | 3 _ _| \| 4 _|\_ 5 | /|\ |---- \_/ 6 __ |__| 7 |___| | 8 ___ \/ 9 __ O__ 10 _/__ \ * 14 _|__O | | If that didn't help, use these images (sent in by DarkSecret): So the answers for the combination are as followed: 4 + 6 = 10 5 + 9 = 14 6 + 1 = 7 7 + 2 = 9 It's quite simple if you think about it. Once youu get the combination correct Dakobah will yap on a bit and then you can go inside: HINT: You may want to go get rid of all the books in your inventory as you are going to need more room in a few minutes. Now go into the temple and pick everything up in this room. Now jump and swim in the water (it's a long way). When you get to the other side, get out and run down the path to Xendar. Take everything where he is (DO NOT MISS THE JEWEL). Now put the beshem on the pedestal (in this room) that is in the middle on the end of the line. On the other two in the middle put the candles on. On the outside you need to put drops of shadow, the powder, ampher dew and the leaf. Stand on the other end of the line to become a sorcerer! Now pick up the Beshem (you can't pick anything else up) and return to Dakobah in his room. He'll say that you can now make drugs! Ahem... I mean, spells. Now you must go to the hideout so leave and take a slider there (use the map if you don't know the way). Unlock it with the key and go in to find Jenna. After the conversation pick everything up (make sure you get the bombs) and read the add for the wiki garden to receive a new destination. Now you can go to the armory at the entrance to Anekbah. It's just to the right if you drop down. Now inside if you talk to him he just won't answer. There are 2 ways to get him to talk. You can reincarnate into the lady in the shop and talk (quite funny actually) or you can make a truth spell. If you do the truth spell you get a free megazooka with 10 ammo. For information of the truth spell go to the spell section. When you're done you then have to go to a book store (and easy one to remember is the one you got the pirate box from for the last mission). Once you've got that you can go to the sewer. Go to the bridge you blew up and jump in the water. Swim down left to the end. Dive down on the left side to find a rod. Move it to open a door. Swim in it and you'll be in there. Swim to the end and get out of the water. You'll get a call from Jenna then proceed and fall down the hole. You'll receive a save point (make sure you save) and then go left. NOTE: If you talk to the guys in the shower, you'll fight them. If you loose you'll fall down the hole again And lose 1 ring. If you win, nothing happens. Go into the next room full of cupboards and you'll get into a fight. -------------------------------------------- BOSS: Tetra Guard HP: 100 This guy hasn't got much HP but is strong as hell. He has OK strength and good speed but great defense. Your aim here is to get him on the ground as much as possible. When you do, he won't last long as he has good speed. My main move was a low kick to get him on the ground. I had 200 HP so he didn't last long. Just make sure you counter attack when he evades your moves. -------------------------------------------- To the right of the room is a cupboard with 2 blue things (looks like alien or insect heads to me). Open it to find a pass and a large medikit. All the others are shut so go through the next door (use the pass) and make sure you've got 200 health and some good guns because this is probably the hardest part of the game. For the first part you have to go to the right and up the ramp to a door. Then you must go down the other ramp, across the rails to the computer. Use that then return to the other room. Go along the top to the other side and a door. Go through to see a scene on what you have to do. Unfortunately I don't have much to help when it comes to directions and enemy locations but if someone could help I would be grateful. This is what I have: You have 15 minutes to set 8 bombs over the facility. You'll know where to because it shows it in the scene. So follow the road and put bombs in the rooms. When you can control as normal in a big room this is what you have to do: go into each of the 3 rooms on the outside of the elevator and use the computers on the left side of the rooms. Once you've done all three, the Z-techs will be let loose. Run to the elevator (don't bother shooting) and use the control in the corner. After the nice scene go speak to Jenna just outside and you'll be taken to Pamoka. You'll see yet another conversation(s) and then you can do as you want in your cell. Pick up the dish and use it on the guard. Say whatever you like and he'll collapse. Use the action button on him a few times and grab his key. Use it on the right side of the cell bars to open it. Go see Jenna if you like. Now go down the hall through the door and into the second room. Use the action button on the Multiplan and then go use your reincarnation spell on the guard looking out the window (bit of a dumbass cause he doesn't see you there unless you walk into the far door). Go back to Jenna then follow her to the elevator. Go down and speak with the guard and you'll teturn to the awakened base. Jenna will tell you there's a traitor so now you have to figure out who it is (why can't she?). Start by going to Soks. He'll open the door to Krills room. Grab the screw driver in the cupboard and use it to open Meshkans door. You'll see him dead. Grab his journal, the key and the acid (there is also rings here). Read the journal for little information and then head to Dakobahs library. Open the locked box with Meshkans key to find some ampher dew. Now go to where Yob is. You'll see Namtar with him, also his clothes are at the far end of the room. Go look at his clothes to find the key to his room. Go and open his room. Take the horn of the sham and mix it in your beshem with the ampher dew to create a spell that reveals demons. Go over to his left most cupboard and look to the left of it. You'll see scraping marks on the ground. Go to the other side and push it to reveal a strong hatch. Use the acid to weaken it. Inside you'll find a demoniac cube. Walk away and you'll get a message on it. So now that you know it was Namtar you can go and rip him up. Go and see him where Yob is and use the spell you just made on him. He'll say a few words and then there'll be a fight. -------------------------------------------- BOSS: Demon Namtar HP: 100 This is the best looking demon so far but definitely not the hardest. He has average attack and speed but laughable defense. Knock him on the ground and finish him of. Since he has bad stats and low HP he should be dead in a matter of seconds. Remember to back off when you get thrown on the ground. This demon is ten times easier than the first for sure. If you die you'll get as much goes as you can bare but you will loose a ring every time. -------------------------------------------- After the fight, Dakobah and Jenna will thank you and you'll get a spell book. Keep this, as you'll need it later. You'll be able to see Boz now so leave and take the slider there. Go through the door and you'll find two doors on either side of you. They have rings, and octagun, ammo and a junpar concert flyer. After a little walk around, take the stairs down and through the door. Walk to the blue computer to finally see Boz. After the conversation, grab the Lahoreh pass and leave the building. Do whatever you like here and when you're ready you can take a slider to the entrace to Lahoreh. ============================================ Lahoreh ============================================ Your first destinaton is the main library. Since there are no roads therefore no sliders you'll have to go by foot. You can find the location by using your map. Once your in you'll hear from Jenna saying there's a new base. Before you go, you're going to need to find the information Boz said. Go down into the center and press the action button on any of the shelves. Buy the books called "Table of Cosmic Correspondence" and "Treatise on Anti-Gravity". After reading them, go around the outside of the inner library until you find a scientist, then show off your reincarnation spell to him. NOTE: If you haven't got enough mana, you can visit the new sorcery shop in Lahoreh. You'll find it near the entrance up a crooked ramp. There is a difference in products there too. Once you are the scientist, make sure you have at least 2 empty slots in your inventory. If you do then go speak to the guard in front of the elevator. He'll let you pass so go it to the top. Go up to the book shelves here and get the second book on Hamastag'n. Examine then go talk to the other scientist in the room (not the guard). He'll ask for some equation or something or other so look at the book you got from the lower floor and then talk to him again. Say the top answer and you'll get his tradutech. Over the other side of the room is a white switch. Use it so the room goes dark and then get into your inventory. Use the tradutech on the hamastag'n book to get the words translated. When the guard turns the lights back on you can read the translation and then leave through the elevator. You can leave the entire building now if you like and go to the new awakened base. If you look on you're map it should be no trouble. Jump into the cliff and go through the big door. There'll be a short scene. I suggest you go in the building in front of you because there are heaps of good potions and items there plus a nice scene. When you want to continue on you can go to the other building (on the right side) where you'll meet up with Jenna and Dakobah. Dakobah will tell you about the jewels and Jahangir park. So now you have heard the last conversation with Jenna and Dakobah until the end of the game. Now you can go see the concert (see side quest section) or you can continue with the storyline. Go to the Yrami square (look on the map) and you'll see that it looks like this: __ * * /\ O * * The circle is in fact the well but it is locked. You must find the correct combination for the four pillars around is (*). The only clues you have is the arrow (/\) and the map (__). If you look at the map closely, it's an old map of Lahoreh. It shows the location of all the answers to the pillars. If you go out to all those locations you'll find that there is a symbol in one of the corners and an arrow in the middle. The arrow indicates the arrow next to the circle (facing the map) and the symbol tells the co-ordinate. The spot where the symbol is indicates the location of the answer. So if you want to cut to the answer, here it is: Symbol #1 _ / / / (circle with a line through it) \// / Symbol #2 __ /__\__/ (wavy lines or 2 ~'s) / \__/ Symbol #3 _/_/_ (2 diagonal lines with a line through it / / or a bent H with the horizontal line out) Use the above numbers on this map: __ 3 1 /\ O 2 2 If this doesn't work, try this image sent in by DarkSecret: Now stand on the circle and you should go down. Walk into the eerie cave to see 5 pedestals. Now you have to do another puzzle (this one's easier). You can do this one by yourself but just to save time, I'll put it down. Btw, you must do the middle pillar before you start the first/second rounds. |_| 1 2 A 3 4 #1 = 3 2 4 1 #2 = 2 1 4 3 1 3 The numbers indicate the pedestals. The box (|_|) shows where the big metallic thing is (just for a sense of direction). The numbers below show what order you should do it to proceed to the next on (or finish). Now the big metal thing should go down so get on it and you'll see yet more symbols on the ground. By the way, this big metal thing is sometimes a real problem to get on as I and many others have experienced. I don't know what to say other than don't give up. I always try jumping on an angle but that doesn't always work. Anyways, heres some ASCII drawings of the symbols on the metal thing: _ \___/ \ \ * / x3 \ \ x3 \____ / \ | _/ \ |__ || _ | | | \___/ | | | / | | | x2 \____ x2 \ \ \ _/ \ _|_|_|_ / / / | | | | | | | | | | | | | | | | | | x1 \ \ \ x2 _|_|_|_ _|_|_|_ | | | Once again, DarkSecret's helping me out with this picture: Now this shows what you have to do in the next room. Move forwards to have a look. As you may have noticed, you need three jewels. You should've gotten one from X'endars temple and the second is in one of the boxes in front of you. But in every box is a jewel that looks and is called the same as the original but one. So you have to get the right one. Unfortunately you have to use the directions by using the above symbols and the ones on each wall. I know what it all means but it never seems to work for me. I end up coming here with an empty sneak and filling it up with jewels. So I am going to determine which one's real. After a lot of hard work I have figured it out. From the first one in front of you, go to the very last one in that row. From there go right one and open it. That's the real one for sure. Now you can leave this dreary cave. After you have exited the well, go to the sorcery in Lahoreh (it's on a crooked boardwalk). But a drop of shadow there and then return to Junpar. Get in a slider and go to the Jahangir park. If you haven't noticed, it's the same place as the rooftop shootout. So run there but go past it and up a slope. You should see a banner with the name there and an arrow. Go through the door and you have made it. ============================================ Jahangir Park & Hamastag'n ============================================ As you enter, take a right and jump across the water. Press the action button to climb and pick up the jinpan feather. Mix it with your beshem and the drop of shadows to make a resurrection spell. Drop down to the left and go up the passage to the very end to find the third and final jewel. Return to the entrance and go into the building straight ahead. You must push in the right symbols to spell the word "Kiwan". The symbols are as followed: K = / ## / ## /_/##__/ ## \ / _____/ I = | ## |_ ## / \___ / \___/ W = _ _/ \ ## / ## ##/ ## ## ## ## ## / / /____ / A = #####__ ##### \ ___ | /___\/ _____/ N = / / / / / ##_____/_/ ## / ##_____/ ## You could also use this image send in by DarkSecret: Once you push them in it should go down. Run along the hall to see Kiwan. Use the resurrection spell and you will have a little chat with him. Once it's done, get out of the building and jump in the river to the left of the entrance. Dive down into a small alcove and move the rod. Quickly swim out (won't have much breath) and you can go to Hamastag'n finally. Get out of the water and go further left into a cave. Follow the path until you're outside again and take a left into a deeper cave. Run along until you see a statue of a head. Put the jewels on and it should open (if the green jewel didn't work, go back and get another one). NOTE: You are about to enter the final dungeon. After this there is no turning back to make a second save slot as you may want to do extra side quests after you beat the game. Walk along the track here and save at the save point to the left. Walk along the bridge to see a nice scene. This can be a frustrating part on your first time so I'll try and make it clear for you. As described in that little scene you should have watched, you must go along the pulleys, bridges and catwalks to get to the cetral stalactite (big thing hanging from the cave's ceiling). To use the lifts, there are green buttons on both sides that point up and down. Use the action button on once and you will go in that direction. Also, there are many rings inside the stalactites but I won't mention them because at this point in the game, you should have well over 20 (more like 50+). Anyways, across the first catwalk (don't worry about those dudes playing hide and seek with you). Run pass the lift (you will see a rope going down) and cross the bridge. Go through the stalactite and go over the next two bridges. This should now be an empty stalactite. Down the far end is a hidden green button. Press this and then do DOWN on the lift on this stalactite. Cross the catwalk and then the next bridge. Go up the lift here and run around the outside of the stalactite to cross another bridge. Go inside the stalactite here and press the next green button. Run all the way to the other side of this stalactite and make a JUMP to the lift here. Go DOWN and pass over the catwalk. Run around this stalactite to see a catwalk that is folded up. If you walk up to this, it will fall into place. You will have to make a BIG JUMP over to the next catwalk which there is a good chance that you stuff up... good luck. When/if you get over, jump into the lift up ahead on the right. Go UP and you'll see a different type of pulley. This is the end. Step onto it and press the action button to go ahead. If you fall off, it is not the end of the world. Under one of the stalactites is a cutout. Stand next to that and a lift will come down to pick you up. The main problem with this is it can really screw you up as you forget where you were up to and have to find your way back... but it isn't too bad at all. After this, all of the catwalks will lower making it dead easy to get around, even though you don't need to anymore. Go have a chat to Soyinka (the woman in this stalactite). NOTE: The guards guarding the queen are impossible to reincarnate, no matter how cool you think they look. For the love of god, you may regret not saving at this point. Just outside of the stalactite with Soyinka in it is the save point. If you miss this and stuff up later, you will find yourself having to redo at least 20mins of work just to get back. Do this ESPECIALLY if you have the PC version as many people have witnessed a glitch ahead that crashes the program and hence, sending you back to the last save point. Anyways, after this, you can take the biggest jump in the game, off the stalactite and into the water (if you really have the energy, you can climb up further). Swim to the land at the end and talk to the Azkeel there, Fodo. Go try a go at the sham but you won't ba able to. Go talk to Fodo again then walk to the green liquid nearby. Press the action button to check it out and then go back to ask Fodo about it. Go back and press the action button and Fodo will give you a little more help than you first expected. Get on the sham and go down the only track. Push the rock out of the way with the action button and run down to a a new fight. -------------------------------------------- BOSS: SHHHY`NLSS HP: 125 This guy has great speed and all right strength but lacks in defense. Once you get him on the ground it's easy from there. You can make quick jabs easily but when he gets you, you may have trouble getting out. -------------------------------------------- If you win you'll rind some rings. If you don't you'll become Shhhy`nlss. Continue and you'll finally enter Hamastag'n. All you have to do is get to the book in the middle. Then you must find the three ghost dudes. The first 2 are simple. From the book, go down the ramp and to your right and open the door there. The second on is down the next ramp but to your left. The last one requires a little more effort. From the book, slide down the wall to the right and climb the extensive ramp along the back wall. You'll come to a dead end with a large medikit there. If you look off the edge, you'll see a roof there. Jump down on it and then in front of it to find the last door. Just so you know, opening an incorrect door lets out a bad ass spirit that fires crap at you. Just run away from them. When you're done, run back to the book and your three ghost guys will be there. Stand in the middle of them and you'll be transported to a snowy place (remember this in the vision before Pamoka?). Use the save point near you (pretty hard to see) and go on. Walk until you see a flying ship and go down (there's 3 magic rings if you want to run all the way around the outside but they're not really useful at this point in the game). On one of the closer blocks you can stand on it and you'll be taken to the stairs. Jump over and walk in. You'll have a conversation with Kushulai´n. When he dies, touch him and grab the Barkayal. Go behin the seat and through the teleporter (same as the one in Gandhars lair). Follow the road and MAKE SURE YOU SAVE HERE as it's the last one in the game (cause you might want to do this part again). Walk in for the final confrontation. -------------------------------------------- BOSS: Astroth HP: 200 NOTE: In this fight, only the Power Rod does damage, as Astroth is a demon and waver weapons do nothing to demons. Also, this is the final battle so don't worry if you have zero rings. You'll notice that he's kinda chained down so run behind the building like structure and you must shoot down the blue things. They are in easy spots and can be reached at a safe zone except one. You must run out (strafe) and shoot the one right of Astroth. When they are all gone, he'll be free. Now to deplete his HP you must shoot hit back. There are two ways to do this: When you run up to him, he'll jump. Run through him and turn around (keep running as he'll jump back). Shoot and he'll turn around slowly. This strategy is probably better for the end of the fight as when he lands, somehow this injures you but only lightly. The other way (not as reliable though) is to shoot him in the head to stun him and then run around. You should continue to shoot him in the head as you run around him to keep him stunned longer. This technique is almost useless towards the end of the fight as he recovers too quickly. When his HP goes below halfway (100) he'll act as if he's on steroids. Do the same but pick up the pace. You'll find that you'll be moving the mouse more than any other time in the game and you may wear a hole in the table. Once you deplete his HP enough it'll be over. -------------------------------------------- Watch the worth while ending and then the credits for another song. Congratulations on beating the game. Check out the side quests below or the characters too. If you want to play again but still want a challenge, you can still change the difficulty settings or try doing it without any medikits. Give the hardest difficulty a run if you think you're a man (I think it's impossible with or without medikits, but feel free to prove me wrong).You may even want to do a speed run of the game and see how long it takes to beat it. If you want some more information about the, David Cage, the director and designer of the game, created a blog as the game was in development. There's some interesting stuff in there, such as Kay'l's name originally being Uzal. Anyways, here's the link: You can also have fun with this little tool that I believe was made by the staff at Quantic Dream. What it does is allows you to switch the characters around when you're playing--even to characters that aren't controlable and that aren't even in the game. It's pretty neat playing as the demons, Boz, Soks, Snow Tigers... but the best is going for a lunchtime jog in downtown Anekbah as the 30ft Astroth. Anyways, here's the link to download: ############################################ %$$%# Complete Storyline ############################################ NOTE: HUGE SPOILERS!! Obviously... This is the complete outline of the storyline and contains mass spoilers. If you haven't beaten the game then I suggest you pass this. The main reason for this section is if your game is broken and you're dying to know what happens or your too lazy to beat the game or you want to revise what happens or you didn't quite take in something fully. Anyways, I won't be including everything people say word-for-word--just the outline. I have set it out in different parts so people can find what they want quicker. Anyways, here it is: NOTE: I am not 100% sure all of this is correct as I have brought the clues together and studied it as though it were a subject at school. I have gotten everything from books, what characters say, people in the street and common sense. If you find something I've missed or something that is incorrect, by all means, inform me. Just make sure you're right otherwise I might end up making it worse. ============================================ Gameplay Storyline ============================================ Anekbah: A man in a mechanical armour-like outfits stands in front of you and begs you to listen to him. He says that there's little time and you should go to his apartment. He also says you're going into another world, etc. He jumps through the portal behind him and finds himself in an alleyway. Soon he's spotted by a creature not much larger than him, is attacked by it and then some mysterious green fog is transfered from you to the creature. There's a noise and then the creature flees, leaving you on the ground. A big robot comes in and and tells you how to get revived and you then get up. You exit the alleyway and head to your apartment (start credits are shown here). The apartment is empty though you look around and have a short vision of a woman being there. You feel as though you should know her. A few moments later, this woman has a gun at your head, pauses, drops the gun and runs up to you to give you a big hug. She is your wife (apparently) and she says you and your police partner were investigating on some serious things with your good friend, Den. She knows nothing about your investigations but suggests that you go down to the Police HQ. You grab a police badge and the gun and then head for the HQ. You meet some people there such as Tarek, Boog and Cpt Lea. You snoop around your office and find a bunch of inverstigations were left unsolved and one in particular: a muder case was shut down. You also learn that you partner and friend on this case, Den, was murdered. You are then called up to Cpt Lea and she threatens to take your badge and then gives you another case. You have to investigate a woman named Jenna whom is accused of being a member of an anti-government clan called the 'Awakened'. You go to interview her and then Telis, your wife, calls you and asks if you want to have lunch. You go to her and explain that Den is dead. Soon you get a call from the HQ saying there's a shootout in a supermarket. Telis gives you a blue telisman with an eye on it before you leave. You go to the supermarket and fight with a few dosen people. You begin to think that something's going on as you were told there were only a few people and a backup was coming--but it was all false. You return to Lea and say weather she is guilty or not (your choice) and then she asks for a cup of Koil. You then want to learn more about this closed murder case which is located up in the archives which is heavily guarded. There are two ways to do this but eventually you get in there. You read the files and learn that A stripper named Anissa found Den's body and Den is now at the morgue. You take of to the morgue and talk to the coroner there. He says that the bodies are always mutilated badly and no trace is ever left behind. The victims are always killed from a heart attack when the superhuman killer tortures them. You then go into the room and meet Den. His not feeling too greate but his sneak is next to him in a bad condition. You have another vision but doesn't reveal much. In another room you manage to find a corpse sample on one of the bodies. After that, you head to Fu'an, a sneak fixer. After a little bribe, you manage to get Den's police card and then you're off to Anissa the stripper. You talk to her and she says that you and Den had already talked to her. You ask some questions anyways and she says she found a piece of paper with some random humbo-jumbo on it. She goes into her room leaving you outside and then there's a scream. You shoot the lock on the door and walk in to find Anissa dead on the floor. You turn around to examine the room and see someone quickly run out the door. It appears that the person was wearing the police uniform. After a little bit of snooping around, you find Anissa's paper she was bragging about and you return to the HQ. You go into Den's office and find his apartment key. As you walk out, you meet up with Tarek, the friendly cop. He asks you how you're going on the investigation and you say you're going to Den's apartment. He wishes you luck and you head off. Just as you're leaving the HQ, Telis, your wife, gives you a distress call where she is smothered in blood and crying for you to come home. As expected, you return home and find Telis on the rooftop. She says a demon came looking for you saying it wanted your soul. You get closer and Telis turns into a scary looking demon. After you fight it, a woman hiding her face in a black cloak emerges and tells you that you're not actually playing a game; it's a trap the demons made to lure you. Apparently your soul is more tastey and Astroth (the demon leader) likes to eat tastey souls. She also says you're in danger and you need to get into Gandhar's lair (he's the big boss of the police) and destroy a gate which is luring more game players into Omikron. To do this you need to use Anissa's paper and a weapon called th power rod. Anyways, you go to the supermarket where the "Power Rod" shop is. To get in you need to shoe the person the telisman you got from Telis earlier. The shopkeeper says you need a piece of corpse of what you want to kill (a demon). Luckily, you got that corpse sample from the morgue and it scores you a free power rod. Next stop is Den's apartment in which you explore and find a mysterious object called the reincarnation spell which enables you to take the body of another at your call. You find a map Den's must've drawn up and a transcan tape. You go to watch it and you see your good buddy Den on the big screen. He says that you and Den were investigating on these murderers and you found out that demons were the cluprits. He also says that he left Tarek in charge if anything went wrong. Conveniently, Tarek is standing right behind as he confesses that he was the one that killed Den and Anissa. He then babbles on about saying that Omikron isn't a game and you there's a fight. With everything required, you travel on to the police HQ and go through the ventilation to Gandhar's lair. When you arrive, you are ambushed by the same demon that attacked you at the start of the game. After you kick its ass, you use Anissa's paper to decipher a code on a hidden door in the room. You walk in and find yourself in a cave. You find a mysterious old man whom calls you the nomad soul and then dissapears. You whip out your power rod and go through the plethora of zombies in the cave before meeting up with Gandhar as he is in his demon form. He says he's glad to see you and turns into a giant insect after he falls into the pit of lava surrounding the 'Gate'. After you kill him, the cave starts collapsing and you sprint to a safe area. You swim through a hole a pond in the city where you meet up with Jenna, the woman you interviewed at the HQ and also the mysterious woman you met after Telis attacked you as a demon. She congradulates you and says you must go and see a man named Yob in the next district, Junpar. At this point in the game, you don't have Kay'l's body as the demons would have found him. Junpar: You go to the temple where you see Yob. He escorts you to the hidden awakened base which Jenna explains to you that this is where the local awakened members hide. She then takes you to a man name Dakobah. He tells you things about what will happen if the demons get you and about Astroth. He then says you should join Phalanx 5 (ie do missions for the awakened). You then go to see Namtar (the man that gives out the missions). He explains that there's a bridge that the government crosses with weapons and such, you have to blow it up. You then go to see Krill, the weapons and explosives expert. He's a bit of a tough nut but tells you what explosives to get. You then go see Soks, the awakened robot and he gives you the explosives. After this, you go to the bridge and after the shootout, you blow it up. You then return to Namtar for the next mission. Namtar explains how Reshev uses a big satalite to transmit his lies to the population and he wants you to put an awakened tape there instead. So now, after you get the tape, you head for the satalite where you clearly ambushed by guards. You insert the tape and see Boz for the first time. He says to all the people of Omikron that Reshev is ignoring the existance of the demons and they should join the awakened to fight for freedom. After the escape, you find yourself at the awakened base again with Jenna in front of you. She seems a little concerned by the amount of guards that appeared after you installed the tape but is pleased that the mission was a success. You report back to Namtar and he tells you that you can rest as there are no new missions for you to do. You realise that returning to Kay'l's apartment would be too risky so he allows you to rest at a hideout for the awakened members. He gives you the key and then you go and see Dakobah. He gives you the privilige of learning about the ancient arts of the beshem (sorcerey) and then gives you the key to his library. You go and scope it out and after a bit of deciphering, you unlock the doors to Xendar's tomb. Dakobah congradulates you and then you head off into the tomb. You come across a jewel, the corpse of Xendar and the beshem. You do a little ritual and you can then make some spells at your own accord. You return to Dakobah and, once again, he congradulates you. Next, you head to the hideout Namtar gave you the key to. You find Jenna there and she says Namtar gave you another mission. Tetra is making a robot better than any others so far and the awakened want is destroyed. She gives you some bombs and then tells you that a man name Qazef knows about the entrance. You go and see Qazef and he says you have to go via the sewer. You go there and eventually end up in a shower room. After planting all of the bombs in the factory, you let the new robots, the z-techs, loose before fleeing via an elevator as the factory is destroyed. You meet Jenna just outside and she says you must move immediately or the guards will come. At that moment, you are ambushed by a bunch of guards and taken to a high security prison called Pamoka. There you meet Cliff Mashroud, the chief of secret police. He demands you to reveal the location of the awakened base but you deny it all. As he leaves, you hear a faint sound from the cell wall. You get closer and find that it is Jenna in another cell. She says you must find a way of escaping before they are condemned to the thought erasers. You then see a guard enter your cell room and use a dish to shoot the laser back at him. You then remove the keys from him and escape the cell. You find a multiplan and gain all your sneak items back and then reincarnate to a guard. You then get Jenna out of her cell and act as if you're taking her out of the prison via a slider. When you return to the awakened base, Jenna explains that she's certain that Cliff Mashrouds ambush was planned and that someone in the awakened base called him. Now she wants you to find out who the traitor is. You explore the base and find Meshkan dead (Meshkan is the sort of like a prophet). You search some more and find a demonic cube that recieves messages from demons inside Namtar's room. You get a message saying that the nomad soul and the priest must be killed. After this, you use your beshem to create a spell that enables you to turn Namtar to his demon form and then you fight him. Afterwards, Jenna and Dakobah thank you greatly for uncovering this and then give you a book that teaches the spell of resurecting the dead. Dakobah also mentions that Boz (leader of the awakened) would like to see you at his old house. You go and visit him and he tells you about Astroth and his intentions. He says that you have a chance of saving your soul if can destroy Astroth. He says that there might be an answer to his at the big library in Lahoreh (another district) and then gives you the pass. After this, he dissapears and then you head to Lahoreh. Lahoreh: You head straight to the Omikron Library and get a call frin Jenna. She explains that there's a new Awakened base in Lahoreh (as the demons knew about the last one) and you should go there next. You inform her about what Boz said and then you continue into the library. You find the entrance to the restricted books are guarded to you reincarnate into a scientist in the library. You get into the restricted sections and find the book. You translate it using a tradutech and then go to the new awakened base in Lahoreh. Dakobah and Jenna are there and they tell you what the translated book talks about. They say that the book of nout is in Hamastag'n; the city of the dead. That book will tell you how to defeat Astroth. They say that the road there can be found via a place called Jahangir Park but require three Vyugrimuka jewels (one which you got from Xendar's tomb). They say one can be found in Yrmali Well which is situated at Lahoreh. They also say that Kiwan, that dead magus, may guide you to the city of the dead. So next off, you head to Yrmali well and find a secret cave with a few puzzles. After this you find another Vyugrimuka jewel. After this you head to Jahangir Park which is found near Junpar. Final Steps: At Jahangir Park, the last of the Vyugrimuka jewels is found and then you make a spell using your beshem to raise the dead. You use this on the corpse of Kiwan. He is a bit annoyed that you woke him but shows you the directions to the cave that leads to Hamastag'n. He also says that no one has ever returned from there and predicts your going to get the book of nout simply for the wealth and fame. After that, you go to the scary-looking door and insert the three Vyugrimuka jewels and it opens. You take a small trip down the track until you arrive at an underground, abandoned city. The place seems abandoned but the pulley systems appear to be in tact. You wander around and get the feeling you're being watched. You journey for a while until you end up in a room guarded by two large Azkeels and a woman. The woman says she is Soyinka, daughter of Matanboukous and ruler of Meyerem. She says that the book of nout and the city of the dead is nearby and that you'll have to go there via a sham. You drop down the bottom and meet Fodo, a sham breeder. He says you can ride a sham if you want. You fail to mount one but instead drink some poisonous liquid named Zklibon (used to keep Krubors away) and you die. Then you soon reincarnate into Fodo. You mount the sham and head to Hamastag'n. You fight your way to the top where you finally find the book of nout. It tells you all about Astroth and that he fought for 7 days and nights and also that if you fail, it'll be another 999 000 years before the planet will be safe again. He says the only way to kill Astroth is by using a sword the Magicians of Phaenon put their souls into. He says the man Kushulainn has it and the only way to get to him is to free the spirits from their tombs here in HAmastag'n. You free them and then they teleport you outside the dome on Omikron. You find yourself in a snowy area surrounded by cliffs. You walk along and see a levitating ship in front of you (you saw this in a vision earlier). You go inside it and see Kushulainn. He speaks to about nothing new except that he is purely ashamed that he didn't vanquish Astroth earlier and believes you can do it. Kushulainn kills himself and you reincarnate into him, taking his sword, the Barkaya'l, and venture through the portal behind you. You find yourself in a place that looks like hell. You follow the track and soon meet up with Astroth, the 20 foot high demon of the last circle, the prince of evil. Astroth disagrees that you will be able to kill him and then confesses that he's be controlling Ix for years and killed the leaders of Tetra. After he blabbers on some more, you attempt to woop his ass... a couple of tries later and he falls to the ground and you plumet the Barkaya'l into his back, destroying him forever! You walk around a little and find the main computer, Ix. It is crumbling into ashes along with the rest of the building. You then go to Reshev's mansion which recieved a beating by the Azkeels. You also find Reshev dead here. You walk on and meet up with Jenna, Boz, Dakobah, Soyinka and the man that appears every now and then to give you those 3 magical levitating rings. The man says his name is Matanboukous, father of Soyinka, author of the book of nout and the only living magician of Phaenon. He says the planet is safe from Astroth and should be alright from here on. Soyinka says the Azkeels fought through to Reshev and thanks you. Boz also thanks you and says the Omikron is now free. Dakobah says that Omikron is safe like the others do and you have to go. Jenna thanks you and say that you and here could have lived together and loved eachother but fate has chosen otherwise. Dakobah has one final word, saying you'll go down in history forever and then he and Matanboukous send your soul back to your universe. The credits roll after it says "The End" and then you quit the game and don't play it again for another few months. ============================================ Politics, Religeon and Such ============================================ Legatee Angus Reshev: This guy is the goverment of Omikron and also a sick individual. He knows quite well about the existance of the demons but hides the truth as he knows he will loose his fortune over it. He spends his time watching fights at the Shament Tournament, starting fights or relaxing at his luxurious hidden mansion. If you don't know what this gut looks like, he's the guy with the block in his head that you can find on the walls in the police HQ or the rotating statue across the road from the police HQ. Vyagrimukha: Vyagrimukha is basically the god of art but everyone worships him/her as though s/he is the creator of their planet. If you ever see a floating blue piece of nothing, that's supposedly supposed to reflect of Vyugrimukha and a crucifix would to Jesus. Ix: During the cobalt wars, there was a fight over who should be leader and who shouldn't so they ended up killing the old leader and setting up a computer to be incharge. They named this computer Ix and it took a nice 62 cycles to create as they installed cameras and microphones and other machinery to gather statistics and make desisions like what the weather should be, who marries who, who can have babies, what food people get, what technology is introduced, etc, etc... Basically, Omikron is run by a computer on no one can even negotiate what it does except for the Legatee, Reshev. *SPOILERS* The actual person controling Ix is Astroth as Ix was built right above that lava lake Astroth was put it. Tetra: Sort of like the Omikron malitia. They have many secrets and create new technologies such as the mechaguards, mechadogs and the z-tech. *SPOILERS* The person that runs Tetra is Astroth as the old leaders went up to him thinking he could run it better but instead he slit their throats and took it for himself. Boz: Boz was once an ordinary man but one day returned to his apartment to find a dead woman there. At that moment, a demon attacked him by cutting his throat and he died. Before the demon could take his soul, he slid into a multiplan computer and now lives on the network. Since a demon killed him, he's out for revenge so he created an alliance named the Awakened to fight these demons. Legatee Reshev was so annoyed that he survived and he was going to talk about the demons in public that he tries to cover it up and made it a law to be a member of the awakened otherwise you would get the chop. NOTE: David Bowie does the voice for this guy! The Dreamers: The Dreamers are a band that are against the goverment and all so, clearly Legatee Reshev wants their thoughts erased. Until then, they play secret performances in hiddens bars, warehouses and factories in different districts (they must have some links to people with passes). NOTE: David Bowie plays the singer here! The Book of Nout: Authored by Matanboukous, the book of night (called "The Book of Nout" in Masa'u Dialect) is a book written several thousands of years ago and contains the secrets of the ancient art in which all the magicians of Phaenon (except for Matanboukous) gave their lived to create. It now lies in Hamastag'n, the city of the dead, as it was placed there by Matanboukous' daughter, Soyinka, at his request. Omikronian Laws: There are many strict rules in Omikron that Ix and the Legatee controls. Here are some of them: - No one can traverse across districts unless they have a rare pass - No one can defy Ix's orders or the Legatee's - The districts cannot be over populated - No one can be members of the Awakened - Babies must be taken at the age of one year and will be placed into work at the age of sixteen The Awakened: A resistance group against Ix and the Legatee; they fight for freedom. They're main reason for existance is to rid of all the demons and Astroth in particular. They sabotage things via their own little malitia group called Phalanx 5. They have different bases scattered all around Omikron and have the symbol of an eye. They're leader is a man named Boz whom died and now lives on the multiplan network. ============================================ Locations ============================================ Phaenon (pronounced FEY-NON): This is the planet in which the game takes place. It's situated in a universe parallel to ours and, for the past two thousand years, had ben engulfed by an ice age. In order to protect civilisation from the cold, a dome was built over a portion of the planet and was named Omikron. Phaenon is is the second planet of the Primat Systemis and has been in its ice since 3753;when the sun, Rad'an, was destroyed. Omikron: Omikron is the name of the dome that hides the inhabitants of Phaenon from the ice age and freedom. It is currently governed by Legatee Angis Reshev whom strictly runs the life of all Omikronians. See the Politics for more details. The dome took was completed in 3849, 96 years after the ice age began. It was installed with its own machinery to flow oxygen and warmth around to make it possible to live, though now that they were exiled to a confined area, the population had to keep low. Omikron is split into 4 zone/districts: Anekbah, Junpar, Lahoreh and Nagataneh, and no one is allowed to move from whichever one they were born in unless they have a pass. Anekbah: Anekbah is sort of like the residential district of Omikron as it contains very little to offer other than apartments and minimal shops. It contains a bank in the center, a morgue, hospital and Omikron's main police HQ. It also has an illegal sorcery shop found in an alleyway that only awakened personel may enter. The sliders found her are blue and hover abover a green road. The building structures are impressivly high and almost always contain apartents. From Anekbah, you can directly access Qalisar and Junpar. Qalisar: (pronounced KA-LIZ-AR) Qalisar is more like a large entertainment center as it has no apartments but shops and stripclubs. There are also resturaunts and galaries along with your general convenience stores. Deep inside Qalisar is a run-down temple that serves little purpose nowadays. The sliders and overall look is almost identical to Anekbah though it is slightly smaller. From Qalisar, you can directly access Anekbah only. NOTE: Qalisar is not a district, rather it is just another part to Anekbah. Junpar: Considering Junpar has only two road, people generally get lost in this district most. The majority of this district is made up of structures that contain apartments, shops, etc. The main features here are the Tetra base, the large river, the bank, the temple (contains a secret awakened base) and the wiki garden. There is also an abandoned area known as the rooftops where Reshev transmits his data to Omikronian's sneaks. The looks of Junpar are different to all the other districts; the sliders are a yellowy-goldy color and the roads, streets and structures look like something from a middle-eastern country. Boz also once lived here when he was human. From Junpar, you can directly access Anekbah, Lahoreh and Jahangir park. Pamoka: This is the secret prison Reshev keeps quiet about. The prison is run by Cliff Mashroud, chief of Omikron's secret police. At the prison, people are tortured, executed or put to thought erasure for crimes that generally have to do with opposing the government (the awakened). The prison isn't too big but has very high security. It's location is unknown but suspected to be near Junpar. Lahoreh: Unlike other districts, Lahoreh contain no sliders as its roads are replaced by water. The only form of transportation is via foot (or air). It's main attractions are the Omikron library, the bank, the illegal sorcery shop, Yrmali Square, Khonsu's old factory and the Awakened's base (that is hidden from the public). From Lahoreh, you can directly access Junpar and Nagataneh. Nagataneh: Little is know about this place as it is not accessable in the game. I have reason to believe that this was going to be an additional district in the game but was dropped due to the lack of time. From Nagataneh, you can access Lahoreh (and probably the rest of Omikron). Hamastag'n: The location of this is fully a secret to everyone (including Reshev) and holds the book of nout--the book which grants any wish. It is also known as the city of the dead and no one has ever returned from there. The only ones that know of its whereabouts are old books and Kiwan--whom is dead. Jahangir Park: Situated on the outskirts of Junpar, this park is tomb to Kiwan, the magus. Nothing is too special about this place at first sight thought the entrance to Hamastag'n is here. Jahingir Park looks a lot like a jungle and contains many Jinpans (large birds). It also has a lot of water and a little bit of exploration can be done here. ============================================ Demons ============================================ A long time ago, Astroth, the ruler of demons, attempted to dominate all of Omikron but failed during an intense fight with a hero name Kushalai'n that lasted for 7 days and 7 nights. After this time, Kushalainn used his sword, the Barkaya'l to vanquish Astroth but was too tired from the fight. Instead the other magicians put him in a cage called the Apylande and put him in the bottom of a lake of lava; the one place he couldn't be destroyed forever. At the present day, Astroth has returned and is aiming for both revenge and domination of Omikron. Astroth is weak and can be revived via a sufficient amount of souls which his demon minions bring for him. Though all of the souls found in Omikron are weak and unsatisfying for Astroth, he now searches for something more tastey. His master plan is to capture the soul of someone far away--in another universe, thus he creates the Omikron video game and then sucks their souls. You are one of these unfortunate players. Unfortunately for Astroth, he has one weakness: the prophetic "Nomad Soul" which comes once every 999 999 years. What Astroth doesn't know is that you're the nomad soul and if you fail, Omikron has to survive another 999 999 years before another chance arises. Astroth's minions captures souls by simply sucking it out of them on the spot leaving their physical form there. During this act, the victims are normally savagely attacked causing maximum injury without killing them. The cause of death is always a heart attack as the body can't take it any longer. After the soul is stolen, it is placed in a cage and then put into the bottom of a lake of lava where it will suffer thousands of simultaneous tortures for all eternity. After time, Astroth will regain he power and then reign in Omikron again where he shall thwart the human existance. ============================================ Transportation ============================================ Transportation around the world of Omikron is limited but does the job. Sliders: Sliders a similar to cars though they are generally driven automatically and levitate inches above the road's surface. The vehicles cannot be driven of the road onto the sidewalks due to restrictions. The main love for sliders are the compatability with multiple passangers and also the speed it is taken to get to destinations. Sliders are also used to race but isn't seen within the game. Their colors can be blue, gold or normal gray (these are the only colors seen in the game). Bikes: Similar to earth's motorbike, it doesn't consume the passanger in a room and holds only one person. It can be riden manually and used by police often. The game doesn't allow you to use bikes though they still exist. Airships: These ships are not accessible in the game but you occasionally see them swiftly glide in the air above the district's buildings. They are large, come in a range of shapes and sizes and also have flashing lights on them. It's suspected that these transport mass quantities of Omikronians between districts. Foot: Simply enough, walking gains you access to all legal areas though at a slow speed. You can run but after a while you get restless and tired. You are also able to jump across small gaps without any trouble. Shams: Shams are a large animal found either underground or outside the dome of Omikron. Their speed is twice as fast as running but very bumpy. They are also capable of moving large objects. The only cons are that the sham must like you before you can mount it and they are not possible to have inside the dome (they are also illegal). Their main use if to travel across the snow (outside the dome) or for their tastey meat! ============================================ Other Things ============================================ Cobalt War: This was a war long ago that lasted a very long time. Astroth was apart of this along with Kushulainn, the magicians of Phaenon and many more. Koopie: Little lizards that some humans keep as pets and can also be eaten in a sandwich. Yum, yum. Sham: Big animals with horns that are used to traverse through the snow. They smell really bad and are very timid. They can also be eaten as steak. Transcan: The equivilant to a TV. It can also play transcan videos. Sneak: Every human has one. It contains a short list of 18 items they have in stock and also information on themselves, maps, sliders, memory, etc. Multiplan: These are little computers you see on walls everywhere. Every human has an account on this network in which they can deposit or withdraw their items from. Seteks: The currency of Omikron. ############################################ $%!@# Side Quests ############################################ This section will give you information on things that aren't related to the storyline. These are set in order of appearance. ============================================ Bar shootout - Anekbah ============================================ To do this you must be Kay'l and have looked at the files on the computer at Kay'ls desk in the HQ. Before you do you will want to have 200 health which can be gained by food/drinks and medikits. These can be found anywhere but you can purchase them from drug stores and bars. Next you're going to need a decent gun (this is optional though). You can get a decagun from Jenna's apartment (read walkthrough). With this you can go to the gun shop to the left of her apartment and buy some ammo for it and an activated radar. You can also train here in the shop for a small fee of 10 seteks. Now save if you like and go to the bar by slider. As you enter, be sure to pick up the concert flyer. Now talk to the guy at the counter and get ready. The second the shoot out begins you'll notice there are about 6 men onto you. Aim at the guy at the counter and walk backwards at the same time. Idol yourself there and shoot off the remaining people in this room. Go to the other end and take a right to the toilets. As you walk in, another guy'll come out of the far toilet. Shoot him and it's over. As a reward you'll receive 1000 seteks, rings and ammo. ============================================ Sha'rment Tournament - Qalisar ============================================ This is a great way to get lots and lost of money and also fight some weird looking Omikronians. Firstly, you will need to find a Sha'rment Tournament flyer, which is next to impossible not to find. Actually, I think there is one in the bar described in the side quest above. When you've read this, go to Qalisar and walk pass Harvey's Bar and the strip club to a small shop with a dark door on the end. Talk to the person here and one option should be a "Mega-Lamp Battery". This cost zero seteks so you should be able to afford it. When you finish talking to him, he'll let you behind the counter to where a hidden elevator is. Go in here and you'll end up at the fighting tournament. After a little scene, you'll face your first opponent. There are about five people to fight that get harder and uglier as the move along. Also, you cannot heal or take breaks in between, so don't stuff around at the start. If you loose the first fight, the crowed will give you seteks that could be anywhere between 0 to 50. After the first fight, you will get about 100, and it'll increase from there. If you win the whole lot, you'll end up with easily over 5000 seteks and the ability to freely run around on the tournament floor (not much fun). These fights are pretty easy (even on the hardest difficulty) so long as you're prepared. Have full health (you can easily buy it from the shop you're in) and even attack and dodge potions may help you here. Either way, no money to buy that megazooker and ammo will be a thing of the past. I think Quantic Dream made this a tad too easy in my opinion. ============================================ HQ master key - Anekbah, Qalisar ============================================ You can do this any time when you're Kay'l. Just go talk to Boog on the -1 floor. Then go to Qalisar. Throughout the place there are art museum-like places. You can buy erotic pictures there. Take it back to Boog and he'll give you the security HQ master key. What to do with it? With it, you can open any door on the -1 and -2 floors that are currently locked. Some of them are full of deadly weapons, others have nothing or worse: suspicious looking droot salad (eeewwwww). Now luckily for you, I have gone to all the effort and gone in every room and made a list of everything in there (note that these are just the people that are locked): Paxir 212 Decagun ammo Large medikit 5 rings Maar 516 Chocovat bar Shamet 337 Small medikit Decagun ammo 20 Seteks Vode`m 457 Octogun Large medikit Octagun ammo 200 seteks Grezz 398 Double waver ammo Pureed Carmen It's obvious that Vode`m 457 has the best stuff so you'll find him in the orange door on the -1 floor. But don't worry if you've missed this as it is not important. You're not missing out on too much. It's about 2000+ seteks worth but you could miss out and sleep easy. ============================================ Temple secret - Anekbah, Qalisar ============================================ This is a hard one to figure out by yourself and can only be started when you're Kay'l. At your apartment (in the kitchen) you'll find some food for koopies in one of the cabinets in the middle. Take it to the lounge room and use it on the hatch on the glass at the back of the room. Your koopie will the fetch you a small key. Take it and go to your office in HQ. There is a box in the cupboard that can be unlocked with the small key. Inside is a message giving directions. This will help you figure it out. Now go to the Qalisar temple (use the map) and stand on the star. Walk 10 steps north and 10 to your right (I think) and press the action button. You should hit a switch which opens a hatch at the far end of the room. Inside you'll find some potions, rings and the reincarnation spell. NOTE: There's a little thing about the reincarnation spell. You can get it at two spots: the Qalisar temple and Den's apartment. If you don't get it at the Qalisar temple it appears at Den's apartment and then disappears from the temple. Weird, eh? ============================================ Dreamers Concerts - Qalisar, Junpar, Lahoreh ============================================ This would have to be one of my favourite parts of the game. In the game there are three concerts of a forbidden band called the dreamers (songs by David Bowie and Reeves Gabrels). To see them you must find flyers. At you can only see them if you are up to the city they're in. The first is in Qalisar. You can find these flyers easily but for an easy location, you can find one at the bar you go to for a side quest (it is the first one). Once you have read it you can go to Harvey's bar and watch them do the song "naked eye". The second is in Junpar. To get a flyer go to the entrance from Anekbah and face the city. To you right is a bar. Inside on the round counter is the flyer. After examining it go to the place but you still need directions. Just go left and right, right. Follow here until you can see a satellite dish on a balcony to your left. Walk over to it and look to your right to see the elevator door to the concert (btw there's a ring just as you enter). To play this, stand in front of the circle they're in. They will do the song "we all go through". The last one is (what I think) the best song but the hardest to find a flyer for. This is one that barely anyone finds so listen carefully. After you have gotten the translation from the big library in Lahoreh, go to the new awakened base in the cliffs. Here on one of the desks to the right is the flyer. Examine it and you'll be able to go. The location is at an old Knonsu factory or something. You can find it on the map. To play this, just stand on the edge near the water in the crowd. They will do the song "pretty things are going to hell". ============================================ Wiki Garden's Treasure ============================================ You might've scored an advertisement for this and might have even gone there to find a few wikis. Fortunately, there's more to it then that. Firstly, you need access to Jaunpar and a life potion. A life potion can be bought from the Anekbah sorcery (see FAQ). Now, you might recall going to dinner with Telis at a restaurant in Anekbah? It was when you go her talisman. If you don't remember, I'll tell you how to get there. Starting at the police HQ, use this map to get there: HQ <@@@@@@@@@@@@@@@@@@@@@@@@@ SM = Super market SM@ @@@ GS = Gun shop @GS @ Morgue HQ = Police Head quarters @ X Restaurant @ @@@@@@@@@@@@@@@@@@@@@@@@ Now that you're there, to the left of the restaurant (marked as X on the above map) is a beggar. If you talk to him, he'll ask for a life potion. Give one to him and you should get a "Beggar's Pass". Now head for the wiki garden. I can't give any decent directions there but it's close to the temple (in Jaunpar). If you get an advertisement, it will be marked on your map. From the entrance, go to the far end and take a left into a little dead end place. If you look closely, around the edges here will be a little "cut off" which means there's something behind it. So use your Begger's Pass on it to find a hidden room. Go all the way down to the bottom to encounter a plethora of items. Fortunately, I've got a list of them here: 3 Magic Rings "Ars Magica Volume VII" "Ars Magica Volume XX" "Ars Magica Volume XXXVI" "Ars Magica Volume DXIII" Attack Potion +15 Dodge Potion +30 Mana Potion +30 Mana Potion +50 Sham Skin Spell Spell of Invulnerability ############################################ %$#@! Character List ############################################ Below is a list of (what I think) every character in the game. Mind you this took my a hell of a long time so I hope you like it. I have just listed the main attributes as that is what all I believe are the necessary things about the people. If you want more detain on characters then go look at Kalis (I think) faq at GameFaqs.com. ****************************************** Name: Betsy - Anekbah Age: 22 Sex: Female Job: journalist Signs: Reporter for Omikron news Interests: Loves truth, nature and sport. Hates lies, hypocrisy and media manipulation of the truth. Location: From the HQ, go across the road and to the left and you'll find her in the drug store there Attack: 30 Body resistance: 20 Speed: 60 Dodge: 20 Fight experience: Novice Items: Betsy´s apartment key ****************************************** Name: Dakme´t - Qalisar Age: 32 Sex: Male Job: Racing driver Signs: Fabulous rich descendant of the prices of Gao´r. Has won Omikron slider racing championship for the last three cycles. Interests: Loves speed, luxury and beautiful women. How to find: you will find him in the toilet of some bar in Qalisar. Location: You'll find him at the dark end of Harvey's bar. Attack: 50 Body resistance: 20 Speed: 80 Dodge: 40 Fight experience: Novice Item: Darkmet´s speed drug ****************************************** Name: Enya´d - Junpar Age: 38 Sex: Male Job: Scientist Signs: Research worker in intramolecular physics. Is said to be on the point discovering a theory that reconciles infra-gravitational and polymeric forces. Interests: Likes to spend his days conducting experiments in his laboratory. Hates inactivity ignorance, anything that distracts him from his work. Location: Centre of the big library. Attack: 20 Body resistance: 20 Speed: 50 Dodge: 50 Fight experience: Novice Item: Enyad´s apartment key ****************************************** Name: Fodo - Hamastag'n Age: 287 Sex: Male Job: Azkeel farmer Signs: Sham breeder near the azkeel city Mayere´m. Interests: Likes shams and goddess-queen Soyinka. Hates violence and danger. Location: Find him down the bottom. Drink the green "vile" stuff. Attack: 110 Body resistance: 30 Speed: 50 Dodge: 110 Fight experience: Taar disciple Item: Mana potion +50 ****************************************** Name: Ganji - Junpar Age: 25 Sex: Male Job: Professional wrestler Signs: Omikron champion of atzaride wrestling. In uziam his mother tongue, his name means "powerful sham". Interests: Loves discipline, rigor self-control. Hates anger, weakness anything that distracts him from the way. Location: He is in a restaurant found near the book store you visit. Attack: 70 Body resistance: 50 Speed: 50 Dodge: 50 Fight experience: Taar disciple Item: Ganji´s apartment key ****************************************** Name: Hunabk´u - Jahangir Age: 31 Sex: Male Job: Taar master Signs: Reared by the taar monks. Raised to the level of grand master by Zoyuk´en. Has a perfect mastery of taar combat techniques, is a indifferent to pain has absolute control of himself. Interests: Likes meditation, peace, the quest for eternal harmony. Hates technology, gratuitous violence. Location: Jahangir park should be your destination (up the top). Attack: 80 Body resistance: 30 Speed: 60 Dodge: 80 Fight experience: Master of the inner voice Item: Hunabk´u´s potion ****************************************** Name: Iman - Junpar Age: 38 Sex: Female Job: Escort girl Signs: Rents her protection service to highly placed Omikron dignitaries. Her mastery of combat techniques and her great beauty make her company much sought-after, despite of her rates- which are highly exorbitant. Interests: Fashion, Taar combat techniques, modern painting. Location: The armoury, right of the Anekbah entrance. Attack: 70 Body resistance: 40 Speed: 60 Dodge: 60 Fight experience: Taar disciple Item: Iman´s apartment key ****************************************** Name: Itzam´a - Junpar Age: 37 Sex: Male Job: Taar monk Signs: The ultimate warrior, with an absolute knowledge of taar combat. Body modified by oxy-organic circuits. Oxylic bone structure Metal mask grafted on face. His disconnected nerves make him intensive to pain. Interests: Likes prayer, fasting, silence, combat training, blood. Hates weakness and pity. Location: The blue and white dude in the temple (this guy rocks). Attack: 80 Body resistance: 70 Speed: 60 Dodge: 70 Fight experience: Master of the inner voice Item: Itzama´s apartment key ****************************************** Name: Jayli´n - Junpar Age: 30 Sex: Male Job: Ix elite soldier Signs: A specialist on weapons, and adept of dose range combat, willing to give his life for ix. Titanium bone structure. Interests: Training, Training, Training Location: Find him next to the wall you climb over during the bridge mission. Attack: 70 Body resistance: 40 Speed: 70 Dodge: 60 Fight experience: Initiate Item: Megazooka - 10 ****************************************** Name: Jorg - Anekbah Age: 33 Sex: Male Job: Mercenary Signs: 2nd generation cyborg, model Khonsu 309K2. Enhanced eye and right cerebral hemisphere Interests: Poetry and Murder Location: At the armoury near the Jaunpar gate. Attack: 70 Body resistance: 70 Speed: 60 Dodge: 70 Fight experience: Initiate Item: Jorg´s apartment key ****************************************** name: Kai´A - Anekbah Age: 17 Sex: Female Job: Student of literature Signs: A serious and applied student. Goes window-shopping as soon as her lectures are over. Fast, agile and sporting, but rather vulnerable. Interests: Likes pre-cobalt literature, short skirts, intense sports and the boys in her school. Hates mathematics and parties she doesn't attend. Location: Book store (only one in this district). Attack: 30 Body resistance: 10 Speed: 60 Dodge: 40 Fight experience: Novice Items: Kai´a´s book of poetry ****************************************** Name: Kay´L - Anekbah age: 30 Sex: Male Job: investigating agent Signs: Received basic Omikron police training in using waver weapons and combat techniques. Interests: Void Attack: 10 Speed: 10 Dodge: 10 Fight experience: Novice Item: None ****************************************** Name: Kuma´r - Junpar Age: 28 Sex: Male Job: Adventurer Signs: Has sworn to avenge the death of his brother, killed by Tetra militia. Nearly lost his life when a mecadog devoured his right leg. No known place of residence. Interests: Likes justice, honour and vengeance. Hates the Tetra, militia, fear. Location: Don't run in the first awakened mission (with bomb) or in alley near rooftops. Attack: 50 Body resistance: 40 Speed: 60 Dodge: 70 Fight experience: Novice Item: None ****************************************** Name: Kushulai´n - his house :) Age: 7216 Sex: Male Job: Hero Signs: Hero of the Cobalt wars, prince of partamor. Interests: Void Location: Last person in the game (just before Astroth). Attack: 180 Body resistance: 99 Speed: 110 Dodge: 130 Fight experience: Master of the inner voice Special items: Barkayal... if you pick it up. ****************************************** Name: Lahyli´n - Qalisar Age: 23 Sex: Female Job: Call girl Signs: Works in the red light district of Qalisar. Her knowledge of Moav message makes her the most sought-after call girl in Qalisar. Interests: Likes money, luxury, all things that are expensive and useless. Hates modern art, alcohol, won´t take no for an answer. Location: The strip show shop (meow!). Attack: 50 Body resistance: 20 Speed: 50 Dodge: 60 Fight experience: Novice Items: Lahyl´in´s pendant ****************************************** Name: Neme´t - Junpar Age: 24 Sex: Male Job: Techball champion Signs: Half centre on the Anekbah islanders techball team. His talent and extraordinary vision of the game have made him famous. Regularly makes the headlines of the scandal press because of his private indiscretions. Interests: Likes training with islanders, the applause of the crowds when he scores, adrenaline, exorbitantly prised sliders. Hates his adversaries, discretion, modesty, poverty, woman who don't like him. Location: Fat guy at Jahangir park. Attack: 60 Body resistance: 30 Speed: 90 Dodge: 70 Fight experience: Novice Item: Nemet´s "Force" drug ****************************************** Name: Nioma´y - Junpar Age: 27 Sex: Female Job: Actress Signs: Has taken part in several small transcan productions. The critics predict a great career. Interests: Likes acting, lying, manipulating those around her, being told she has talent. Hates truth, simple situations, routine. Location: The book store in Lahoreh. Attack: 30 Body resistance: 20 Speed: 60 Dodge: 30 Fight experience: Novice Item: Niomay's Apartment key. ****************************************** Name: Nuyasan - Qalisar Age: 36 Sex: Male Job: Kamiji Trader Signs: One of the last representatives of the kamiji race ,a people living in the frozen wastes before the exodus. Lives by trading rare spices, for which Kamijis have the secret. Interests: No known hobby. Location: Help me please! Attack: 70 Body resistance: 70 Speed: 50 Dodge: 60 Fight Experience: Novice Item: Nuyasa´n´s spices ****************************************** Name: Plume - Junpar Age: 26 Sex: Female Job: Fashion designer Signs: Likes wearing eccentric clothes and jewels. Studied taar combat techniques with her father when still a child. Interests: Likes modern art, high-society cocktails, koopies and vegetarian cooking. Location: Bank (near new base) and if defeated by Tarek. Attack: 60 Body resistance: 20 Speed: 60 Dodge: 60 Fight experience: Taar disciple Item: Plume´s apartment key ****************************************** Name: Quazma´ad - Junpar Age: 35 Sex: Male Job: Demoted mercenary Signs: Ex-mercenary in the Khonsu special troops, was stripped of his rank and expelled from the militia when he failed in his last mission. Has since plunged in to a deep depression. Interests: Likes jirn alcohol Location: Alley before the rooftops. Attack: 60 Body resistance: 40 Speed: 50 Dodge: 60 Fight experience: Initiate Item: Qazmaa´d´s can of liquor. ****************************************** Name: Samyaz´a - Anekbah Age: 28 Sex: Female Job: Spiritualist Signs: Bears the mark of the third eye on forehead, the sign of those chosen by the supreme being to communicate with the dead, has visions of the past and future. Interests: Likes the unknown, solitude, silence, seeing through appearances. Hates the material world, charlatans, intolerance. Location: Restaurant (not the one you have launch with Telis in). Attack: 40 Body resistance: 20 Speed: 60 Dodge: 30 Fight experience: Novice Item: Samyaz´a´s apartment key ****************************************** Name: SHHHY`NLSS - Hamastag'n Age: 2 Sex: Female (!) Job: Krubor Signs: Interests: Fighting Fodo. Location: Lose the fight when you're Fodo. Attack:150 Body resistance: 80 Speed: 70 Dodge: 130 Fight experience: Taar disciple Item: None ****************************************** Name: Syao - Anekbah/Jaunpar Age: 24 Sex: Female Job: Thief Signs: Tattoo on left eye. Agile, fast, Discreet. Wanted by all the police in Omikron. Interests: Likes Biocats, Detective novels Location: Loose the fight against Telis or in an apartment block in Jaunpar or in a Junpar apartment. Attack: 90 Body resistance: 20 Speed: 80 Dodge: 70 Fight Experience: Initiate Item: Gem ****************************************** Name: Tahim´a - Junpar Age: 23 Sex: Female Job: Programmer Signs: Works for the Khonsu trust programming the artificial intelligence of meca units to maintain order. Perfect knowledge of Meta-oxyliac languages. Interests: Likes gaddj music, tech trances, electronic gadgets, soap operas on transcan. Hates bugs, being disturbed when programming, resonable people. Location: Book store near temple. Attack: 40 Body resistance: 20 Speed: 50 Dodge: 50 Fight experience: Novice Item: None ****************************************** Name: Ysmala´n - Anekbah Age: 26 Sex: Female Job: Nurse Signs: Working for a doctorate in oxy-molecular biology. Has perfect knowledge of treatment techniques and bio implants. Intends to work in research as soon as she get her diploma. Interests: Likes studying mortal viruses, discovering new treatments, helping those who need treatment. Hates selfisnes, obscurantism, seeing people suffer. Location: Be in Kay'ls body when giving the pass to Jaunpar or the hospital (not drug store). Attack: 50 Body resistance: 20 Speed: 60 Dodge: 50 Fight experience: Novice Item: Ysmala´n´s medikit ****************************************** Name: Zao´r - Junpar Age: 31 Sex: Male Job: Pamoka prison officer Signs: Known by prisoners to be particularly violent and merciless. A decidedly unpleasant character. Exellent marksman. Interests: Likes Eagles techball team, Kloops beer, waver weapons, law and order. Hates books the degenerate morak of the young, anarchy. Location: In Pamoka (need him for storyline). Attack: 60 Body resistance: 50 Speed: 70 Dodge: 50 Fight experience: Initiate Item: Zaor´s apartment key. ****************************************** ############################################ %$#@# Enemy List ############################################ Take note that this doesn't include bosses. ============================================ Human Guard ============================================ These guys normally have guns that vary. The average one has a waver gun. If they have a waver gun and shoot from a distance it is easier to see than an octagun or decagun. When you fight the you always want to aim for their head when they're close and body from a visible distance. If you have a decagun you can deplete they're HP with barely two shots. Make sure you're not up close as they'll attack you by swinging whatever weapon they have for extra damage. ============================================ Zombies ============================================ These are found in Gandhars lair. They attack weak and only up close. Just shoot for a second or two and they're sure to be on the ground squealing. Beware that they walk in packs. Just be sure you're at a distance when you shoot them. ============================================ Mecadog ============================================ These are possibly the easiest of the lot. They do not shoot but just run up to you and attack. You'll want to be at a distance from them and shoot continuously at they're entire body until they drop. They have more HP than a human guard but less attack power and can only attack from up close. As they run up to you, shoot them and walk backwards but make sure you don't fall off anything high. ============================================ Mecaguard ============================================ These guys are big and can be tough. If they start shooting, run. They can deal massive damage with a single shot. Once you start firing ant them though they'll start to wobble around. Don't stop shooting until they drop. If there's a pack of enemies and a mecaguard, run around shooting the people and take care of the mecaguard last as they're bullets are easily dodged. ############################################ Spell List ############################################ You might've beaten the game with only the reincarnation spell but there are a few others out there. There are a variety of books which explain how to create these but most are impossible to do in the game as the ingreedients aren't possible to get. They are just there as the newspapers are there; for entertainment. ============================================ Reincarnation Spell ============================================ You can receive this spell by one of two ways. You can get it from the cupboard in Den's apartment or if you aren't up to that yet you can get it from the temple. To get it from the temple, go to Kay'ls apartment and grab the food for koopies out of the kitched (in the middle pillar). Now go to the lounge room an up to the glass at the back. Go to the hatch and use the food so the koopy goes and gets a small key. Now go to Kay'ls office in HQ and unlock the box in the cupboard with the key. Inside is a note. Now go to the Qalisar temple (look on the map) and stand on the star. Take 10 steps forwards and 10 right (like it says on the note) and press the action button to open a hatch. Inside you'll find the reincarnation spell (if you haven't gotten it already) and other stuff like potions and rings. Just so you know this is in the sidequest section, too. ============================================ Truth Spell ============================================ You can only use this once in the game. You use it in Junpar when you interrogate Quazef at the armory. To get this you have to mix a dead mans tongue with a wiki and a beshem. You should know where to get the beshem from. You can get the dead man's tongue from Dakobah's library in the awakened base (Junpar). To get a wiki, go to the hideout (Namtar gives you the key) and you'll find an advertisement for it. Then you can go there by slider. There are four or five wikis there to choose from (varies) so take your pick, mix them together and you have yourself a truth spell! ============================================ Unmask a Demon Spell ============================================ For this you have to mix ampher dew in a beshem with a sham horn. You can get the ampher dew from the box in Dakobahs library (unlock with Meshkans key). You can get the sham horn from Namtars room (on the sham head). To use it just walk up to the person you believe is a demon and use it. Before you know it they'll turn into the demon form. You can make and use this only once. You do it at you last visit at the awakened base in Junpar. ============================================ Resurrection Spell ============================================ You must mix a jinpan feather in a beshem with a drop of shadows. You can purchase the drop of shadows from the sorcery in Lahoreh for a small fee. The jinpan feather is located at Jahangir park. Take a right at the entrance and jump over the water. Press the action button to climb and you'll find one there. This spell is used to resurrect Kiwan to open the gate to Hamastag'n. You can use this only once and can only be made once. This is an essential to beating the game. ############################################ Make Money ############################################ An easy way to make money (after examining a flyer) is to go to the Sharmant tournament. You'll find it in one of the end shops in Qalisar. Talk to the guy at the counter and ask for a battery (cost nothing) He'll let you pass. Go down the elevator and walk into the stadium. You'll see a small scene before you fight someone. If you win, the crowd will shower you with money and you'll be taken to the next round. With the money you win you can take it to a sorcery shop and buy stat building potions. Use them and return and you'll find the tournament ten times easier. There are about five fights that get harder and harder. The money you receive is random but in the end you'll rack up over 5000 seteks. Another way is to visit banks. There is one in every district (except Qalisar) and you can sell the majority of your items for seteks. Some items you'll get as little as 5 seteks but if you sell weapons you can get hundreds, even thousands of seteks. There buildings look the same and shouldn't be too hard to find. Here are their locations: Anekbah: Central Junpar: Left of the entrance to Anekbah Lahoreh: Near the new awakened base. ############################################ @#$$% Item List ############################################ This list is complete to my knowledge ($10 says I'm wrong). If I am missing anything, be sure to tell me. After all, I played through the game once just to do this section. Take note that this includes only the name of the item as listed in the game and how much it cost from shops or where to find it (if it is not available from stores). NOTE: When you sell items at the bank, you will receive half of the amount you paid for it. For example, if a book cost 10 seteks, you will get 5 seteks for selling it at the bank. Take note also that you can only sell certain items. ============================================ Essentials ============================================ Magic Rings Found Everywhere Seteks (money) Found Everywhere ============================================ Storyline Items ============================================ 8 Time Bombs Jaunpur Hideout (from Jenna) Anissa's Little Key Strip Club (around her neck) Anissa's Paper Strip Club (secret cupboard) Barkaya's Right Before Astroth Candle Jaunpur Temple Camir's Key Security HQ (Surveillance Room) Corpse Sample In morgue (use surgical instrument) Dakobah's Key Jaunpur Awakened Base (Dakobah) Demoniac Cube Jaunpur Awakened Base (Namtar) Den's Apartment Key Den's Office Den's Card Strip Club (Anissa's Cupboard) Den's Map Den's Apartment (safe) Den's Police Badge Use broken sneak at Fua'n's Den's Sneak. Out of Order Anekbah Morgue (next to Den) Den's Wedding Picture Den's Apartment (Bedroom) Detonator for KR100 Jaunpar Awakened Base (Krill) Drugged Cup of Koil Combine Sleeping drug & Cup of Koil Electronic Cable Security HQ (Ventilators) Electronic Unit Jaunpur Bookstore (from awakened man) Erotic Poster 5 Food for Koopies 20 or Kay'l's Kitchen Fuse Security HQ (Surveillance Room) Guard's Key Pamoka (Cell) Hydromatic Piston Jaunpur Awakened Base (Soks) Insulating Spray 35 Jaunpur Pass Gandhar's Office Jenna 712 Detention Dossier Cpt Lea for mission Jenna's Apartment Key Cpt Lea for mission Kay'l's Apartment Key In your sneak at the start Kay'l's Police Badge Kay'l's Apartment (bedroom) Kay'l's Small Key Kay'l's Apartment (from koopie) Key to Jaunpur Hideout Jaunpur Awakened Base (Namtar) KR100 Explosives Jaunpur Awakened Base (Soks) Lahoreh Pass From Box in Jaunpur Lea's Police Badge Cpt Lea's Office (when drugged) Meshka'n's Key Jaunpur Awakened Base (Meshka'n) Mission Order Jenna 712 Cpt Lea for mission Namtar's Key Jaunpar Awakened Base (Yob) OX2600 Receiver Kit Security HQ (Surveillance Room) Pass for Security HQ Den's Apartment (safe) Plan of Sewers in Zone 9 10 Sleeping Drug Drugstore when prescription given Sleeping Pill Prescription Kay'l's Apartment (bedroom) Small Key Jenna Jenna's Apartment (toilet) Small Key Den Den's Apartment (shower room) Surgical Instrument Anekbah Morgue Telis' Talisman Received from Telis at Restaurant Tetra Pass 1 Tetra Factory in Jaunpur Tradutech Lahoreh Library (Man on Top Floor) Vyagrimukha's Jewel Xenda's Temple WCM Explosives Combine KR100's with detonator ============================================ Medikits ============================================ Medikit Small 50 Medikit Medium 110 Medikit Large 200 Ysmala´n´s Medikit Reincarnate Ysmala'n ============================================ Food and Drinks ============================================ Chokovat Bar 20 Cup of Koil 20 Droot Salad 40 Kloops Beer 20 Koopy Sandwich 20/30 Niam Omelette 50 Pureed Cramen 50 Qazmaa'd's Can of Liquor Reincarnation Item Quanta Cola 15 Sham Steak 100 Wiki 25 Xiam Noddles 75 Yuki 20 ============================================ Books, Parchments, Notices ============================================ Ad for Wikis Garden Found in Jaunpur (hideout) Ancient Scroll Kay'l's Office (locked chest) Ars Magica Volume CCLXII 50 Ars Magica Volume DXIII 40 Ars Magica Volume XIX Jaunpur Awakened Base (Dakobah) Ars Magica Volume XLVII 50 Ars Magica Volume XLVIII 50 Ars Magica Volume XX 60 Ars Magica Volume XXXII Wiki Garden Ars Magica Vloume XXXVII Dakobah's Library Book The Cult of Vyagrimukha 25 Chronicle of the Cobalt Wars 5 Dav Idkaj - Poetry 7209/7211 45 Dav Idkaj - Poetry 7212/7215 5 Eye of Vyagrimukha Parchment Lahoreh Library Flyer the Sha'amet Tournament Found anywhere Gastronomy Under the Dome 5 Jafa'yl's Parchment Temple in Qalisar Jaunpur Secret Conert Flyer Find Anywhere (bar at entrance) Jenna's Note Jenna's Apartment (locked cupboard) Kai'a's Book of Poetry Reincarnation Item Lahoreh Secret Concert Flyer Lahoreh Awakened Base (on a table) Masa'u Runes 5 Mayt'a's Good Recipes 35 Mecaguards Specifications Security HQ (Maintenance room) Memo - Dossier Archives Security HQ (rest room) Memo - The Sect of Awakening Security HQ (Den's office) Meshka'n's Journal Jaunpur Awakened Base (Meshk'an) MK400 Notice In your sneak at the start Multiplan Notice In your Multiplan at the start Mystery of the Book of Nout 5 Omikron, Birth of the City 75 Omikronian Laws Volume - 87 5 Omikronian Laws Volume - 88 55 Omikronian Laws Volume - 89 5 Omikron Laws - Vol 91 55 Omikron News - 9 Andar 7216 7 Omikron News - 11 Nadim 7216 7 Omikron News - 19 Xenep 7216 7 Omikron News - 23 Xenep 72116 7 Omikron News - 34 Nivat 7216 7 Omikron News - 35 Nivat 7216 7 Omikron News - 37 Aqed 7216 7 Omikron News - 41 Andar 7216 7 Omikron Police Memo-Archives Security HQ (Maintenance Room) On Parallel Universes Lahoreh Library Propaganda Document Jenna's Apartment (secret room) Ring Note Kay'l's Apartment (lounge) Sacrifice to the Ancient Gods From Boz in Jaunpur Saye'm Dialects 5 Scrawled Message Security HQ (Den's Office) Taar Fight Techniques Vol 9 Lahoreh Library Taar Fight Techniquees Vol 14 40 Taar Fight Technuques Vol 22 Lahoreh Library Taar Fight Techniques Vol 27 50 Table of Cosmic Correspondence 5 Telis' Message Kay'l's Apartment (on floor) The Bases of Taar Combat 65 The Birth of Ix Volume 1 5 The Birth of Ix - Vol 2 45 The Birth of Ix - Vol 3 50 The Birth of Ix - Vol 4 35 The Book of Beshe'ms Dakobah's Library The Cobalt Wars Vol 1 50 The Cobalt Wars Volume 2 5 The Cobalt Wars Volume 41 5 The Dreamer's Concert Flyer Found anywhere The Entrance to Hamastaga'n 5 The Gospel of Fayenda 5 The Gospel of Yelait 5 The Green Book - Psalms 5 The Green Book - Psalms 2 5 The Legend of Boz 5 The Magic Signs of the Art Dakobah's Library The Necropolis of Hamastaga'n 5 The Political System 55 The Primat Systemis 70 The Quartat Systemis 5 The Secret Rites of the Art Dakobah's Library The Tertiat Systemis 70 Tradutech Translation Translate Hamastaga'n book Treatise on Anti-Gravity 5 Virtual Training Center Ad Found anywhere Zafat't Grammar 5 ============================================ Transcans ============================================ Dreamers 90 New Angels of Thing 90 Pretty Things 120 Seven 90 Something 110 Survive 100 Thursday's Child 110 Transcan Tape Den's Apartment (safe) We All Go Through 100 ============================================ Weapons and Ammo ============================================ Waver Gun Get from Kay'l's apartment Anekbah supermarket shootout Power Rod Get from sorcery in Anekbah Double-Waver 650 Octogun 800 Decagun 1500 Megazooka 3000 Hypra 5000 Double Waver Ammunition x50 100 Octogun Ammunition x30 200 Decagun Ammunition x20 250 Megazooka Ammunition x10 500 Hypra Ammunition x2 900 ============================================ Spells and Potions ============================================ Acid Jaunpur Awakened Base (Jaunpur) Attack Potion +15 120 Attack Potion +30 215 Attack Potion +50 290 Beshe'm Xenda's Temple Cepher Leaf Xenda's Temple Darkmet's Speed Drug Reincarnation Item Dead Man's Tongue Dakobah's Library Dew of Light Dakobah's Library (locked thing) Dodge Potion +15 120 Dodge Potion +30 215 Dodge Potion +50 290 Drops of Shadow 150 Fight Experience Potion 350 Hunabk'u's Potion Reincarnation Item Life Potion 110 Mana Potion +10 50 Mana Potion +30 125 Mana Potion +50 190 Nelmet's "Force" Drug Reincarnation Item Potion Virgin Blood 300 Powder from a Dead Man's Skull Xenda's Temple Regeneration Potion 500 Reincarnation Spell Qalisar Temple or Den's Apartment Resurrection Spell Mix Jinpan Feather & Drops of Shadow Sanctified Beshe'm Xenda's Temple (after ritual) Sham Skin Spell 700 Spell to Unmask Demon Combine Dew of Light with Sham Horn The Taar Stone 700 Truth Spell Combine Dead Man's Tongue and a Wiki Viscous Slime 150 ============================================ Miscellaneous ============================================ A Battery for a Meca-Lamp 0 Activated Radar 100 Beggar's Pass Beggar in Anekbah Betsy's Apartment Key Reincarnation Item Bowl Pamoka (Cell) Enyad's Apartment Key Reincarnation Item Ganji's Apartment Key Reincarnation Item Gem Reincarnation Item Iman's Apartment Key Reincarnation Item Itzama's Apartment Key Reincarnation Item Jorg's Apartment Key Reincarnation Item Lahyl'in's Pendant Reincarnation Item Niomay's Apartment Key Reincarnation Item Nuyasa'n's Spices Reincarnation Item Plume's Apartment Key Reincarnation Item Samyaz'a's Apartment Key Reincarnation Item Security HQ Master Key Receive from Boog Ticket Shooting Gallery 10 Zaor's Apartment Key Reincarnation Item ############################################ !@!#$ Glitches & Bugs ############################################ This game may seem flawless in every way but there are a few things that just aren't meant to happen, were missed or maybe put there for entertainment. I'm on a quest to find every last one and if you would like to help, please do. Anyways, let's begin: SPELLING/GRAMMA MISTAKES: There aren't too many of these but I have noticed a few of them in the books. The ones that I have found are "Taar Fight Techniquees". Obviously they upsized a word in there. Another one is the Omikron Laws books. One of them is called "Omikronian Laws Volume - 88" and another "Omikron Laws - Vol 91". I'm not sure about you but I'd think they'd keep the format the same. GAME FREEZE (PC): Apparently if you got to Hamastag'n and aim straight down at the dogs at the start, the game will freeze. I admit that this happened to me once but I got an e-mail from someone telling me that it happened to them too. I'm pretty sure this is a serious bug on the PC and people should go down the ramp to the right just to be safe. STUFF UPS: In Xendar's temple (Jaunpur awakened base) you will find the Vyagrimukha jewel there. If you don't get it before you get into Lahoreh, the Jaunpur awakened base will lock up with the jewel still inside. Just pray you have an extra save file before Lahoreh. You can also screw things up in Lahoreh when you reincarnate into the scientist at the library. If you reincarnate again before getting the translation of the book... your screwed. STUPID LITTLE THINGS: For example, when you have to blow up the bridge in Jaunpur. If you dive into the water where the bridge collapses, you still survive. It seems that in Omikron, you are only effected by explosion, firearms and hand to hand combat. It seems that getting crushed by a 10 ton bridge while you're underwater is as painful as a slight breeze tickling your nose. You may also notice that in numerous parts throughout the game, things happen that aren't meant to like the first Azkeel you see that is supposedly leaning on the wall is, in fact, a foot away from the wall and is leaning on absolutely nothing. One thing that has happened to me a few times is that people start spinning and doing backflips while they're talking to me. It's probably just because my CDs scratched but it's very rude to be doing a handstand and facing the other way while your talking to someone. But also, things can look awfully fake sometimes like when you use a switch and the hand clearly misses it, but that isn't a glitch, more something they didn't add. There are also the moments when people say words that aren't displayed on the screen. And then when Telis is talking to you crying and when she finishes talking, she returns her face to normal. ############################################ !@#$$ Frequently Asked Questions (FAQ) ############################################ Q - Where's the sorcery shop? A - There are 2 in the entire game, one in Anekbah and the other in Lahoreh. The one in Anekbah is the easiest to find. Go to the police HQ and you'll see that you're on a T intersection (the HQ being at the top of the T). To the left of the T is a supermarket. Next to the that is an alley (not the road) that goes down to a tall tower-looking structure. That is the sorcery (take note that you can only enter it once you have reached a certain point in the game. The Lohoreh one can be found by locating a catwalk around the center of the district. The catwalk seems unstable and the man at the counter should be familiar. Q - Why is it that when I go to see Fuan with Den's sneak, I can't offer to give it to him? A - Go into your inventory and use the item when you are in front of him. This should work. If not this doesn't work, simply talk to him. If that didn't work then you probably don't have Den's sneak in you inventory. Q - Is there an easier way to travel other than the sliders? A - What more do you want? The sliders take you straight to the place. If you're still having trouble finding your way to a specific location, there is a map in your sneak that works ONLY outside in the districs Anekbah, Qalisar, Jaunpar and Lahoreh. It should be a lot easier to navigate that way if you're on foot. Q - There's a door to another district in Lahoreh. How do I open it? A - You can't. Simple. It was actually put there because another district was going to be placed in the game but due to budget and time, the producers had to give it the cut. It would appear that many of these things were going to happen such as sliders in Lahoreh (floating on the water), more locations and some new characters. Pity. Q - There are many spells in books you haven't mentioned. A - Not a question but... those spells are impossible to make as you can't find those ingredients and even if you did, what would you use them on? Honestly, if you used the love spell on someone, 9 times out of 10 says they'll be a demon or worse... a relative! Q - What are transcans? A - Transcans are the equivalent to our television and dvd/vcr. Use the action button on it to watch a few channels. At one point of the game, you get a "Transcan Cassette". All you have to do is use the cassette on the transcan. Q - Can I get other cassettes? A - Yes, but they are not video footage but instead, some more great David Bowie. Here is a list of them: New Angels of... 90 Dreamers 90 Seven 90 Survive 100 We All Go Through 100 Something 110 Thursday's Child 110 Pretty Things 120 The numbers next to them are the prices they are from numerous bookstores. You can also get these from Saya's(?) appartment in Lahoreh where she has the entire collection there for you for free! Q - Where can I get a copy of this game? A - Don't expect to find this at a shop. Ebay would be your best bet and you wouldn't regret it. Q - There's a rumor going around that there's a sequel. Is this true? A - Indeed there is! It is going to be called something like Omikron 2: Karma apparently. I don't know too much but it's going to released for the PS3 and currently has next to no information about it given out. Just don't go crying if it is never released. Go to the GameFAQs message board if you want some more information (if there is any). Q - Do you have any idea what the new game's about? A - I suspect that it'll be a whole new thing. It might be before the ice age or after it, it may be during the cobalt wars or just after the first one finishes. You might pick up the clues like in the end: *SPOILERS* Dakobah and Boz says you can come back whenever you like and Jenna says that you and her could have loved eachother.*END SPOILERS* But then again, I might be 100% off. Q - How long should it take to beat this game? A - It's not a super lengthy game but it is at least 20 hours. It depends on a few things: weather you've played this game before, if you're going to use the walkthrough to answer the puzzles and also if you're going to do the sidequests. Otherwise you could spend up to perhaps 80 hours. My quickest run was 2 hours and 20 mins (approximately). Q - I have ran out of magic rings, is there a place I can buy them? A - No, but honestly, why would you? Throughout the game there are countless amounts of them. My tip: only use them when you are leaving the computer but do this before you finish Den's apartment. Why? Because its impossible to die. I honestly don't see how you can run out as I finish the game with way over 50+. Q - Where can I get easy money from? A - Selling items from your terminal is always the best place to start or reincarnate into someone and raid their apartment/items. But there's also the tournament in Qalisar. See the side quests section for details. Q - Is it possible to stay as Kay'l for the entire game? A - I highly doubt it. The furthest I believe it is possible to get is right before entering Jaunpar. Besides, there are numerous people you NEED to be throughout the game to bypass the objectives. Q - What's all those numbers at the bottom of the sneak? A - That's the Omikronian time! It looks a little like this: 41 Andar 7216. The 41 is the day, Andar is the name of the month and 7216 is the year (or "cycles" as they call it on the game). There are a lot of different months like Nadim and Xenep. You may also notice that these times are found on newspapers to establish when they were released. Q - How do you open the cupboard in Gandhar's office? A - I was hoping you would know as I have no idea. I think I managed to do it once but I have no memory of how nor what was in it. All well... I can't imagine anything too spectacular would be in there, assuming it is openable. Q - (PC) I lost my file when I uninstalled it... is there anyway to get it back? A - Probably not, but you can keep your file if you're going to uninstall it. if you go into the Omikron directory (usually C:/Programs) and look in the "Iam" folder, you'll see a file there called "game". Save that on your computer somewhere (you can compress it to under 1MB) and replace it when you install the game again and your file should still be there. Q - Why does Fu'an hide his identity when you first meet him? Is he hiding something? A - To my knoledge, this wasn't actually put in the game, but there was a point where you go back to him. He's a member of the awakend and he's scared because you're a cop and cops don't like the awakend. Q - How EXACTLY did you find these "Removed Things"? A - If you have the PC version, it's pretty simple. Just go to the nomad sould folder (usually C:\Program Files\Eidos Interactive\Quantic Dream\The Nomad Soul) and go into the IAM or Iam folder. There should be an unrecognised file named 'DIALOG'. If you double click on this and open it with NOTEPAD (might take a while), it'll show a bunch of crap and all the dialogue in the game. If you take and short whizz through it all, you'll find some interesting stuff. Q - What's up with that unnamed waver gun in the Anekbah supermarket? A - Dunno, but it looks like it wan't meant to be there, or at least it wasn't meant to be reachable. I suggest you don't pick it up as you can't get rid of the bloody thing. Q - What happened to the playstation 1/2 versions? A - Here's the story: Quantic Dream (the developers) originally intended for the game to be released on PC and PS1 (never PS2). Eidos (the publisher) thought that the playstation 1 market was about to drop out so they decided to publish to the dreamcast instread of the PS1 as they thought that would have a better future. Unfortunately, Eidos screwed up and, as David Cage (director, designer and CEO) has said in an interview, it was his "biggest regret". So Omikron could have been heaps more popular if Eidos used their brain a little more. But, then again, Omikron was a unique game, making publishing it a real gamble. At least they did better with the Indigo Prophecy. ############################################ #!$@@ Removed Things ############################################ WARNING: There most definitely will be spoilers in this section. You'd best beat the game before venturing here. I have managed to find a list of things that didn't quite make it to the final version of the game (or at least, I can't find it in the game). All of this stuff is in the .../IAM/DIALOG file (for PC versions). What it is, is all the dialog in the game!! I wen't through it and found a bunch of stuff I don't recognise. For a brief summary, skip to the end. So, anyways, here's the stuff I found: NOTE: The way things are written out contain your options and other crap--but you can still make sense of it. --- Hello, dear customer! I have the pleasure of announcing to you that you have been selected to buy our magnificent molecular vacuum cleaner: Muz! Thanks, but I'm not interested. Cool! I beg your pardon? The Muz molecular vacuum cleaner is the housewife's dream! It pulverizes acarids, disintegrates dust and purifies the air in your apartment! All that for the ludicrous sum of 10000 seteks! Where should I deliver this magnificent machine? Thanks, but I'm not interested. You again? What do you want? Err.. nothing... I just came by to say hello... Just one question: Where do you buy the equipment for your missions? If you would be kind enough to leave me alone, I wish to meditate in peace. You can buy weapons in any arms store. You can get more "special" material from Fu-an, an Awakened who runs a store in Qalisar. Thanks for the tip. Zang? The name rings a bell... 1214 Zane'm Street. Find the public Videophone and wait. Barniva, the Propaganda Minister and Sator, the Industry Minister are at present having lunch in a restaurant in Zone 3. Boz has ordered us to kill them. Jenna is already there. She is waiting for you near the restaurant. Go there and she'll tell you how to proceed. Good luck. It's a good thing I happened to be passing, huh? Now listen, follow me. There should be a slider waiting to take us away. OK. Where is that damn slider? It should have been here ages ago. I've got some good news and some bad news for you. The bad news is that on my signal, my Mecaguards will transform you into confetti. The good news is that if you give me the Book of Nout you will die quickly. Well, what will is it to be? If you want the Book you'll have to come and get it. Sorry, I can't give you the Book. I was hoping you'd say that. Kill him! Rust Mecadog! Softie! Amoeba! (Yes I know, my level of insults is going downhill fast... ) I managed to steal the jewel from the museum. Well done! I got some information about the Third Jewel. I'm afraid it's bad news. It's on the 66th subterranean floor of the Khonsu trust HQ. What do you know about the Khonsu HQ? It's the trust's central base. That's where they have their barracks and their most secret labs. It's said that the HQ has 99 subterranean levels. But no one has ever come back to tell the tale... How can I get into the HQ? Let's go and see Dakobah. He mus know a way to get into the HQ. By the way I'd appreciate it if you could stop wetting my slider. Sorry... One of them is in the museum, but I don't know how to steal it. Do you know where I can find the missing rune fragment for the jewel at Yrmali square? Hi Jenna. I've found the way into the Catacombs. I must bring together the Three Jewels of Vyagrimukha. I can think of only one person to help you: Krill. He 's the best for that kind of operation. I'll tell him you need his help. Meet us outside the Yoshida restaurant in Zone 4. We'll be waiting for you. All right. I'll be there. Do you know where I can find the missing rune fragrment for the jewel at Yrmali square? I don't know. I have to steal the jewel from the museum. Do you know how I can get in there? See you later, Jenna. I can think of only one person to help you: Krill. He 's the best for that kind of operation. I'll tell him you need his help. Meet us outside the Trazim restaurant in Zone 4. We'll be waiting for you. All righ, I'll be there. Jenna has told me about your project. The place you wish to enter is heavily guarded. If you follow my plan to the letter, you may stand a chance of succeeding. What must I do? What's your plan? Go to the museum and stake it out. Explore every room and memorize the smallest detail. When you've located the room with the jewel, call me at this number: 810 247. Then I'll give you more instructions. OK. I'm on my way. I wanted to thank you for your help in the museum. See? I didn't need your help to steal the jewel from the museum. I think I was wrong about you. Your bravery has won my respect. I would be proud to fight by your side one day, brother. See you later, Krill. Er, Krill, I haven't got a clue what to do now... Please forgive me Krill. I behaved like an idiot. Krill, I need your help. Forget it. Our cause is all that counts. What have you done since the last time we spoke? I managed to get the museum emptied. What do I do now? I didn't manage to get the museum emptied. You must find a way into the museum. Once you're inside, deactivate the main elevator to stop reinforcements coming. When you've done that, call me for more instructions. Call me when you've got the museum evacuated. Then I'll give you more instructions. Roger. I thought you were able to manage alone? I'm sorry. I behaved badly. I was stupid. Please forgive me. The jewel is nothing but a damned hologram! I hadn't thought of that. They must have placed the real jewel in a safe place. It's bound to be somewhere in the museum. Now I can't help you any more. It's up to you to act. I'll continue to explore the museum. I'll find it in the end. Good luck. I've done it. I've found the jewel room. Perfect. Now you must find a way to get them to evacuate the museum. Once all the visitors are out, we'll be free to operate. Evacuate the museum? How am I supposed to do that? Couldn't we just wait until it closes? No need to evacuate the place. I'd rather go for the jackpot, waste the guards and get the jewel. That's your problem. It's up to you to find a solution. All right. I'll find some way to get them to evacuate the museum. No need to evacuate the place. I'd rather just go straight for the jewel. Call me back when you've got the museum evacuated. Then I'll give you some more instructions. OK The museum never closes. It's always open. You won't get very far if you're going to question my orders. I don't have to take orders from you. I can see you have a talent for strategy... If you want to kill yourself, don't expect me to help you. I don't need your help! I'm sorry. I'll follow your plan. Very well. Sort it out for yourself! I've done it. I managed to evacuate the museum. Now what do I do? Find the Surveillance Room and unlock the door leading to the jewel. Very well. Taar Law says that only the strongest shall survive. Our species improves from one generation to the next. When humanity has reached perfection it will be able to awaken the Supreme Being from his sleep. He will cease to dream and we shall all disappear into the Void. To be numbered among the strong and reach this stage of perfection, we must train for combat. Who shall have the honor of dying today and improving our caste? Harimal! You! Welcome to Fu-an's! What can I do for you? Hey, don't I know you? Hi, Fu-an. You came here before you became Awakened. You already known to us. Everyone is talking about the Nomad Soul. I didn't know you were one of the Awakened. I'd like to buy something. Many people are Awakened but nobody knows it. Best way to stay alive. What can I do for you, Nomad Soul? I'd like to buy something. So, what you like to buy? Here. Do you want more? Yes. No. Please, come back and see me if you want more. See you later. Not enough money. Take something else or come back later. I'll choose something else. That's too bad. I'll come back later. See you later. What are Kerils? Kerils be shells, in water. Why don't you go get later. Where water be, Kerils be. You go if you no fear wet. Why don't you go for them yourself? See you later. Azkeels no like water. Wait for current to bring Kerils to land. So, Kerils very rare! See you later. Say, do you know where I could find some Shams? What are Kerils? Me not know. What are Kerils? Kerils be shells in water. Why don't you dive for soon. Where water be, Kerils be. You go if you no fear wet. Why don't you go for them yourself? See you later. Azkeels no like water. Wait for current to bring Kerils to land. So, Kerils very rare! See you later. You not enter, Fodo. Soyinka busy. I'm not Fodo, I'm the Nomad Soul. Let me in! Of course, and I be Legatee Reshev. Now you move along before I get angry. But I tell you, I am the Nomad Soul. I'm just using Fodo's body. Let me through! I must see Soyinka immediately. I have no time to explain, you must let me through! Me no time for jokes. You move on if you want no problems. I think I'd better leave him be... That food you're cooking smells good. Roast chapala. Only five Kerils for you. Not expensive, good to eat! <Give five Kerils> No thanks. I might come by later. Sorry. I haven't got five Kerils. I might come by later on. Here's chapala. Eat. Stranger need strength. Thanks. Come, stranger. Here be the best goods in Mayerem! Clothes for Sham and your woman, food, herbs. Everything Azkeel needs you find here! ... a Niam omelette ... Jirn mousse green mushrooms, yellow mushrooms, luxury finery for my women. <Not enough Kerils> <Purchase ok> I'd like: Not enough Kerils for that. You return later when you rich. Here's the goods. You want something else? No thanks. Wh... what happen to Fodo? It's too long to explain, Fodo. All I can say is that you've been a great help to me. Me help Nomad Soul? Me glad, but not remember... Now rest. You need it a lot. Farewell, Fodo, and thanks. Farewell, Nomad Soul. So you managed to convince the magicians to give you the Book of Nout... You have accomplished a great miracle. Now I believe you are the Nomad Soul that the legend predicts. Everything must be done to help you accomplish your destiny. I'll take the Book of Nout back to the surface. Tell me more about your father, Matanboukous. Guard the book as if it were you life. He who possesses it has more power than a god. I dread to think of what would happen if it fell into the wrong hands. Can I count on your people to help me? Fear not. None but I shall touch the Book. All the Azkeels are now ready to help you. Great events are in the offing and great changes shall soon be seen. Now, you can count on the Azkeel people to help you to combat the enemies of Omikron. How can I contact the Azkeels when the time comes? Take this Talisman: when your comrades are ready to begin the revolution, activate the talisman. Then the Azkeels will arise from the Catacombs to fight by your side. Can you help me to get back to the surface? The passageway to the surface is open again. Make haste, Nomad Soul. Time is of the essence. Thanks for your help, Soyinka. You take all our hopes with you. I pray that Vyagrimukha may look with favor on your work. When all the magicians of Omikron went to fight Astaroth, my father stayed behind for I was then just a baby and he could not abandon me. The other sorcerers put their souls into the sword, Barkaya'l, and disappeared forever. All his life my father suffered because people believed he was a coward. My greatest wish is that one day I will be able to restore his honor. I'll take the Book of Nout back to the surface. Who dares steal water from the Fountain of the Dead? I am the Nomad Soul. Who are you? Are you a spectre? My soul comes from another dimension. I am looking for the Book of Nout. I am no longer anybody, a mere shade, a puff of wind. A long time ago I was Xael, the magician, high priest of Batschawa. I have come to look for the Book of Nout. Do you know where I can find it? Thousands of men have come before you in search of the Book of Nout and its power. Not one of them has left here alive. Why should you be any different? I do not come in search of power. I need the Book of Nout in order to save my soul and kill Astaroth. Behold the three great temples around this square. Each is the dwelling place of one of the three great masters of the Ancient Art. Go to see them one after the other and present your request to them. If you manage to convince them of your honesty and courage they may consent to give you the Book of Nout. I'll try to convince them. See you later. Well, well, what have we here? It would appear to be a mortal. We don't see many of them around here. Opportunities for a bit of fun are so rare... To what do I owe the honor of this visit? I have come to look for the Book of Nout. I ask you to let me to take it away. The Book of Nout? Is that so? Let me see if there isn't a little favor you can do me in exchange. I would like to be able to continue to study magic. Unfortunately my status as spectre does not allow me to manipulate objects. At best, I can only go through them. There are, however, three works that I would like to consult: "Ars Magica", volumes 1, 2 and 3. You will find them in the tombs of Hamesataga'n. Bring them to me and I will give you my consent. All right. I'll bring you back the three "Ars Magica" works. You succeeded! My beloved wife is back! I shall no longer be alone for all eternity! A thousand blessings on you, mortal. You have proved yourself brave and intelligent. You deserve to have your request granted. I grant you the right to lay hands on the Book of Nout. But in order to acquire the Book, you must first have the consent of the two other main magicians of Hamestaga'n. Go, mortal. May Vyagrimukha protect you. This is not the skull of my late wife. Are you trying to fool me? Go away and don't come back until you have succeeded! I'm happy to see you again, Nomad Soul. I asked you to come because the situation is serious. Reshev is now in possession of the Book of Nout. If we don't stop him he will use the Book's magic and nothing, not even Astaroth, will be able to stop him. What can we do? The only way to prevent Reshev from using the power of the Book is to start the revolution immediately. Our men are mustering our forces and preparing for the uprising. In a few hours we will be ready to attack Omikron. What if the revolution fails? Is there no alternative? What do you want me to do? The time has come for a great confrontation. Either we succeed and open the eyes of all humans, or we fail and become the slaves of Reshev and the demons for all eternity. What do you want me to do? This is an historic moment for all Omikronians. I want you to fight by our side for the most glorious of victories, freedom. Help us to free Dakobah and recover the Book of Nout. I'd like to help you but I must find Astaroth. I know how vital it is for you to find Astaroth but if Reshev uses the Book of Nout he will become an all-powerful god and not a single soul in the whole universe will be beyond his power. What's your plan? What must I do? Have you learned anything new about where Astaroth is hiding? All right. I'll help you. After all, it's my fault that the Book of Nout is again in human hands. Make your way into the Palace of Ix and open the main doors to our troops. Once inside the Palace, find Reshev's private apartments and take back the Book of Nout. You can count on me. What do you know about the Palace of Ix? I have faith in you. You have already proved your worth. While you're attacking the Palace we'll take over the barracks and the Central Bank of the city. All Omikron will rise up against Reshev and the trusts. Here's a Talisman given to me by the Goddess-Queen, Soyinka. When you activate it the Azkeels will come out of the Catacombs and fight by your side. With the people of the Catacombs we are sure to win. All that remains now is for you to accomplish your destiny. You hold the future of Omikron in your hands. I pray that Vyagrimukha may come to help you. See you later, Boz. We're going to win. By the way, where is the Palace? The Palace of Ix is in Nagataneh, the zone reserved for the rulers of Omikron. You'll need a pass to get into to it. See you later, Boz. We're going to win. The Palace is the residence of Ix, the computer, and Legatee Reshev. It's the most highly protected place in Omikron. It is guarded by specially trained elite troops. The Palace is also the symbol of the power of Ix. If we can bring it down, the success of the revolution is assured. Where is the Palace? The Palace of Ix is in Nagataneh, the zone reserved for the rulers of Omikron. You'll need a pass to get in. You can count on me, Boz. I will not fail! No, unfortunately. I have sent out requests to all the databases but I have learned nothing. Wherever he is, Astaroth hasn't left a trace behind him. How do you plan to free Dakobah? So, Kay'l. What's up? Nothing special. Just routine. And you? I've been to the deposit room, in the Archives. All the money seized by the Omikron Police is stored in there. It's a lot of lolly. And with all those Mecagards patrolling, the hoard is quite safe. Sure. Bye. What is this object? It's a Lighter. It's used to light a fire. All right, I accept your gift. I will give you the basis to survive in the Lands of Ice. You may have a slight chance of not dying right away if you follow my advice. Speak, I'm listening. Survival depends on two things: a Sham and a weapon. You seem to have neither. My herd of Shams is grazing not far from here. Walk into the wind when you leave the hut, and you will find them. Take the Sham of your choice. That is my gift to you. Thanks. I'll look after your Sham.. What brings you here, stranger? At last, someone who will speak to me... Why do you call me stranger? Who are you? Kamijis say that the number of words a man may speak in his life is counted. Each time you open your mouth, some heat leaves your body. And here, heat is life... Who are you? Why do you call me "stranger"? The snow covers things, but under it, things stay what they are. You look like a Kamiji but you are not one of us. Why do the others refuse to speak to me? Who are you? Kamijis have no names. Here, there are men on one side and ice on the other. A man's name has no importance when his limbs are numb with cold. Why is it so cold here? Why do the other Kamijis refuse to talk to me? Why did you call me stranger? Kamijis say that the number of words a man may speak in his life is counted. Each time you open your mouth, some heat leaves your body. And here, heat is life... Why do you call me "stranger". I am a Kamiji like you. Why is it so cold here? Where do Kamijis come from? The snow covers things but under it, things stay what they are. You look like a Kamiji but you are not one of us. Why do the other Kamijis refuse to speak to me? I'm a little lost. What is this place? Why is it so cold here? Kamijis say that the number of words a man may speak in his life is counted. Each time you open your mouth, some heat leaves your body. And here, heat is life ... Why is it so cold here? Where do Kamijis come from? What is the name of these lands? This region is called "Kapaleel", the Land of Ice. You are in one of the huts we built to feed our Shams and warm ouselves in. But tell me what you hope to find in this desolate land. I am going to Mahahaleel. You want to go to Mahahaleel, the Land of Spectres? You really are as crazy as you look. Do you expect to cross the Land of Ice without even a Sham or a weapon? The Krubors will crunch your skull before ten snow flakes can land before you... Tell me about Mahaleel. Help me, tell me what I must do in order to reach Mahahaleel. Mahahaleel is an accursed land. The spirits of the dead live there. No Kamiji in his right mind would go there. It is a "Wati'n" place, a sacred place. You will find nothing but death there. Nevertheless I must go there. Help me. Why should I help you? I lose my heat by speaking to you. What can you offer me in return? Wait, I'll find something. Several million years ago Rad'han, our only sun, died. Since then everything has been dark and frozen. The other men took refuge under crystal domes to protect themselves against the cold. We, the Kamijis, preferred to stay free and continue to live as we did before. Where are we, exactly? What is the name of these lands? What kind of man are you to be able to command fire? Now will you help me? You have very great powers. I don't know whether you are a man or an evil spirit, but I prefer to be your friend. Well, then, help me. I want to go to Mahahaleel. What must I do? In order to survive you need two things: a Sham and your weapon. You seem to have neither. My herd of Shams is grazing not far from here. Walk into the wind when you leave the hut, and you will find them. Take the Sham of your choice. That is my gift to you. Thanks I'll look after it.. It's a good thing I was passing this way. The Krubor was about to reduce you to pulp. Keep your heat, brother, or the snow will be your tomb. I found you half dead from cold. You were lucky I noticed your Sham's body, otherwise you were lost. Be more careful if want to keep your heat. I was hunting nearby when I saw the wild beasts attacking you. I managed to scare them away and get you to this hut. Be more careful next time, brother. Kapaleel is not a good place to die. Hello. ... Pretty chilly, huh? What's your name? Nice weather for the time of year. ... Excuse me, do you understand what I'm saying? Would you mind answering? Are you deaf or what? ... All right, then, see you around. I am glad to see you still alive. I no longer felt the echo of your soul. I feared the spectres of Hamestaga'n might have killed you. But now I feel a new strength in you, a spiritual power I have not felt in a very long time. You have seen it, touched it! You have set eyes on the Book of Nout.! I've done better than that, Dakobah. I have brought back the Book of Nout. You have brought back the Book of Nout to the surface of Phaenon? But why did you do that? Don't you realise what would happen if Reshev managed to get his hands on it? The Guardians of the Book told me it was time for it to return to the surface. It needs to witness the great events that await their time. Yes, I believe great events are about to take place. The spirits of the dead say strange and incomprehensible things... Did the Book of Nout teach you how to kill Astaroth? Yes. I must find the sword Barkaya'l in a place called "Mahahaleel". Mahahaleel, "The Territory of the Damned" in the Masa'u language. It is well beyond the dome of Omikron, only spectres and Krubors dwell there. I'd prefer if you don't take the Book of Nout with you. It will be of no use to you in those frozen wastes. Besides, if anything happened to you it would be lost with you. You're right. I leave the Book of Nout in your keeping, Dakobah. I shall hide it in a safe place until your return. May Vyagrimukha guide your steps, Nomad Soul. See you later, Dakobah. Help me, stranger. I beg you, please help me... What happened to you? Are you wounded? What happened? A Krubor attacked me while I was sleeping. I have already lost a lot of blood. If you don't help me, I will die. What should I do? Find a Moy'eb tree and bring me back the sap. Be swift, the heat leaves my body with every second that passes... Hang on, old man. I'll get you the sap from the Moy'eb! Why should I lose heat to help you. What will you give me in exchange? Sorry, old man. I prefer to keep my heat for more important things. Your soul is pure, stranger. I knew it as soon as I saw you. I place my life in your hands. Quickly, bring me the sap from the Moy'eb and save my heat. Your heart is as cold as ice. I have no riches, if that's what you're interested in. I am only a poor Kamiji. Can you guide me to Mahahaleel? I will guide you to the Land of the Spectres. Go, now. The cold is already invading my limbs... Your heart is colder than ice. Go your way and let me die in peace. Hello, Nomad Soul. My probability predictions announced your arrival. Sorry to present you with the spectacle of blood, but I had to get rid of that puppet, Reshev. Are you Ix? How could you know I was going to come? Why did you kill Reshev? That is the name given to me by my creators. I was designed 2000 years ago by your time and my only purpose was to save the human race. Why does Reshev's death give you so much pleasure? How could you know I was going to come? By cross-matching all my data I can predict events. I have known the precise time of your arrival for 1700 years. I couldn't wait for you to come. You create a strange disturbance in my predictions, as if your destiny were not yet established. I have come for the Book of Nout. Have you got it? You're Ix, aren't you? Why did you kill Reshev? The Book of Nout... The destiny of thousands of galaxies concentrated in one simple object... No, I don't have the Book. Can you predict what will happen next? In all probability I shall kill you and Astaroth will become the new master of the universe. I'm sorry but I'll have to erase you. I thought your mission was to save the human race? That's correct, but Astaroth is the best choice for the future of the race. Under his reign humanity will live in slavery but its survival is guaranteed for at least 999 999 years. The field of variables is far too unstable for me to be able to determine the future of humans in any other hypothesis. Human beings will never live under the yoke of demons! Better to die free than live as a slave! Your predictions are wrong! I'm going to kill Astaroth! Your opinion is of no value. It is based only on emotion. Mine is based on calculations. Believe me, for the good of humanity, you must die. Don't count on it! That is the name given to me by my creators. I was designed 2000 years ago by your time and my only purpose was to save the human race. I have come for the Book of Nout. Do you have it? Reshev was only a puppet, a grotesque and idiotic marionnette. I used him for a while but he no longer has any place in my plans. He had no function in the events that are currently taking place. I have come for the Book of Nout. Do you have it? Reshev was only a puppet, a grotesque and idiotic marionnette. I used him for a while but he no longer has any role in my plans. He had no function in the events that are currently taking place. Are you Ix? How could you know I was going to come? That is the name given to me by my creators. I was designed 2000 years ago by your time and my only purpose was to save the human race. How could you know I was going to come? What? Who are you? I am Ix. That's impossible. I'm Ix. You can't be me. I'm Ix. I was designed for the sole purpose of saving the human race. No, I am Ix. You're using a trick to imitate my appearance. You're nothing but a vulgar reflection of my image. No. I am Ix and YOU are a reflection of my image. Nooooo! Welcome to my humble abode! Don't be shy. Come in, come in... Are you Reshev? What have you done with Dakobah and the Book of Nout? I do indeed have the great honor of being Legatee Angus Reshev. And you, I suppose you must be the Nomad Soul. I am pleased to meet you at last. I wanted to thank you personally for bringing back the Book of Nout from the Catacombs. I had been looking for it for an eternity without success. Then you appear from nowhere and you find it first try. Accept my congratulations. You won't keep it for long. I have come for the Book. Where is Dakobah? I'm very much afraid that I won't be able to let you have it. It so happens that I have other plans for it... You'd better hand it over. The Palace is surrounded. You don't stand a chance. Where is Dakobah? Why did you capture Dakobah? Dakobah is my guest. I need his enlightenment to help me to use the Book of Nout. Surrender! The Palace is being stormed. You have no chance of escaping! Surrender? Do you realize you're speaking to a god? I have the Book of Nout, you poor idiot! Nothing and no one can resist my power! I shall be the new master of the universe. I shall be invincible, invulnerable, eternal... I think you're crazy. Perhaps... In any case my plans for the future are no concern of yours. You will soon be nothing more than a heap of cold flesh. Enough of this! Where is Dakobah? What have you done with the Book of Nout? Allow me to introduce my friend Xaar. Pay no heed to his physical appearance, the poor fellow is the result of some random genetic experiments. Xaar loves delicacies. Fresh brain is his favorite. As he doesn't like to think very much, my mind directs him. Farewell, Nomad Soul. And thanks again... I knew you would come, Nomad Soul. Where's Reshev? He's just gone down in the elevator. Hurry, there may yet be time to wrest the Book of Nout from his grasp. I'm on my way. It's a good thing I was passing, huh? What happened here? Where is Dakobah? I've never been so pleased to see you, Krill. Mashroud and his men attacked us by surprise. They captured Dakobah and took him with them. The others managed to escape. What about Jenna? Don't worry about Jenna, she's in a safe place. She told me to stay here to help you. She knew you would come back and fall into the trap. What must I do now? You must go to get your orders from the Virtual Being. He said he wanted to see you urgently. OK. I'm on my way. I knew we would meet again. Our last encounter left us with unfinished business... You humiliated me. Yes, you, you miserable cur, you caused Criff Mashroud to fail! It is high time to remedy that insult. Where is Dakobah? What have you done with the other Awakened? Forget them. You will soon be dead. I am going to kill you with my bare hands in order to prolong the pleasure. I want to look into your eyes as life leaves your body. I'm beginning to think you don't like me... You'll never have that pleasure. I have waited a long time for this moment. Prepare to die! --- So, in conclusion, from this information, we know a few things: 1) You could get to Mahahaleel and somethings happend there 2) Dakobah was kidnapped and you had to go rescue him 3) You could get arrested and fined and also sent to the thought controllers and recieve the death penalty 4) You could buy a Muz molecular vacuum cleaner (whatever use that was for) 5) There was something called a videophone (this wasn't the sneak) 6) There were many more missions with the Awakend, some involving assasinations 7) The scene where you and Jenna were captured by Criff Mashroud was extended 8) One of the Vyugrimukha jewels were in a museum, the other was in Khonsu's HQ which was heavily guarded. The final one was at the bottom of Yrmali Well 9) You might've been able to call people using numbers (videophone?) 10) You could move up rank(s) in the police HQ 11) The Azkeels had their own currency call Kerils, which were shells in the water that you called use to purchase items there 12) You used Fodo for something else and then you returned his body. 13) That fountain in the city of the dead was used for something. 14) Reshev was scared of Astroth and was going to use the Book of Nout on him 15) You could actually go to Reshev's private apartments 16) Reshev lived in pyramid with Ix 17) The Palace of Ix is in Nagataneh, and Boz gave you a pass to get there 18) You could get a lighter 19) Reshev got his hands on the Book of Nout 20) A Moy'eb tree had saps on them that could heal wounds 21) Ix killed Reshev 22) Ix knew everything about the nomad soul since 1700 years ago 23) (not sure about this) You reincarnate into Ix 24) You fought Criff Mashroud 25) You and Krill helped get Dakobah back 26) The Azkeels helped fight a big battle (with Soyinka's help) I just looked through briefly for this information. Anyways, this is my theory in what happened in the end of the original version (this may be incorrect): You go down to the city of the dead and get the Book of Nout and return to the surface. You then go to Nagataneh (another district) and go into the Pyramid of Ix where you meet Ix and Reshev. You kill Ix, Ix kills Reshev and you then go down and kill Astroth. But, perhaps Ix killed Reshev in the version we've all played as you don't see him die (it's assumed demons killed him). I don't know why they ditched these things, they sounded really godly! Maybe they're saving this (or something similar) for the sequel? I doubt it, but you never know. ############################################ %!@#$ Credits ############################################ I'd very, very much like to thank the follwing people for their contributions to this FAQ: Alexandros Ntzintsvelasvili for the alternate way into the archives room and also numerous other things throughout the FAQ "DarkSecret" for all the images of the puzzles and little bits I've missed out on. It sure beats my crappy ASCII. If you have any questions, I highly recommend the gamefaq () and gamespot () messgage boards. ___ ___ _ _ ___ _ __ __ | __||_ _|| | | | || || \ | \(c) |__ | | | | | | | _|| || || | |___| |_| |___|@|_| |_||__/ |__/ ________________________________________________________________________ Stu Pidd Copyright(c) View in:
https://www.gamefaqs.com/pc/193073-omikron-the-nomad-soul/faqs/43197
CC-MAIN-2017-43
en
refinedweb
The Best of Both Worlds: Combining XPath with the XmlReader Dare Obasanjo and Howard Hao Microsoft Corporation May 5, 2004 Download the XPathReader.exe sample file. Summary:. (11 printed pages)(11 printed pages) Introduction About a year ago, I read an article by Tim Bray entitled XML Is Too Hard For Programmers, in which he complained about the cumbersome nature of push model APIs, like SAX, for dealing with large streams of XML. Tim Bray describes an ideal programming model for XML as one that is similar to working with text in Perl, where one can process streams of text by matching items of interest using regular expressions. Below is an excerpt from Tim Bray's article showing his idealized programming model for XML streams. Tim Bray isn't the only one who yearned for this XML processing model. For the past few years, various people I work with have been working towards creating a programming model for processing streams of XML documents in a manner analogous to processing text streams with regular expressions. This article describes the culmination of this work—the XPathReader. Finding Loaned Books: XmlTextReader Solution To give a clear indication of the productivity gains from the XPathReader compared to existing XML-processing techniques with the XmlReader, I have created an example program that performs basic XML processing tasks. The following sample document describes a number of books I own and whether they are currently loaned out to friends. > The following code sample displays the names of the persons that I've loaned books to, as well as which books I have loaned to them. The code samples should produce the following output. Sanjay was loaned XML Bible by Elliotte Rusty Harold Sander was loaned Definitive XML Schema by Priscilla Walmsley XmlTextReader Sample: using System; using System.IO; using System.Xml; public class Test{ static void Main(string[] args) { try{ XmlTextReader reader = new XmlTextReader("books.xml"); ProcessBooks(reader); }catch(XmlException xe){ Console.WriteLine("XML Parsing Error: " + xe); }catch(IOException ioe){ Console.WriteLine("File I/O Error: " + ioe); } } static void ProcessBooks(XmlTextReader reader) { while(reader.Read()){ //keep reading until we see a book element if(reader.Name.Equals("book") && (reader.NodeType == XmlNodeType.Element)){ if(reader.GetAttribute("on-loan") != null){ ProcessBorrowedBook(reader); }else { reader.Skip(); } } } } static void ProcessBorrowedBook(XmlTextReader reader){ Console.Write("{0} was loaned ", reader.GetAttribute("on-loan")); while(reader.NodeType != XmlNodeType.EndElement && reader.Read()){ if (reader.NodeType == XmlNodeType.Element) { switch (reader.Name) { case "title": Console.Write(reader.ReadString()); reader.Read(); // consume end tag break; case "author": Console.Write(" by "); Console.Write(reader.ReadString()); reader.Read(); // consume end tag break; } } } Console.WriteLine(); } } Using XPath as Regular Expressions for XML The first thing we need is a way to perform pattern matching for nodes of interest in an XML stream in the same way we can with regular expressions for strings in a text stream. XML already has a language for matching nodes called XPath, which can serve as a good starting point. There is an issue with XPath that prevents it from being used without modification as the mechanism for matching nodes in large XML documents in a streaming manner. XPath assumes the entire XML document is stored in memory and allows operations that would require multiple passes over the document, or at least would require large portions of the XML document be stored in memory. The following XPath expression is an example of such a query: The query returns the publisher attribute of a book element if it has a child author element whose value is 'Frederick Brooks'. This query cannot be executed without caching more data than is typical for a streaming parser because the publisher attribute has to be cached when seen on the book element until the child author element has been seen and its value examined. Depending on the size of the document and the query, the amount of data that has to be cached in memory could be quite large and figuring out what to cache could be quite complex. To avoid having to deal with these problems a co-worker, Arpan Desai, came up with a proposal for a subset of XPath that is suitable for forward-only processing of XML. This subset of XPath is described in his paper An Introduction to Sequential XPath. There are several changes to the standard XPath grammar in Sequential XPath, but the biggest change is the restriction in the usage of axes. Now, certain axes are valid in the predicate, while other axes are valid only in the non-predicate portion of the Sequential XPath expression. We have classified the axes into three different groups: - Common Axes: provide information about the context of the current node. They can be applied anywhere in the Sequential XPath expression. - Forward Axes: provide information about nodes ahead of the context node in the stream. They can only be applied in the location path context because they are looking for 'future' nodes. An example is "child.'' We can successfully select the child nodes of a given path if "child" is in the path. However, if "child" were in the predicate, we would not be able to select the current node because we cannot look ahead to its children to test the predicate expression and then rewind the reader to select the node. - Reverse Axis: are essentially the opposite of Forward Axes. An example would be "parent." If parent were in the location path, we would want to return the parent of a specific node. Once again, because we cannot go backward, we cannot support these axes in the location path or in predicates. Here is a table showing the XPath axes supported by the XPathReader: There are some XPath functions not supported by the XPathReader due to the fact that they also require caching large parts of the XML document in memory or the ability to backtrack the XML parser. Functions such as count() and sum() are not supported at all, while functions such as local-name() and namespace-uri() only work when no arguments are specified (that is, only when asking for these properties on the context node). The following table lists the XPath functions that are either unsupported or have had some of their functionality limited in the XPathReader. The final major restriction made to XPath in the XPathReader is to disallow testing for the values of elements or text nodes. The XPathReader does not support the following XPath expression: The above query selects the book element if its string contains the text 'Frederick Brooks'. To be able to support such queries, large parts of the document may have to be cached and the XPathReader would need to be able to rewind its state. However, testing values of attributes, comments, or processing instructions is supported. The following XPath expression is supported by the XPathReader: The subset of XPath described above is sufficiently reduced as to enable one to provide a memory-efficient, streaming XPath-based XML parser that is analogous to regular expressions matching for streams of text. A First Look at the XPathReader The XPathReader is a subclass of the XmlReader that supports the subset of XPath described in the previous section. The XPathReader can be used to process files loaded from a URL or can be layered on other instances of XmlReader. The following table shows the methods added to the XmlReader by the XPathReader. The following example uses the XPathReader to print the title of every book in my library: using System; using System.Xml; using System.Xml.XPath; using GotDotNet.XPath; public class Test{ static void Main(string[] args) { try{ XPathReader xpr = new XPathReader("books.xml", "//book/title"); while (xpr.ReadUntilMatch()) { Console.WriteLine); } } } An obvious advantage of the XPathReader over conventional XML processing with the XmlTextReader is that the application does not have to keep track of the current node context while processing the XML stream. In the example above, the application code doesn't have to worry about whether the title element whose contents it is displaying and printing is a child of a book element or not by explicitly tracking state because this is already done by the XPath. The other piece of the puzzle is the XPathCollection class. The XPathCollection is the collection of XPath expressions that the XPathReader is supposed to match against. An XPathReader only matches nodes contained in its XPathCollection object. This matching is dynamic, meaning that XPath expressions can be added and removed from the XPathCollection during the parsing process as needed. This allows for performance optimizations where tests aren't made against XPath expressions until they are needed. The XPathCollection is also used for specifying prefix<->namespace bindings used by the XPathReader when matching nodes against XPath expressions. The following code fragment shows how this is accomplished: Finding Loaned Books: XPathReader Solution Now that we've taken a look at the XPathReader, it's time to see how much improved processing XML files can be compared to using the XmlTextReader. The following code sample uses the XML file in the section entitled Finding Loaned Books: XmlTextReader Solution and should produce the following output: Sanjay was loaned XML Bible by Elliotte Rusty Harold Sander was loaned Definitive XML Schema by Priscilla Walmsley XPathReader Sample: using System; using System.IO; using System.Xml; using System.Xml.XPath; using GotDotNet.XPath; public class Test{ static void Main(string[] args) { try{ XmlTextReader xtr = new XmlTextReader("books.xml"); XPathCollection xc = new XPathCollection(); int onloanQuery = xc.Add("/books/book[@on-loan]"); int titleQuery = xc.Add("/books/book[@on-loan]/title"); int authorQuery = xc.Add("/books/book[@on-loan]/author"); XPathReader xpr = new XPathReader(xtr, xc); while (xpr.ReadUntilMatch()) { if(xpr.Match(onloanQuery)){ Console.Write("{0} was loaned ", xpr.GetAttribute("on-loan")); }else if(xpr.Match(titleQuery)){ Console.Write(xpr.ReadString()); }else if(xpr.Match(authorQuery)){ Console.WriteLine(" by {0}",); } } } This output is greatly simplified from the original code block, is almost as efficient memory-wise, and very analogous to processing text streams with regular expressions. It looks like we have reached Tim Bray's ideal for an XML programming model for processing large XML streams. How the XPathReader Works The XPathReader matches XML nodes by creating a collection of XPath expressions compiled into an abstract syntax tree (AST) and then walking this syntax tree while receiving incoming nodes from the underlying XmlReader. By walking through the AST tree, a query tree is generated and pushed onto a stack. The depth of the nodes to be matched by the query is calculated and compared against the Depth property of the XmlReader as nodes are encountered in the XML stream. The code for generating the AST for an XPath expression is obtained from the underlying code for the classes in the System.Xml.Xpath, which is available as part of the source code in the Shared Source Common Language Infrastructure 1.0 Release. Each node in the AST implements the IQuery interface that defines the following three methods: The GetValue method returns the value of the input node relative to the current aspect of the query expression. The MatchNode method tests whether the input node matches the parsed query context, while the ReturnType property specifies which XPath type the query expression evaluates. Future Plans for XPathReader Based on how useful various people at Microsoft have found the XPathReader, including the BizTalk Server that ships with a variation of this implementation, I've decided to create a GotDotNet workspace for the project. There are a few features I'd like to see added, such as integration of some of the functions from the EXSLT.NET project into the XPathReader and support for a wider range of XPath. Developers that would like to work on further development of XPathReader can join the GotDotNet workspace. Conclusion The XPathReader provides a potent way for processing XML streams by harnessing the power of XPath and combining it with the flexibility of the pull-based XML parser model of the XmlReader. The compositional design of System.Xml allows one to layer the XPathReader over other implementations of the XmlReader and vice versa. Using the XPathReader for processing XML streams is almost as fast as using the XmlTextReader, but at the same time is as usable as XPath with the XmlDocument. Truly it is the best of both worlds. Dare Obasanjo is a member of Microsoft's WebData team, which among other things develops the components within the System.Xml and System.Data namespace of the .NET Framework, Microsoft XML Core Services (MSXML), and Microsoft Data Access Components (MDAC). Howard Hao is a Software Design Engineer in Test on the WebData XML team and is the main developer of the XPathReader. Feel free to post any questions or comments about this article on the Extreme XML message board on GotDotNet.
https://msdn.microsoft.com/en-us/library/ms950778
CC-MAIN-2017-43
en
refinedweb
CodePlexProject Hosting for Open Source Software I'm receiving the following error when submitting a comment and using recaptcha: Unable to cast object of type 'System.String' to type 'System.IO.Stream'. Anyone have a solution? I'm not a programmer so specific files and code to tweak would be helpful. I did see some info about this on this page but I couldn't follow,. I was having the same problem as well. I think it might be specific to those who are using Sql Server. Either way, I tried Andrea's fix (same thread your referenced), and it seems to work great. Really only two steps that I can see so far. I did the following ... Step 1:) In the RecaptchaControl.cs (Located in your App_Code folder, under Extensions, and then Recaptcha), look for the UpdateLog() method. From here, simply comment out the following, or just delete it if you want: Stream s = (Stream)BlogService.LoadFromDataStore(BlogEngine.Core.DataStore.ExtensionType.Extension, "RecaptchaLog"); List<RecaptchaLogItem> log = new List<RecaptchaLogItem>(); if (s != null) { System.Xml.Serialization.XmlSerializer serializer = new System.Xml.Serialization.XmlSerializer(typeof(List<RecaptchaLogItem>)); log = (List<RecaptchaLogItem>)serializer.Deserialize(s); s.Close(); } And the replace it with the following: string s = (string)BlogService.LoadFromDataStore(BlogEngine.Core.DataStore.ExtensionType.Extension, "RecaptchaLog"); List<RecaptchaLogItem> log = new List<RecaptchaLogItem>(); if (!string.IsNullOrEmpty(s)) { using (StringReader reader = new StringReader(s)) { System.Xml.Serialization.XmlSerializer serializer = new System.Xml.Serialization.XmlSerializer(typeof(List<RecaptchaLogItem>)); log = (List<RecaptchaLogItem>)serializer.Deserialize(reader); } } Step 2:) In the admin folder, under Pages, find the RecaptchaLogViewer.aspx page. Open it's Code Behind file (RecaptchaLogViewer.aspx.cs). Find the BindGrid() method. From here do the same code swap outlined above. That's it, you should be good to go! Hope this helps ... James Thanks James! I made those changes and am getting the following error. Any idea where to go from here? Line 34: namespace Controls Line 35: { Line 36: public class RecaptchaControl : WebControl, IValidator Line 37: { Line 38: Source File: e:\webroot\wwwroot\blog\App_Code\Extensions\Recaptcha\RecaptchaControlOLD.cs Line: 36 Hmm, not entirely sure. It sounds like something simple though. Perhaps you have two files, one named "RecaptchaControlOLD.cs", and another one called "RecaptchaControl.cs"? It looks like you might have renamed the ".cs" file, but then didn't rename the Class itself. So for example, above you have "RecaptchaControlOLD.cs" as a file name, but the class is still called "RecaptchaControl". You need to make sure you only have one instance of "public class RecaptchaControl{ }". You might just try deleting which ever file doesn't have your changes. Then, make sure the file name (i.e. RecaptchaControl.cs) matches (at least in this case) the Class name (i.e. public class RecaptchaControl). Hope that helps ... That was it! Didn't realize that would make a difference. Thanks James....really appreciate it. Thanks the 2 steps above worked great! I'm new at blogengine 1.6.1 and I think one of the challenging issues is finding answers to problems -- seems like it is hit or miss but the forums have helped out. Thanks again. Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
http://blogengine.codeplex.com/discussions/229898
CC-MAIN-2017-43
en
refinedweb
C++ class wouldn't build in bare bone QML application I have a simple bare bone QML application and I just want to add a c++ class and use it from QML but I get error and it wouldn't even build. Here is the code. @#include <QGuiApplication> #include <QQmlApplicationEngine> #include "mymodel.h" int main(int argc, char *argv[]) { QGuiApplication app(argc, argv); MyModel model; QQmlApplicationEngine engine; engine.load(QUrl(QStringLiteral("qrc:/main.qml"))); // engine.rootContext()->setContextProperty("_MyModel", &model); return app.exec(); }@ And the class MyModel is: @#ifndef MYMODEL_H #define MYMODEL_H #include <QObject> class MyModel : public QObject { Q_OBJECT public: explicit MyModel(QObject *parent = 0); signals: public slots: }; #endif // MYMODEL_H@ the .cpp file of the class @#include "mymodel.h" MyModel::MyModel(QObject *parent) : QObject(parent) { } @ When I build, I get the following error: @ main.obj:-1: error: LNK2019: unresolved external symbol "public: __thiscall MyModel::MyModel(class QObject *)" (??0MyModel@@QAE@PAVQObject@@@Z) referenced in function _main debug\QMLCalc1.exe:-1: error: LNK1120: 1 unresolved externals @ What is possibly wrong? I can't even get to the setContextProperty() call yet. Have included mymodel.cpp into the project? - p3c0 Moderators Also, unrelated to above error, use setContextProperty before loading the QML or else there would be some Reference Errors. I had to 'Run QMake' from build menu which fixed it and project now builds fine without making any other change but I still don't uderstand why do I have to do that! It is a simple new project and I just added a class. Could anyone tell me what does QMake do? Can I add this as a step to build process? Thanks. - dheerendra It generates the makes files again to compile your sources. You have added new class files to your project. So pro file got updated. I have seen some times it does not generate make file on its own after adding new class files. I have seen this more with when I'm using the qmake/qt creator with VC++. In general it is good practice to re-run the qmake once the pro-file is updated.
https://forum.qt.io/topic/49718/c-class-wouldn-t-build-in-bare-bone-qml-application
CC-MAIN-2017-43
en
refinedweb
ive seen this error mentioned a few times on here when i look up the questiotns but most seem to be related to opening and closing files. I copied this traceroute script and ive added an argparse to it. i also wanted to add a print to it as im running it as a cron and logging to a file. i wanted a time stamp in the log file. Thus i added the print at the bottom and now get an error #!/usr/bin/python import socket import struct import sys import argparse import datetime # We want unbuffered stdout so we can provide live feedback for # each TTL. You could also use the "-u" flag to Python. class flushfile(file): def __init__(self, f): self.f = f def write(self, x): self.f.write(x) self.f.flush() sys.stdout = flushfile(sys.stdout) def main(dest): dest_addr = socket.gethostbyname(dest) port = 33434 max_hops = 30 icmp = socket.getprotobyname('icmp') udp = socket.getprotobyname('udp') ttl = 1 while True: recv_socket = socket.socket(socket.AF_INET, socket.SOCK_RAW, icmp) send_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, udp) send_socket.setsockopt(socket.SOL_IP, socket.IP_TTL, ttl) # Build the GNU timeval struct (seconds, microseconds) timeout = struct.pack("ll", 5, 0) # Set the receive timeout so we behave more like regular traceroute recv_socket.setsockopt(socket.SOL_SOCKET, socket.SO_RCVTIMEO, timeout) recv_socket.bind(("", port)) sys.stdout.write(" %d " % ttl) send_socket.sendto("", (dest, port)) curr_addr = None curr_name = None finished = False tries = 3 while not finished and tries > 0: try: _, curr_addr = recv_socket.recvfrom(512) finished = True curr_addr = curr_addr[0] try: curr_name = socket.gethostbyaddr(curr_addr)[0] except socket.error: curr_name = curr_addr except socket.error as (errno, errmsg): tries = tries - 1 sys.stdout.write("* ") send_socket.close() recv_socket.close() if not finished: pass if curr_addr is not None: curr_host = "%s (%s)" % (curr_name, curr_addr) else: curr_host = "" sys.stdout.write("%s\n" % (curr_host)) ttl += 1 if curr_addr == dest_addr or ttl > max_hops: break if __name__ == "__main__": parser = argparse.ArgumentParser(description='Traceroute') parser.add_argument('-H' '--dest', dest='destination', help='IP or Hostname', required='True', default='127.0.0.1') parser_args = parser.parse_args() main(parser_args.destination) print 'Trace completed at:{0}'.format(datetime.datetime.now().strftime('%d-%m-%Y %H:%M:%S')) Traceback (most recent call last): File "traceroute.py", line 83, in <module> print 'Trace completed at:{0}'.format(datetime.datetime.now().strftime('%d-%m-%Y %H:%M:%S')) ValueError: I/O operation on closed file It looks like the issue is that near the top of the script sys.stdout is reassigned to a custom class which only supports the write() operation. Print is probably trying to call a method on sys.stdout but it no longer has that method. An easy fix would be to use sys.stdout.write instead of print at the end of the script. For example: sys.stdout.write( 'Trace completed {0}\n'. format(datetime.datetime.now(). strftime('%d-%m-%Y %H:%M:%S')) ) Another option would be to save the original value of sys.stdout and restore it near the end of the script, before the print: old_sys_stdout = sys.stdout sys.stdout = flushfile(sys.stdout) ... sys.stdout = old_sys_stdout #print as normal
https://codedump.io/share/DDUuGlFAd5wu/1/python---valueerror-io-operation-on-closed-file---traceroute-script--printing
CC-MAIN-2017-43
en
refinedweb
#include <ast.h> #include <ast.h> Inheritance diagram for unionNode: This class represents union types. Note that it does not represent the definition of the unions, but just an instant of the type. For example, in the declaration "union A * x;" the AST built would look like this: declNode "x" --> ptrNode --> unionNode. The actual definition resides in an suespecNode. The NodeType is Union. suespecNode Definition at line 2760 of file ast.h. [inherited] Type qualifiers. This enum holds the possible type qualifiers. The special COMPATIBLE value indicates which type qualifiers are relevant when comparing two types for compatibility: const and volatile. COMPATIBLE Definition at line 1566 of file ast.h. Coord::Unknown Create new union type. The new union type has no tag and doesn't refer to any definition. Use the sueNode::spec() methods to get and set the reference to the definition, which also contains the name. Referenced by clone(). [virtual] Destroy a unionNode. [inline, inherited] Add a new type qualifier to this typeNode. Definition at line 1707 of file ast.h. References typeNode::_type_qualifiers. Referenced by typeNode::add_type_qualifiers_and(). Add a new type qualifier to this typeNode, and return the typeNode. Definition at line 1713 of file ast.h. References typeNode::add_type_qualifiers(). Sets the alignment necessary for this type. Definition at line 1731 of file ast.h. References typeNode::_alloc_align. Indicates the word alignment necessary for this type. Definition at line 1728 of file ast.h. Sets the size necessary for this type. Definition at line 1725 of file ast.h. References typeNode::_alloc_size. Indicates the size of memory necessary for this type. Definition at line 1722 of file ast.h.. [inline, tdefNode. Definition at line 1896 of file ast.h. 2797 of file ast.h. References union. Definition at line 1906 of file ast.h. Call base_type() with the argument true. Call base_type() with the argument false. Definition at line 2629 of file ast.h. References sueNode::_elaborated. Definition at line 2628 of file ast.h. [static, inherited] Type comparison. This static method compares two types, descending into the subtypes and following typedefs as necessary. The two boolean arguments control how strict the algorithm is with respect to type qualifiers. Passing true requires all type qualifiers to be the same. Passing false only requires those type qualifiers that affect compatibility to be the same. (Was TypeEqualQualified in type.c) This routine relies on the qualified_equal_to() methods on each kind of typeNode to perform the appropriate comparison and dispatch to the sub-type when necessary. Referenced by typeNode::operator<=(), and typeNode::operator==(). Reimplemented in primNode. Follow typedefs. Follow the chain of typedefs from this type, returning the underlying (non-typedef) type.. Return this typeNode's subtype, and set it to be empty. Definition at line 1685 of file ast.h. References typeNode::_type. Integral promotions. This method is used during parsing to convert smaller types (char, short, bit-fields and enums) into integers according to the rules in ANSI 6.2.1.1. In addition, our version converts float into double. char short float double Reimplemented in arrayNode, and structNode. Definition at line 1880 of file ast.h. Reimplemented in primNode, and enumNode. Definition at line 1878 of file ast.h. Reimplemented in primNode. Definition at line 1871 of file ast.h. Reimplemented in ptrNode, arrayNode, and funcNode. Definition at line 1881 of file ast.h. Definition at line 1875 of file ast.h. Definition at line 1873 of file ast.h. Definition at line 1872 of file ast.h. Definition at line 1877 of file ast.h. Definition at line 1882 of file ast.h. Reimplemented in primNode, ptrNode, arrayNode, and enumNode. Definition at line 1879 of file ast.h. Definition at line 1874. Strict type inequality. This is just a negation of the operator== Definition at line 1798 of file ast.h. Weaker type comparison. Compare this type to the given type, masking off type qualifiers that don't affect compatibility of types. Definition at line 1788 of file ast.h. References typeNode::equal_to(). Strict type comparison. Compare this type to the given type, requiring all type qualifiers to be the same. Definition at line 1776 of file ast.h. [virtual, inherited] Generate C code. Each subclass overrides this method to define how to produce the output C code. To use this method, pass an output_context and a null parent. Output a type. Implements typeNode.. Virtual type comparison routine. Each typeNode subclass overrides this routine to provide its specific type comparison. This is used by the static equal_to() method to perform general deep type comparison. Reimplemented from typeNode. Remove a type qualifier from this typeNode. Definition at line 1718 of file ast.h. References typeNode::_type_qualifiers. Report node count statistics. The code can be configured to gather statistics about node usage according to type. This method prints the current state of that accounting information to standard out. Definition at line 2632 of file ast.h. References sueNode::_spec. Definition at line 2631(). Set this typeNode's subtype. To set the subtype to be empty, call this method with a value of NULL. Definition at line 1692 of file ast.h. Return this typeNode's subtype. Definition at line 1680 of file ast.h. References typeNode::_type. Referenced by tree_visitor::at_ptr(), and funcNode::returns(). Set this typeNode's type qualifiers. Definition at line 1700 of file ast.h. Return this typeNode's type qualifiers. Definition at line 1696 of file ast.h. Referenced by typeNode::type_qualifiers_name(). Return a string representation of this typeNode's type qualifiers. Definition at line 1703 of file ast.h. References typeNode::type_qualifiers(). Convert type qualifiers to string. This method is used when generating C code to convert the type qualifiers into string form. Unwind typedefs. Usual arithmetic conversions. From ANSI 6.2.1.5: Many binary operators that expect operands of arithmetic type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This method takes the types of the left and right operands and returns a pair of types indicating the conversions of the two operands, respectively. When necessary, these conversion include the integral promotions. exprNode::usual_arithmetic_conversions() Usual unary conversion type. The purpose of this method escapes me. The constNode class seems to use it, but all it does is return itself. No other typeNode overrides it. Definition at line 1892
http://www.cs.utexas.edu/users/c-breeze/html/classunionNode.html
CC-MAIN-2017-43
en
refinedweb
Settings that should be applied to all projects can go in ~/.sbt/1.0/global.sbt (or any file in ~/.sbt/1.0 with a .sbt extension). Plugins that are defined globally in ~/.sbt/1.0/plugins/ are available to these settings. For example, to change the default shellPrompt for your projects: ~/.sbt/1.0/global.sbt shellPrompt := { state => "sbt (%s)> ".format(Project.extract(state).currentProject.id) } You can also configure plugins globally added in ~/ ~/.sbt/1.0/plugins/ directory is a global plugin project. This can be used to provide global commands, plugins, or other code. To add a plugin globally, create ~/.sbt/1.0/plugins/build.sbt containing the dependency definitions. For example: addSbtPlugin("org.example" % "plugin" % "1.0") To change the default shellPrompt for every project using this approach, create a local plugin ~/.sbt/1.0/plugins/ShellPrompt.scala: import sbt._ import Keys._ object ShellPrompt extends Plugin { override def settings = Seq( shellPrompt := { state => "sbt (%s)> ".format(Project.extract(state).currentProject.id) } ) } The ~/.sbt/1.0/plugins/ directory is a full project that is included as an external dependency of every plugin project. In practice, settings and code defined here effectively work as if they were defined in a project’s project/ directory. This means that ~/.sbt/1.0/plugins/ can be used to try out ideas for plugins such as shown in the shellPrompt example.
http://www.scala-sbt.org/1.x-beta/docs/offline/Global-Settings.html
CC-MAIN-2017-43
en
refinedweb
#include <ne_ssl.h> A client certificate can be in one of two states: encrypted or decrypted. The ne_ssl_clicert_encrypted function will return non-zero if the client certificate is in the encrypted state. A client certificate object returned by ne_ssl_clicert_read may be initially in either state, depending on whether the file was encrypted or not. ne_ssl_clicert_decrypt can be used to decrypt a client certificate using the appropriate password. This function must only be called if the object is in the encrypted state; if decryption fails, the certificate state does not change, so decryption can be attempted more than once using different passwords. A client certificate can be given a "friendly name" when it is created; ne_ssl_clicert_name will return this name (or NULL if no friendly name was specified). ne_ssl_clicert_name can be used when the client certificate is in either the encrypted or decrypted state, and will return the same string for the lifetime of the object. The function ne_ssl_clicert_owner returns the certificate part of the client certificate; it must only be called if the client certificate is in the decrypted state. When the client certificate is no longer needed, the ne_ssl_clicert_free function should be used to destroy the object. ne_ssl_clicert_read returns a client certificate object, or NULL if the file could not be read. ne_ssl_clicert_encrypted returns zero if the object is in the decrypted state, or non-zero if it is in the encrypted state. ne_ssl_clicert_name returns a NUL-terminated friendly name string, or NULL. ne_ssl_clicert_owner returns a certificate object. The following code reads a client certificate and decrypts it if necessary, then loads it into an HTTP session. ne_ssl_client_cert *ccert; ccert = ne_ssl_clicert_read("/path/to/client.p12"); if (ccert == NULL) { /* handle error... */ } else if (ne_ssl_clicert_encrypted(ccert)) { char *password = prompt_for_password(); if (ne_ssl_clicert_decrypt(ccert, password)) { /* could not decrypt! handle error... */ } } ne_ssl_set_clicert(sess, ccert); ne_ssl_cert_read Joe Orton <neon@lists.manyfish.co.uk>
http://www.makelinux.net/man/3/N/ne_ssl_clicert_read
CC-MAIN-2015-14
en
refinedweb
Search Type: Posts; User: oneofthelions Search: Search took 0.01 seconds. - 28 Dec 2012 11:46 PM - Replies - 1 - Views - 1,409 Hi, I am trying to insert a TreeMap or GeoMap, part of GoogleMaps. But I am not able to see the graph within ExtJS Layout. Is there any plugin for Google maps? ... - 28 Apr 2011 2:11 AM - Replies - 0 - Views - 1,679 I have a button at which when the user hovers over I display a tooltip. function createTooltp(toolTipId) { tooTip = new Ext.ToolTip({ target: toolTipId, anchor: 'left', ... - 17 Feb 2011 9:47 PM Thanks to the information on Mask, I shall keep in mind. But I called my function as you mentioned. But this doesn't work. This return to the function is not happening? - 13 Feb 2011 11:35 PM It only Mask the text field to allow digits and hyphen. But the regex is not checked to allow only one hyphen and at most two digits after hyphen. Currently the user can enter as many digits and... - 11 Feb 2011 2:56 AM I have a text filed var issueNoField = new Ext.form.TextField({ fieldLabel:'Issue No', width: 120, vtype: 'hyphen' - 10 Feb 2011 4:38 AM Jump to post Thread: Ext Js Numeric and hyphen by oneofthelions - Replies - 0 - Views - 1,953 Hi, I was using TextField of the form. I also want to allow Hyphen. I need help in calling a regular expression which does a check that no alphabets and only numbers allowed and one hyphen as well. ... - 6 Feb 2011 10:03 PM - Replies - 2 - Views - 2,913 I did it with using a button feature. <script type="text/javascript"> function openFAQPopup(){ var <portlet:namespace/>learnUrl = "<%=learnUrl%>"; var <portlet:namespace/>win... - 4 Feb 2011 1:36 AM - Replies - 2 - Views - 2,913 Hi - I am using Liferay Portal for my portlets. Navigating from JSP1 with onclick call a JS function. This would have Ext.Window(). The new JSP2(gadget) is poped up. There is a link, on click I... - 3 Feb 2011 3:38 AM - Replies - 2 - Views - 1,626 I have local var (Array list) which has three key, value pairs. I want the second value to be displayed in the combo box and its key as default. Ext.onReady(function(){ var group =... Results 1 to 9 of 9
http://www.sencha.com/forum/search.php?s=1178498d84f3105f6c7f128b555d7931&searchid=10603484
CC-MAIN-2015-14
en
refinedweb
23 April 2013 21:53 [Source: ICIS news] HOUSTON (ICIS)--The pace of coal gasification projects in ?xml:namespace> While Air Products supplies oxygen for gasification applications such as coal-to-liquids (CTL) and coal-to-chemicals (CTC). The oxygen is used to produce diesel fuel for transportation or synthetic natural gas, which can be further refined into products such as methanol, acetic acid and olefins, according to the company. McGlade sees no slowdown in the number of gasification projects in “Our assessment of the market is that there [are] multiple opportunities of the scale and quantity of projects that you've seen if you looked over a trend line of the last several
http://www.icis.com/Articles/2013/04/23/9661903/coal-gasification-projects-should-continue-in-china-executive.html
CC-MAIN-2015-14
en
refinedweb
Java Thread Example – Extending Thread Class and Implementing Runnable Interface Processes and Threads are two basic units of execution. Java concurrency programming is more concerned with threads. Process A process is a self contained execution environment and it can be seen as a program or application. However a program itself contains multiple processes inside it. Java runtime environment runs as a single process which contains different classes and programs as processes. Thread Thread can be called lightweight process. Thread requires less resources to create and exists in the process, thread shares the process resources. Java Multithreading Every. Benefits of Threads - Threads are lightweight compared to processes, it takes less time and resource to create a thread. - Threads share their parent process data and code - Context switching between threads is usually less expensive than between processes. - Thread intercommunication is relatively easy than process communication. Java provides two ways to create a thread programmatically. - Implementing the java.lang.Runnable interface. - Extending the java.lang.Thread class. Java Thread Example by implementing Runnable interface To make a class runnable, we can implement java.lang.Runnable interface and provide implementation in public void run() method. To use this class as Thread, we need to create a Thread object by passing object of this runnable class and then call start() method to execute the run() method in a separate thread. Here is a java class example implementing Runnable interface. package com.journaldev.threads; public class HeavyWorkRunnable implements Runnable { @Override public void run() { System.out.println("Doing heavy processing - START "+Thread.currentThread().getName()); try { Thread.sleep(1000); //Get database connection, delete unused data from DB doDBProcessing(); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("Doing heavy processing - END "+Thread.currentThread().getName()); } private void doDBProcessing() throws InterruptedException { Thread.sleep(5000); } } Java Thread Example by extending Thread class We can extend java.lang.Thread class to create our own thread class and override run() method. Then we can create it’s object and call start() method to execute our custom thread class run method. Here is a simple example showing how to extend Thread class. package com.journaldev.threads; public class MyThread extends Thread { public MyThread(String name) { super(name); } @Override public void run() { System.out.println("MyThread - START "+Thread.currentThread().getName()); try { Thread.sleep(1000); //Get database connection, delete unused data from DB doDBProcessing(); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("MyThread - END "+Thread.currentThread().getName()); } private void doDBProcessing() throws InterruptedException { Thread.sleep(5000); } } Here is a test program showing how to create a thread and execute it. package com.journaldev.threads; public class ThreadRunExample { public static void main(String[] args){ Thread t1 = new Thread(new HeavyWorkRunnable(), "t1"); Thread t2 = new Thread(new HeavyWorkRunnable(), "t2"); System.out.println("Starting Runnable threads"); t1.start(); t2.start(); System.out.println("Runnable Threads has been started"); Thread t3 = new MyThread("t3"); Thread t4 = new MyThread("t4"); System.out.println("Starting MyThreads"); t3.start(); t4.start(); System.out.println("MyThreads has been started"); } } Output of the above java program is: Starting Runnable threads Runnable Threads has been started Doing heavy processing - START t1 Doing heavy processing - START t2 Starting MyThreads MyThread - START Thread-0 MyThreads has been started MyThread - START Thread-1 Doing heavy processing - END t2 MyThread - END Thread-1 MyThread - END Thread-0 Doing heavy processing - END t1 Once we start any thread, it’s execution depends on the OS implementation of time slicing and we can’t control their execution. However we can set threads priority but even then it doesn’t guarantee that higher priority thread will be executed first. Run the above program multiple times and you will see that there is no pattern of threads start and end. Runnable vs Thread If your class provides more functionality rather than just running as Thread, you should implement Runnable interface to provide a way to run it as Thread. If your class only goal is to run as Thread, you can extend Thread class. Implementing Runnable is preferred because java supports implementing multiple interfaces. If you extend Thread class, you can’t extend any other classes. Tip: As you have noticed that thread doesn’t return any value but what if we want our thread to do some processing and then return the result to our client program, check our Java Callable Future. Update: From Java 8 onwards, Runnable is a functional interface and we can use lambda expressions to provide it’s implementation rather than using anonymous class. For more details, check out Java 8 Lambda Expressions Tutorial.
http://www.journaldev.com/1016/java-thread-example-extending-thread-class-and-implementing-runnable-interface
CC-MAIN-2015-14
en
refinedweb
Created on 2010-02-23.09:38:59 by yanne, last changed 2010-04-02.09:33:02 by pekka.klarck. jth@l228:~$ cat TestObj.java public class TestObj { public String toString() { return "Circle is 360\u00B0"; } } jth@l228:~$ javac TestObj.java jth@l228:~$ jython22 -c "import TestObj as T; print unicode(T())" Circle is 360° jth@l228:~$ jython25 -c "import TestObj as T; print unicode(T())" Traceback (most recent call last): File "<string>", line 1, in <module> UnicodeDecodeError: 'ascii' codec can't decode byte 0xb0 in position 13: ordinal not in range(128) I can get unicode working by first calling toString() of the object: jth@l228:~$ jython25 -c "import TestObj as T; print unicode(T().toString())" Circle is 360° Should unicode() call toString() automatically also with 2.5? This should be a simple fix (as long as it doesn't break anything). PyJavaType just needs to override __unicode__ to do the right thing fixed in r6996, thanks Thanks for the fix Philip.
http://bugs.jython.org/issue1563
CC-MAIN-2015-14
en
refinedweb
Difference between pages "Funtoo:Package" and "Category:Ebuilds" (Difference between pages) Latest revision as of 00:42, June 24, 2014 (view source)Drobbins (Talk | contribs) Revision as of 00:43, June 24, 2014 (view source) Drobbins (Talk | contribs) (Blanked the page) Line 1: Line 1: −The Package namespace uses the default form [[Has default form::Ebuild]].+ Revision as of 00:43, June 24, 2014 Retrieved from ""
http://www.funtoo.org/index.php?title=Dell_PowerEdge_11G_Servers&diff=4425&oldid=4424
CC-MAIN-2015-14
en
refinedweb
15 March 2011 09:46 [Source: ICIS news] TOKYO (ICIS)--Cosmo Oil's 220,000 bbl/day refinery in ?xml:namespace> “It has not been extinguished though (the fire) has become smaller compared to when it first started. We don’t have an idea when it will be put out,” the source said in Japanese. Liquefied petroleum gas (LPG) tanks at the refinery exploded an hour after the 9.0-magnitude quake struck northeastern “We are focusing on extinguishing the fire. After it’s put out, we’ll examine (the damages),” the source said. All production and deliveries at and from the refinery were still suspended, he said. But Cosmo Oil’s refineries in Additional reporting by Nurluqman Suratman
http://www.icis.com/Articles/2011/03/15/9443858/cosmo-oils-220000-bblday-chiba-refinery-still-in-flames.html
CC-MAIN-2015-14
en
refinedweb
number of instances807582 May 16, 2002 10:33 AM How can I get the number of instances (or the list of instances) of a class ? This content has been marked as final. Show 17 replies 1. Re: number of instances807582 May 16, 2002 11:03 AM (in response to 807582)Hi, try this, Class c[] = t.getClass().getDeclaredClasses(); where t is instance of test class... Thanks ramesh - 3. Re: number of instances807582 May 16, 2002 11:17 AM (in response to 807582)Class from API : public Class[] getDeclaredClasses() Returns an array of Class objects reflecting all the classes and interfaces declared as members of the class represented by this Class object. This includes public, protected, default (package) access, and private classes and interfaces declared by the class, but excludes inherited classes and interfaces. This method returns an array of length 0 if the class declares no classes or interfaces as members, or if this Class object represents a primitive type, an array class, or void. 4. Re: number of instances807582 May 16, 2002 4:35 PM (in response to 807582)Hello Olivier, the short answer is: you can't. Other options are: - Get your hands on a profiling tool. - DO NOT DO THIS AT HOME: create your own version of java.lang.Object, and add some logging routines to its constructor. Beware of legal issues with this one! - Implement a JPDA solution. Good luck, Manuel Amago. 5. Re: number of instancesDrClap May 16, 2002 11:22 PM (in response to 807582)If this is a class you wrote, then include a static ArrayList variable, and in every constructor add "this" to that variable. You will then have a list of all instances of the class. This has the unfortunate side effect that those instances can never be garbage-collected, so if that is a problem you will want to use weak, soft, or phantom references -- I don't know which would be best. 6. Re: number of instances807582 May 19, 2002 3:14 PM (in response to 807582)normaly, we add classes to a list or a vector, if we want to have references to multiple instantiated classes later. if you want to check how many classes of type X an application instantiate so just use the debugger interface (com.sun.jdi - see class BootStrap to begin) if you want to have a list of all classes in your applikation, just use a self defined classloader. -michael 7. Re: number of instances807582 May 24, 2002 12:18 AM (in response to 807582)If you just want to know how much instances created for a user defined class then i think this would be a solution Just declare a static variable in your class and in the constructor increment that variable by one.In this way you can count how much instances created for a class. try this ravi 8. Re: number of instances807582 Sep 19, 2002 5:11 AM (in response to 807582)A static counter won't work 'cause it'll only tell you how many object have been created; not how many are currently being referenced. Decerasing the counter on finalized won't be completely accurate either. 9. Re: number of instances807582 Jan 12, 2004 9:38 AM (in response to 807582)Hi Marnagi, It will be very much helpful if you could provide some additional details of implementing the JPDA solution. It looks to be a much better solution. Thanks, CyberTeT 10. Re: number of instances807582 Jan 15, 2004 2:29 PM (in response to 807582)Hi Oliver, how you can see, don't is a simple task, I don't know a fast solution, but: The main problem, is that objects don't have destroy() method, then you don't know when instance a object is destroyed, neither know when will be garbage collected. a) If you store all new object instance in a collection, array... (hold instance reference): 1) All instances never be garbage-collected. 2) Only can add instances, never decrease instance number (don't know when instance destroy). b) If use a instance counter 1) Same problem, only can add to counter, althoug garbage collector will destroy not referenced instances. Are you sure what don't have other way for solve your problem ?, this is hard work ! I may be will try find other solution ! 11. Re: number of instances807582 Jan 20, 2004 8:54 AM (in response to 807582) public class YourClass { private final static Object mutex = new Object(); public final static int numberOfInstances = 0; public YourClass() { synchronize(mutex) { numberOfInstances++; } } protected void finalize() throws Throwable { synchronize(mutex) { numberOfInstances--; } } } 12. Re: number of instances807582 Jan 20, 2004 9:36 AM (in response to 807582) elquebuscamascosas, your idea is good, but who and when inco will call the finalize method ??? protected void finalize() throws Throwable { synchronize(mutex) { numberOfInstances--; } 13. Re: number of instances807582 Jan 20, 2004 10:44 AM (in response to 807582)See Javadoc: java.lang.Object.finalize() /** * Called by the garbage collector on an object when garbage collection * determines that there are no more references to the object. * A subclass overrides the <code>finalize</code> method to dispose of * system resources or to perform other cleanup. * <p> * The general contract of <tt>finalize</tt> is that it is invoked * if and when the Java<font size="-2"><sup>TM</sup></font> <tt>finalize</tt> method may take any action, including * making this object available again to other threads; the usual purpose * of <tt>finalize</tt>, however, is to perform cleanup actions before * the object is irrevocably discarded. For example, the finalize method * for an object that represents an input/output connection might perform * explicit I/O transactions to break the connection before the object is * permanently discarded. * <p> * The <tt>finalize</tt> method of class <tt>Object</tt> performs no * special action; it simply returns normally. Subclasses of * <tt>Object</tt> may override this definition. * <p> * The Java programming language does not guarantee which thread will * invoke the <tt>finalize</tt>. * <p> * After the <tt>finalize</tt>. * <p> * The <tt>finalize</tt> method is never invoked more than once by a Java * virtual machine for any given object. * <p> * Any exception thrown by the <code>finalize</code> method causes * the finalization of this object to be halted, but is otherwise * ignored. * * @throws Throwable the <code>Exception</code> raised by this method */ protected void finalize() throws Throwable { } 14. Re: number of instances807582 Jan 20, 2004 11:11 AM (in response to 807582)Ok, you have al reason of world !... your code is very good !, I don't remember finalize() method of java.lang.Object Class.
https://community.oracle.com/thread/1685563?tstart=30
CC-MAIN-2015-14
en
refinedweb
NAME getsockname - get socket name SYNOPSIS #include <sys/socket.h> int getsockname(int sockfd, struct sockaddr *addr, socklen_t *addrlen); DESCRIPTION gets.21 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.ubuntu.com/manpages/karmic/man2/getsockname.2.html
CC-MAIN-2015-14
en
refinedweb
compy component js boilerplate js project compy - lightweight app builder/compiller ===== Compy is a lightweight approach for developing web apps (framework/lib agnostic). Based on TJ's component package manager it allows you to install components and use them in your code right away. Compy makes your development fun because: - you can install and use components without any configurations - you can use local require - you can use coffeescript, sass, jade and other plugins - you can run karma tests - you will have livereload with simple static server watch screencast for live intro. ##install $ npm install compy -g ##plugins compy can use component's plugins to extend it's functionality. For example if you want to use coffee in your project, you need to npm install component-coffee in your project's folder. compy was tested with following plugins: - rschmukler/component-stylus-plugin — precompile stylus - segmentio/component-jade — precompile jade templates - anthonyshort/component-coffee - require CoffeeScript files as scripts - anthonyshort/component-sass - compile Sass files using node-sass - kewah/component-builder-handlebars - precompile Handlebars templates - ericgj/component-hogan - Mustache transpiler for component (using Hogan) - segmentio/component-sass — Sass transpiler for component - segmentio/component-json — Require JSON files as Javascript. - queckezz/component-roole — Compile Roole files - bscomp/component-lesser - LESS transpiler for compy - segmentio/component-markdown - Compile Markdown templates and make them available as Javascript strings. cli comands Usage: compy <command> [options]Options:-h, --help output usage information-V, --version output the version number-d, --dir <path> project source path. Must contain package.json-o, --output <path> output directory for build/compile-v, --verbose verbosity-f, --force force installation of packages-s, --staticServer <path> custom server that serves static with compy middleware--dev install dev dependenciesCommands:install [name ...] install dependencies or componentcompile compile app (in dist folder by default)build build the app (compile and minify sources)server [watch] run static server. If "watch" option enabled - watch changes, recompile and push livereloadtest run karma testswatch watch and rebuild assets on changeplate [appname] generate boilerplate package.jsongraph show all dependencies/versions installed config The configuration for compy sits in package.json inside compy namespace. main is an entry point of your app and the only required property. writing tests To run karma based tests with compy. The package.json configuration should be adjusted and all required karma plugins should be installed. For example to run mocha tests with sinon and chai inside phantomjs following configurations should be set: And plugins should be installed locally. $ npm install karma-mocha karma-sinon-chai karma-phantomjs-launcher now with compy test all *.spec.js files will be runned as a mocha tests. license MIT
https://www.npmjs.com/package/compy
CC-MAIN-2015-14
en
refinedweb
I'm working my way through the C++ tutorials and I'm stuck on character arrays. Basically, I don't understand how a user can input a sentence into the program and then how to store that sentence in a variable, ready to be called in a cout<< line. Here is a piece of code I'm trying to work on (to no avail). The problem here isn't identical but an answer to the previous question will probably help answer this one. Why doesn'tWhy doesn'tCode: #include <iostream> //For cout using namespace std; int main() { int monstype; char monsdescription[35]; cout <<"please choose the monster you wish to fight:\n1. Big hairy ugly thing\n2. Small and easy on the eyes\n3. Medium size, much the same as you!\n"; cin>> monstype; if (monstype == 1){ monsdescription = "Big hairy ugly thing"; } else if (monstype == 2){ monsdescription = "Small and easy on the eyes"; } else{ monsdescription = "Medium size, much the same as you!"; } cout <<"You have chosen "<< monstype <<", "<< monsdescription <<""; cin.get(); } work? The error line says:work? The error line says:Code: if (monstype == 1){ monsdescription = "Big hairy ugly thing"; } incompatible types in assignment of 'const char [27]' to 'char [35]' or with an error line :with an error line :Code: else{ monsdescription = "Medium size, much the same as you!"; } invalid array assignment
http://cboard.cprogramming.com/cplusplus-programming/139255-character-arrays-basics-stuck-applying-tutorial-printable-thread.html
CC-MAIN-2015-14
en
refinedweb
Cook Scheduler Client API for Python Project description The Cook Scheduler Python Client API This package defines a client API for Cook Scheduler, allowing Python applications to easily integrate with Cook. Quickstart The code below shows how to use the client API to connect to a Cook cluster listening on localhost:12321, submit a job to the cluster, and query its information. from cookclient import JobClient client = JobClient('localhost:12321') uuid = client.submit(command='ls') job = client.query(uuid) print(str(job)) Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/cook-client-api/0.3.4/
CC-MAIN-2021-49
en
refinedweb
Definition at line 28 of file RColumnReaderBase.hxx. #include <ROOT/RDF/RColumnReaderBase.hxx> Return the column value for the given entry. Called at most once per entry. Definition at line 36 of file RColumnReaderBase.hxx. Implemented in ROOT::Internal::RDF::RDefineReader, ROOT::Experimental::Internal::RNTupleColumnReader, ROOT::Internal::RDF::RDSColumnReader< T >, ROOT::Internal::RDF::RTreeColumnReader< T >, ROOT::Internal::RDF::RTreeColumnReader< RVec< T > >, and ROOT::Internal::RDF::RTreeColumnReader< RVec< bool > >.
https://root.cern/doc/master/classROOT_1_1Detail_1_1RDF_1_1RColumnReaderBase.html
CC-MAIN-2021-49
en
refinedweb
This article shows how to publish an OData feed of LinkedIn Ads data by creating a WCF Service Application. The CData ADO.NET Provider for LinkedIn Ads enables you to use the Windows Communication Foundation (WCF) framework to rapidly develop service-oriented applications that provide LinkedIn Ads data to OData consumers. This article shows how to create an entity data model to provide the underlying connectivity to the LinkedIn Ads LinkedIn Ads.LinkedInAds" type="System.Data.CData.LinkedInAds.LinkedInAdsProviderServices, System.Data.CData.LinkedInAds.Entities.EF6" /> </providers> </entityFramework> </configuration> - Add a reference to System.Data.CData.LinkedInAds LinkedIn Ads Data Source and enter the necessary credentials. A typical connection string is below: OAuthClientId=MyOAuthClientId;OAuthClientSecret=MyOAuthClientSecret;CallbackURL=;InitiateOAuth=GETANDREFRESH LinkedIn Ads uses the OAuth authentication standard. OAuth requires the authenticating user to interact with LinkedIn using the browser. See the OAuth section in the Help documentation for a guide. - Select LinkedIn Ads LinkedInAdsService{ public class LinkedInAdsDataService : DataService<LinkedInAds.
https://www.cdata.com/kb/tech/linkedinads-ado-odserv.rst
CC-MAIN-2021-49
en
refinedweb
Cross level data transmission Use of context Step 1: create a context object const MyContext = React.createContext(); export default MyContext; The second step is to use the provider to transfer data export default const Index = (props) => { const [data] = React.useState({}); return <MyContext.Provider value={data}> {prop.children} </MyContext.Provider> }; Note that the data here should be a status of the Index component, so as to avoid that every time the Index is refreshed, the data is regenerated, resulting in the sub components also diff, explicate. - Child component consumption value There are three ways to consume the value of context; - contextType class MyClass extends React.Component { componentDidMount() { let value = this.context; /* After the component is mounted, use the value of the MyContext component to perform some side effects */ } componentDidUpdate() { let value = this.context; /* ... */ } componentWillUnmount() { let value = this.context; /* ... */ } render() { let value = this.context; /* Render based on the value of the MyContext component */ } } MyClass.contextType = MyContext; Note: you can only subscribe to a single context through this API. If you want to subscribe to more than one, please use Cunsumer. If you are using experimental public class fields syntax , you can use static This class property to initialize your contextType. class MyClass extends React.Component { static contextType = MyContext; render() { let value = this.context; /* Render based on this value */ } } Cunsumer <MyContext.Consumer> {value => /* Rendering based on context value*/} </MyContext.Consumer> This method requires a function as a child . This function receives the current context value and returns a React node. Passed to function value The value is equivalent to the value provided by the Provider closest to the context above the component tree value Value. If there is no corresponding Provider, value Parameter is equivalent to passing to createContext() of defaultValue. - useContext It is easy to use, but it can only be used in hook syntax: function ShowAn(){ //Call useContext and pass in the context object obtained from MyContext. const value = useContext(MyContext); return( <div> the answer is {value} </div> ) More detailed contents can be referred to react official documents Publish subscribe mode This is not a cliche. You can understand everything. To put it simply, press the data into the queue in the subscription phase, and traverse the queue to get the desired results for operation when publishing; What I want to say here is that one detail we should pay attention to when writing the code of publish subscribe mode is that there is also unsubscribing in pairs with subscriptions. Only when the unsubscribing program is written can it be optimized to a certain extent. We can do this in the function of the push phase in the function of the re subscription phase, const subscribe = (data) => { Array.push(data); return () => { Array = Array.filter(item => item !== data); } } // Then write this in the place where the subscription function is called const unsubscribe = subscribe(data); When we want to unsubscribe, is it convenient to call unsubscribe() directly! Take a closer look to see if it is very similar to the use of useEffect in hook syntax. One more thing Today, I saw an interesting blog about asynchronous interrupt. At the beginning, I talked about ajax, axios and fetch timeout of jq. The previous ones are relatively simple. ajax has. abort() function. axios already has a time field, and fetch can also be interrupted. Just need an AbortController object to obtain signal and pass this signal object as a fetch option. async function fetchWithTimeout(timeout, resoure, init = {}) { const ac = new AbortController(); const signal = ac.signal; const timer = setTimeout(() => { console.log("It's timeout"); return ac.abort(); }, timeout); try { return await fetch(resoure, { ...init, signal }); } finally { clearTimeout(timer); } } Here's the point (click on the blackboard to draw knowledge points). Everything can timeout. Axios and fetch both provide a way to interrupt asynchronous operation, but what should we do for an ordinary Promise without abort capability? A very ingenious solution is given in this paper; Promise doesn't have the abort capability, but we can use the. race method to interrupt it. race means racing, so is the behavior of Promise.race() well understood? function waitWithTimeout(promise, timeout, timeoutMessage = "timeout") { let timer; const timeoutPromise = new Promise((_, reject) => { timer = setTimeout(() => reject(timeoutMessage), timeout); }); return Promise.race([timeoutPromise, promise]) .finally(() => clearTimeout(timer)); // Don't forget to clear the timer }
https://programmer.help/blogs/knowledge-points-of-react-article-lesson1.html
CC-MAIN-2021-49
en
refinedweb
Writing CSS is easy. Making it scalable and maintainable is not. How many times: - did you update the CSS of your application and you broke something else? - did you wonder where the CSS you have to change is coming from? - did you update some HTML and it broke your design? - did you write some CSS and wonder why it wasn’t applied to then discover it was overridden by some other CSS? This is when you decide there is a better way and come across some CSS methodologies, which seems like a good solution to all those headaches. You heard of SMACSS, BEM, OOCSS, but at the end of the day, it’s all about what fits your projects. Also, you may think you’re perfectly fine without them and you may be right. But you might be missing out on big improvements too. You should at least have an idea on what’s out there and why you or you’re not using it. So what should you use? First, what is the issue you are trying to solve? Why are you looking into this? - Prevent your CSS to break each time you touch something? - Finding CSS you want to change easily? - Working better as a team? - Write less to do more? It’s all about maintainability and reusability. Now how do you get there? When eating an elephant take one bite at a time. - Creighton Abrams Same apply here. Applying a modular approach to your CSS will save you hours. Web components are the future of the web and starting now will make your life easier. Each piece of your app/design helps you build the final results and you should be able to move or replace it without breaking anything else. This is the goal most CSS methodologies aim for. How to choose what fits your needs? SMACSS: CSS organisation and architecture The theory SMACSS stands for Scalable and Modular Architecture CSS. According to it’s author Jonathan Snook, this is a style guide rather than a rigid spec or framework. SMACSS is about organising your CSS in 5 rules: Base This include selector rules. No classes or id here. This is to reset browser rules and set a base style for elements which are going to be consistent and reused. You are defining here the default style for your elements. This can include html, body, h1, h2, h3, h4, h5, h6, img, a… Layout This is where the style used to lay out your pages will sit. It should be separated to your module style for flexibility. You want to be able to use your layout style to build your pages in the most flexible way possible. A module or components should be added to any place in your site independent from the layout. Using classes instead of id allow you to reuse those layout style anywhere and lower your CSS specificity, making it easier to control. Modules A module is a part or a component of your page. Your menu, dialog box, download list or any widget you have on your page. It depends of your design. A module is independent from your layout so it can live anywhere in you app. You should be able to copy/paste the html and move somewhere else, and it will look and behave the same. A module should be encapsulated in one file and easily accessible. It will be easy to find and you’ll be in control of what you want to update as it won’t be depending on any other style. That’s your single source of truth for a feature. States A state will be a style which modifies or overrides other rules. A great example is accordion when collapsing or expanding elements. Using a is-collapsed class will make sure your element is collapsed. This is a good place to use !important (and probably the only one) as you want this state to be applied no matter what. Also, it can relate to modified state with javascript. Good practise is to prefix or add a namespace to those states classes like is- or has-: is-hidden, is-displayed, is-collapsed, has-children, etc Theme Idea is to have a file called theme.css where you can define all the theme rules. // box.scss .box { border: 1px solid; } // theme.scss .box { border-color: red; } In practise The first time I read about SMACSS, I found it great but a bit hard to know how to implement it. My approach was mainly through my file architecture. I tend to have the following directories and file structure: / **global** \_base.scss // Base rules \_settings.scss \_states.scss // generic state rules … / **layout** \_grid.scss … / **modules** \_card.scss \_menu.scss … As you can see, I have all the layers except the theme one, as I never really needed it. Perks : ✓ Modular ✓ Flexible ✓ File organisation ✓ States are great reusable classes. Cons : - Can be hard to put in practise. - No class name convention, modules and submodules can be hard to identify BEM: naming convention The theory BEM stands for Block Element Modifier. It’s a naming convention and it works really well with modular CSS. A block is a module or a component, however you prefer to call it. This is a piece of your design you encapsulate to reuse it anywhere on your site. **// Block** .button {} **// Element** .button\_\_icon {} .button\_\_text {} **// Modifier** .button — red {} .button-blue {} In practise BEM is great for flexibility. I personally use the SMACSS appellation: module. Therefore, a block is a module. I have a different file per module and BEM allow me to encapsulate those modules perfectly. When reading the HTML, I know exactly what is part of the module or not and I don’t need to nest my CSS, which means less specificity and then less headaches. Full example // Without BEM .nav ul .item { color: black; float: left; } // With BEM .nav\_\_item { color: black; float: left; } Perks : ✓ Modular ✓ Flexible ✓ Easy to maintain ✓ Write less CSS ✓ Be in control of your CSS Cons : - Long HTML classes - Verbose OOCSS: Object Oriented CSS In theory OOCSS stands for Object Oriented CSS, and the purpose is to encourage code reusability and ease maintainability. To do so, OOCSS is based on two principles: Separate structure from skin All your elements have some kind of branding, right? Colours, background, borders. Also, they do have a structure which you may sometimes repeat between those elements. A good example can be a button again. Before you may have had: .button { display: inline-box; width: 200px; height: 50px; color: white; background: black; } .box .button { color: black; background: red; } Now with OOCSS: .button { width: 200px; height: 50px; } .button-default { color: white; background: black; } .button-red { color: black; background: red; } You can see the point. Now you can reuse your CSS for every button and make it less specific. Separation of containers and content This is to separate your style from it’s content. You can have a side menu and a box in you main content that applies style to some paragraph. .sidemenu p { } .box p { } This ties your HTML and CSS together forever and apply the same style to all p tag in those elements. This may not be what you want. Also, if you need the same style in your article content for example, you will repeat it again. Instead, create a new style for this paragraph which you can reuse at any time. Like BEM, OOCSS avoid specificity through nesting and using ids. It also allows you to apply separation of concerns really easily. In practise I didn’t use OOCSS much until recently. I suppose I kind of did by separating colour modules using BEM modifier but I never did it at a large scale. I found the concept really interesting and start seeing a use in my current company. We have a product which have 3 different skins available on 3 different url. We have the same modules used on most of the 3 websites but with a different branding. We created objects we reuse on those website and we have modules defining the branding for each websites. Perks : ✓ High reusability ✓ Ease of maintainability Cons : - Can be confusing for new developers. What is an object and what is not? This will force you to document things and introduce new dev to your codebase (which is good to do). ITCSS: CSS organisation to avoid high specificity The theory ITCSS stands for Inverted Triangle CSS. This is a way to organise your CSS according the specificity of your CSS rules. - Settings : variables, mixins, anything you set to use as setting or functions - Tools : external includes - Generic: reset/normalize rules - Elements : elements base style - Objects : Object and structure style - Components : Modules styles - Trumps : utilities, grid, states This goes from the less specific rules to the most specific one. This will allow you to take back control of your style. I am sure we all tried to style an element and adding a rule to find out it doesn’t work because of more specific rule was defined above in your stylesheet. Controlling your style specificity will save you hours of debugging and hating your colleagues. In practise /\* === SETTINGS === \*/ @import “global/site-settings”; @import “global/mixins”; @import “global/typography”; /\* === TOOLS === \*/ @import “libs/material-icons”; /\* === GENERIC === \*/ @import “global/normalize”; /\* === ELEMENTS === \*/ @import “global/base”; /\* == Modules === \*/ // Module example @import “modules/icon”; @import “modules/header”; @import “modules/footer”; @import “modules/form”; @import “modules/buttons”; @import “modules/menu”; @import “modules/form”; @import “modules/logo”; /\* == Trumps === \*/ @import “layout/grid”; @import “global/states”; @import “global/utilities”; Perks : ✓ Lower CSS specificity ✓ Clear organisation Cons : - It can be hard to decide what goes into which categories for juniors but this is somethings that can be really clear to more intermediate and senior in my opinion, who should help others. In a nutshell We are all aware of it, CSS can turn pretty bad because of it’s own nature. At the end of the day, it’s about what you need and what makes sense to you. But if you work a large scalable project, what you want is to: - Control your specificity - Be modular and create reusable style - Ease maintenance - Work better as a team - Write less to achieve more All the above can help you do that in their own way. I personally think they work best mixed to fit my needs depending on what makes sense for the project I work on. I keep an open approach rather than “this is what you need to use cause it’s trendy”. Whats works best for you? What have you been using on your projects? Any other methodologies? Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/digitaledawn/css-lost-in-methodologies-mng
CC-MAIN-2021-49
en
refinedweb
Porting QML Applications to Qt 5 Example The new version of Qt Quick in Qt 5 brings in some changes to the way QML applications are developed. For the complete list of changes that affect existing QML applications, refer to Porting QML Applications to Qt 5. This topic will walk through the porting process to make the flickr Qt 4 QML demo work on Qt 5. If you have the SDK based on Qt 4.8 installed, you can find this demo application under <install_dir_root>/Examples/4.x/declarative/demos/. Follow these step-by-step instructions to port the flickr Qt 4 QML application work to Qt 5: - Open the flickr project using Qt Creator. - Edit all the .qmlfiles and replace the import QtQuick 1.0statements with import QtQuick 2.3. - Add the additional import QtQuick.XmlListModel 2.0statement to qml/common/RssModel.qml. Note: XmlListModel is part of a submodule under QtQuickand it must be imported explicitly in order to use it. - Make the following changes to qmlapplicationviewer/qmlapplicationviewer.h: - Replace the #include <QtDeclarative/QDeclarativeView>with #include <QQuickView>. - Replace QDeclarativeViewwith QQuickViewin the class declaration for QmlApplicationViewer. - Replace the parameter for QmlApplicationViewerconstructor from QWidgetto QWindow. - Make the following changes to qmlapplicationviewer/qmlapplicationviewer.cpp: - Replace all the QtCoreand QtDeclarativeinclude statements with these: #include <QCoreApplication> #include <QDir> #include <QFileInfo> #include <QQmlComponent> #include <QQmlEngine> #include <QQmlContext> #include <QDebug> - Replace all instances of QWidgetwith QWindow, and QDeclarativeViewwith QQuickView. - Remove the code between #if defined(Q_OS_SYMBIAN)and #endifmacros as Symbian platform is not supported in Qt 5. - Remove the code between #if QT_VERSION < 0x040702and #else, and #endif // QT_VERSION < 0x040702macros towards the end. - Save changes to the project and run the application. Once you see the application running, check whether it behaves as expected. Here is a snapshot of the application running on Ubuntu v12.04: Related Topics Porting QML Applications to Qt.
https://doc.qt.io/archives/qt-5.7/portingqmlapp.html
CC-MAIN-2021-49
en
refinedweb
Division algorithm in a polynomial ring with variable coefficients I am working on an algorithm to divide a polynomial f by a list of polynomials [g1, g2, ..., gm]. The following is my algorithm: def div(f,g): # Division algorithm on Page 11 of Using AG by Cox; # f is the dividend; # g is a list of ordered divisors; # The output consists of a list of coefficients for g and the remainder; # p is the intermediate dividend; n = len(g) p, r, q = f, 0, [0 for x in range(0,n)] while p != 0: i, divisionoccured = 0, False print(p,r,q); while i < n and divisionoccured == False: if g[i].lt().divides(p.lt()): q[i] = q[i] + p.lt()//g[i].lt() p = p - (p.lt()//g[i].lt())*g[i] divisionoccured = True else: i = i + 1 if divisionoccured == False: r = r + p.lt() p = p - p.lt() return q, r Here is an example of implementing the algorithm: K.<a,b> = FractionField(PolynomialRing(QQ,'a, b')) P.<x,y,z> = PolynomialRing(K,order='lex') f=a*x^2*y^3+x*y+2*b g1=a^2*x+2 g2=x*y-b div(f,[g1,g2]) Here is the result: (a*x^2*y^3 + x*y + 2*b, 0, [0, 0]) (((-2)/a)*x*y^3 + x*y + 2*b, 0, [1/a*x*y^3, 0]) (x*y + 4/a^3*y^3 + 2*b, 0, [1/a*x*y^3 + ((-2)/a^3)*y^3, 0]) (4/a^3*y^3 + ((-2)/a^2)*y + 2*b, 0, [1/a*x*y^3 + ((-2)/a^3)*y^3 + 1/a^2*y, 0]) (((-2)/a^2)*y + 2*b, 4/a^3*y^3, [1/a*x*y^3 + ((-2)/a^3)*y^3 + 1/a^2*y, 0]) (2*b, 4/a^3*y^3 + ((-2)/a^2)*y, [1/a*x*y^3 + ((-2)/a^3)*y^3 + 1/a^2*y, 0]) Error in lines 6-6 Traceback (most recent call last): and some other error messages. We can see that it worked well until the leading term is 2b. it does not recognize the 2b as a term. I tried: (x).lt().divides(1) It gives the answer False. But I tried (x).lt().divides(a) It gives error message. Is there a way to solve this? Thank you for your help!
https://ask.sagemath.org/question/37098/division-algorithm-in-a-polynomial-ring-with-variable-coefficients/?sort=votes
CC-MAIN-2021-49
en
refinedweb
Description Opens the specified file for reading or writing and assigns it a unique integer file number. You use this integer to identify the file when you read, write, or close the file. The optional arguments filemode, fileaccess, filelock, and writemode determine the mode in which the file is opened. Syntax FileOpen ( filename {, filemode {, fileaccess {, filelock {, writemode {, encoding }}}}} ) Return value Integer. Returns the file number assigned to filename if it succeeds and -1 if an error occurs. If any argument's value is null, FileOpen returns null. Usage The mode in which you open a file determines the behavior of the functions used to read and write to a file. There are two functions that read data from a file: FileRead and FileReadEx, and two functions that write data to a file: FileWrite and FileWriteEx. FileRead and FileWrite have limitations on the amount of data that can be read or written and are maintained for backward compatibility. They do not support text mode. For more information, see FileRead and FileWrite. The support for reading from and writing to blobs and strings for the FileReadEx and FileWriteEx functions depends on the mode. The following table shows which datatypes are supported in each mode. When a file has been opened in line mode, each call to the FileReadEx function reads until it encounters a carriage return (CR), linefeed (LF), or end-of-file mark (EOF). Each call to FileWriteEx adds a CR and LF at the end of each string it writes. When a file has been opened in stream mode or text mode, FileReadEx reads the whole file until it encounters an EOF or until it reaches a length specified in an optional parameter. FileWriteEx writes the full contents of the string or blob or until it reaches a length specified in an optional parameter. The optional length parameter applies only to blob data. If the length parameter is provided when the datatype of the second parameter is string, the code will not compile. In all modes, PowerBuilder can read ANSI, UTF-16, and UTF-8 files. The behavior in stream and text modes is very similar. However, stream mode is intended for use with binary files, and text mode is intended for use with text files. When you open an existing file in stream mode, the file's internal pointer, which indicates the next position from which data will be read, is set to the first byte in the file. A byte-order mark (BOM) is a character code at the beginning of a data stream that indicates the encoding used in a Unicode file. For UTF-8, the BOM uses three bytes and is EF BB BF. For UTF-16, the BOM uses two bytes and is FF FE for little endian and FE FF for big endian. When you open an existing file in text mode, the file's internal pointer is set based on the encoding of the file: If the encoding is ANSI, the pointer is set to the first byte If the encoding is UTF-16LE or UTF-16BE, the pointer is set to the third byte, immediately after the BOM If the encoding is UTF-8, the pointer is set to the fourth byte, immediately after the BOM If you specify the optional encoding argument and the existing file does not have the same encoding, FileOpen returns -1. File not found If PowerBuilder does not find the file, it creates a new file, giving it the specified name, if the fileaccess argument is set to Write!. If the argument is not set to Write!, FileOpen returns -1. If the optional encoding argument is not specified and the file does not exist, the file is created with ANSI encoding. When you create a new text file using FileOpen, use line mode or text mode. If you specify the encoding parameter, the BOM is written to the file based on the specified encoding. When you create a new binary file using stream mode, the encoding parameter, if provided, is ignored. Examples This example uses the default arguments and opens the file EMPLOYEE.DAT for reading. The default settings are LineMode!, Read!, LockReadWrite!, and EncodingANSI!. FileReadEx reads the file line by line and no other user is able to access the file until it is closed: integer li_FileNum li_FileNum = FileOpen("EMPLOYEE.DAT") This example opens the file EMPLOYEE.DAT in the DEPT directory in stream mode (StreamMode!) for write only access (Write!). Existing data is overwritten (Replace!). No other users can write to the file (LockWrite!): integer li_FileNum li_FileNum = FileOpen("C:\DEPT\EMPLOYEE.DAT", & StreamMode!, Write!, LockWrite!, Replace!) This example creates a new file that uses UTF8 encoding. The file is called new.txt and is in the D:\temp directory. It is opened in text mode with write-only access, and no other user can read or write to the file: integer li_ret string ls_file ls_file = "D:\temp\new.txt" li_ret = FileOpen(ls_file, TextMode!, Write!, & LockReadWrite!, Replace!, EncodingUTF8!) See also
https://docs.appeon.com/pb2021/powerscript_reference/ch02s04s176.html
CC-MAIN-2021-49
en
refinedweb
I recently bought an Arduino pH sensor kit for measuring pH value of my hydroponic setup, it cheap but has very little information/document on how to use it, so I decided to figure it out myself on how it works and how to use it. Popular pH measurement kits for Arduino If you search for pH sensor with Arduino on Internet, you are likely see 3 major commercially available or mass-produced solutions: Atlas Scientific offers high quality and well-designed sensor kit for pH measurement. It Gravity analog pH Kit consists of a consumer-grade pH sensor and interface board, plus 3 package of calibration buffer solutions will cost $65.00. Atlas Scientific hardware is high quality but doesn’t seems to be open-sourced. DFRobot also has a solution with the same name Gravity (why?) as Atlas Scientific. its version 1 Gravity: Analog pH Sensor Kit consists of pH probe plus the sensor board and is priced at $29.50. There is a version 2 of Gravity: Analog pH Sensor Kit which comes with the board with enhanced design at $39.50 by including buffer solutions and mounting screws for the board. DFRobot published its schematic, PCB layout and Arduino code for version 1 on its website and github under GPL2 license. But it only publish the PCB layout for version 2 without schematic, so I don’t know what exactly was enhanced in the design for the version 2. The third commonly available pH sensor kit for Arduino that you see almost in every e-commerce marketplaces such as Taobao, AliExpress and Amazon is this “mystery” pH sensor kit that I bought. You can find it at as low as $17.00 for a pH probe with the sensor board. It is “mystery” because it seems that there are multiple Chinese manufacturers producing the same board but I can’t really find out which company actually own the design. I bought it anyway with the thinking that if I could understand how the pH probe works and with a little bit of “reverse-engineering” of the circuit design to help me to have better understanding of the circuitry, then I should be able to figure out on how to make it work. This fits my tinker spirit well… Other than those three commonly available pH sensor kits, there are others available in the market, but they are relatively niche with limited distribution. If you are interested on pH measurement or pH sensor board, you might to read further on A review on Seeed Studio pH and eC sensor kits – Part 1″. How pH probe work electronically? A pH probe consists of two main parts: a glass electrode and a reference electrode as shown in the picture below. I’m not very good at chemistry, so I won’t try to explain it that way, this pH theory guide provides very comprehensive explanation about the theory behind. In the nutshell, pH is determined essentially by measuring the voltage difference between these two electrodes. The pH probe is a passive sensor, which means no excitation voltage or current is required. It produces a voltage output that is linearly dependent upon the pH of the solution being measured. An ideal pH probe produces 0v output when pH value is at 7, and it produces a positive voltage (a few hundred mili-volts) when pH value go down, and a negative voltage level when pH value go up, causing by the hydrogen irons forming at the outside (and inside) of the membrane glass tip of the pH probe when the membrane comes into contact with solution. The source impedance of a pH probe is very high because the thin glass bulb has a large resistance that is typically in the range of 10 MΩ to 1000 MΩ. Whatever measurement circuit connect to the probe requires to be high-impedance in order to minimise the loading effect of the circuit. Hardware – The pH sensor board explained The pH sensor board that I bought came without any user guide, schematic or example code. I asked the small Chinese vendor for information but in vain. I decided to “reverse-engineering” the schematic diagram but eventually I find the schematic diagram at the attachment of this Arduino forum discussion. The pH sensor board can be divided into 3 different sections based on its functionality. I colour the three key sections with different color for discussion here. pH Measurement Circuit The light green section with the TLC4502 high-impedance operation amplifier basically consists of a voltage divider and a unity-gain amplifier. The pH output(Po) provided an analog output for pH measurement. As pH probe swing between positive and negative voltage, and since TLC4502 is operate with single power source, half of the TLC4502 is used as a voltage divider to provide a reference voltage of 2.5v to “float” the pH probe input so that the output of Po will be +/-2.5v based on pH value. A potentiometer RV1 is used for calibration purpose that I will further discuss later. This part of the circuit is well-designed and it is all it needed for measuring the pH value. The other parts of the board in my opinion are not well designed and sort of in the category of “nice-to-have” and not essential. pH Threshold Detection Circuit The yellow section provides a pH threshold detection/notification circuit. For example, you could adjust the potentiometer RV2 so that when pH level reach a threshold level (e.g. say 7.5), the RED LED D1 will be turned on (Digital output Do changed from high to low). Alternatively, you could use it to detect the lower pH level threshold, say, when pH value is below 5.5, the RED LED will be turned off and Do changes from low to high. But you can’t set both lower and upper thresholds with this circuit. In my opinion, it will be easier to just use software solution than this hardware solution for threshold detection. Temperature Reading Circuit The light blue/cyan section of the board consists of 1 and a half LM358 OpAmp, and provides an analog reading at To. U2B of LM358 acts as a not so accurate voltage divider and provides a voltage reference of 2.5v to a Wheatstone bridge that consists of R13 – R15 and a thermistor TH1. The U3A behave as an differential OpAmp, the output is then pass through a low-pass filter and further amplified by a non-inverting OpAmp U3B. This entire circuit has nothing to do with pH measurement, at least not directly. I will talk about this toward the end of this article. The sole reason for measuring temperature in the context of measuring pH value is because that pH curve slope changes when temperature change between 0 to 100 degree Celsius. It is therefore important to measure the temperature of the solution, and add temperature compensation factor into the pH calculation. One thing interesting is that all the manufacturers for this board design that I saw in the market had the thermistor solder on the board instead of having a water-proof thermistor probe like the one that I described in my my previous post. By soldering thermistor on-board, that means the thermistor is measuring ambience temperature near the board instead of the temperature of the solution where pH was measured, this simply doesn’t make sense. This makes me think that all those Chinese manufacturers are simply copying the design from a circuit diagram or reverse-engineering without understanding the purpose of having the thermistor for temperature measurement in the context of pH measurement application. Now I studied and understand the circuit diagram, it is time to calibrate the pH sensor and write some code for measuring the pH value! How to calibrate the pH sensor? As discussed previously that by design the pH probe oscillates between negative and positive values. When the pH reading is at 7.0, the pH output is offset by 2.5v so that both negative and positive values generated by the pH probe can be represented as positive values in full range, this means that when pH is at 0, the Po would be at 0v, and when pH is at 14, the Po would be at 5v. In order to make sure that when pH is at 7.0, we can calibrate the reading to make sure that Po will be at 2.5v by disconnecting the probe from the circuit and short-circuiting the inner pin of the BNC connector with the outer BNC ring. With a multimeter measure the value of Po pin and adjust the potentiometer to be 2.5V. Don’t worry if you don’t have a multimeter, you can write an Arduino sketch to read the analog input by connecting the Po to analog input A0 of the Arduino. ph_calibrate.ino #include <Arduino.h> const int adcPin = A0; void setup() { Serial.begin(115200); } void loop() { int adcValue = analogRead(adcPin); float phVoltage = (float)adcValue * 5.0 / 1024; Serial.print("ADC = "); Serial.print(adcValue); Serial.print("; Po = "); Serial.println(phVoltage, 3); delay(1000); } Connect Po to Aanalog input A0 on Arduino, and G to Arduino GND. Run the Arduino sketch, and open the Serial Monitor of Arduino IDE to observe the reading, slowly adjust the potentiometer RV1 (the one near the BNC connector on the board) until the Po reading equal to 2.50v. This is assuming that all the pH probe are equal and will produce exactly 0v at pH reading of 7.0, but in reality all probes are slightly different from each other, especially for consumer-grade pH probe. Temperature also affect the reading of pH sensor slightly, so the better way is to use a pH buffer solution of pH=7.0 to calibrate the probe. All the buffer solution will have the temperature compensation information on its package that you could factor-in for your calibration. pH buffer packages for calibration purpose available in liquid form or in powders form, liquid pack is easy to use but powders pack is good for storage. These solutions are sold in different values but the most common are pH 4.01, pH 6.86 and pH 9.18. pH values are relatively linear over a certain range (between ph 2 to ph 10), we need two calibration points to determine the linear line, and then derives the slope of the line so that we could calculate any pH value with a given voltage output (see Figure 2 chart above). What value of pH buffer to use for this second calibration depends on your application, if your application is for measuring acidic solution, use buffer solution for ph=4.01 for the second calibration; buf if your application is mostly for measuing basic/alkanine solution, use buffer solution of ph=9.18 for the second calibration. In my case, as hydroponic for vegetable grow tends to be slightly acidic with ph ranging between 5.5 – 6.5, I use ph=4.01 buffer solution for my calibration. To avoid cross contamination, dip the probe in distill water for a couple of minites before dipping it in different buffer solutions. For increase the accuracy, let the probe stay in the buffer solution for a couple of minutes before taking the reading as the result. Use the same Arduino sketch to get the voltage reading for pH=4.01, and write down the voltage value, in my case, the voltage is 3.06 @ pH=4.01. The voltage readings at ph of 4.01 Vph4 and at pH of 7.0 Vph7 allows us to draw a straight line, and we can get the Voltage change per pH value m as: m = (ph7 - ph4) / (Vph7 - Vph4) / m = (7 - 4.01) / (2.5 - 3.05) m = -5.436 So the pH value at any voltage reading at Po can be derived with this formula: pH = pH7 - (Vph7 - Po) * m i.e. pH = 7 - (2.5 - Po) * m Measure pH value With the formula, we can create the Arduino sketch to measure the pH value based on the voltage reading at the Po. #include <Arduino.h> const int adcPin = A0; // calculate your own m using ph_calibrate.ino // When using the buffer solution of pH4 for calibration, m can be derived as: // m = (pH7 - pH4) / (Vph7 - Vph4) const float m = -5.436; void setup() { Serial.begin(115200); } void loop() { float Po = analogRead(adcPin) * 5.0 / 1024; float phValue = 7 - (2.5 - Po) * m; Serial.print("ph value = "); Serial.println(phValue); delay(5000); } How about Temperature Measurement? As I mentioned before it doesn’t make sense to measure the ambience temperature near the PCB, so the first thing I did is de-solder the on-board thermistor and replace it with one of those water-prove thermistors. A Wheatstone Bridge circuit is nothing more than two simple series-parallel arrangements of resistances connected between a reference voltage supply and ground producing zero voltage difference between the two parallel branches when balanced. When one of the arm of the resistance arrangements consists of a thermistor, its resistance changes as temperature changed, causing the imbalance of two resistance arms and a voltage difference developed between the two parallel branches in according to the change of the thermistor resistance which is directly related to the change of the temperature. Specifically to this circuit, the voltage reference is provided by U2B which formed a voltage divider and produce a reference voltage (let’s called it Vref) of 2.5V at pin 7 of U2B. According to the characteristics of thermistor, the thermistor will have a resistance of 10k-ohms at temperature of 25 degree Celsius. The Wheatstone Bridge will be balanced and the output voltage Vd of the Wheatstone Bridge at the terminals of resistors R16 and R18 will be zero and will swing above and below 0 volt when temperature changes. The Vd is then amplified by U3A which seems to be a differential amplifier, the U3B is a typical non-inverting amplifier. As I’m not quite sure about the gain of U3A so I decided to ask at Electrical Engineering StackExchange, and I got my questions answered within an hour. The circuit has a total gain of 14.33 when the thermistor is at 10k (i.e. when temperature is at 25 degree Celsius). However, the gain of U3A will change when the thermistor resistance change, obviously this is not a very good design. I also got confirmed my suspicion that there is a missing 20k resistor between between pin 3 of U3A and ground on the circuit diagram, interestingly the circuit board is designed to have this resistor, but where the resistor is supposed to be is left empty (why?). Further inspect the circuit I noticed that the R12 on the board is actually having a value of 51.1k instead of 100k as shown in the circuit diagram. So the over gain will be 1.33+5.11+1=7.44. We can derive the Vd based on the measured voltage of To, and further derive the value of resistance of TH1 at the temperature where To is measured: Vd = To / 7.44 Vd = Vref * (R14 / (R14 + R15)) - Vref * (R13 / (R13 + TH1)) Absolute temperature T based on Steinhart–Hart equation for thermistor calculation can then be derived from: T = 1 / (1/To + 1/B * ln(TH1/Ro)) Where: T is the absolute temperature to be measured in Kelvin; To is the reference temperature in Kelvin at 25 degree Celsius; Ro is the thermistor resistance at To; B is Beta or B parameter = 3950, provided by manufacturer in their specification. In theory, the primary benefit of Wheatstone Bridge circuit is its ability to provide extremely accurate measurements in contrast with a simple voltage divider, as a voltage divider is often affected by the loading impedance of the measuring circuit. In actual application, the accuracy of the Wheatstone Bridge is highly depend on the precision of the resistors used to form the Wheatstone Bridge, the precision of voltage reference as well as the circuit that connected to the Wheatstone Bridge. Although I figured out the formula on how to measure the temperature, I did not write the code to calculate the temperature, as the gain of U3A will vary as the value of the thermistor varies in according to the temperature. This make the reading result almost unpredictable and I will probably not use this circuit for measuring the water temperature without further modifying the design. In Summary Overall, this pH sensor board has a good pH measurement circuit design, the rest parts of the circuit are quite useless and a little bit over-engineered. By eliminating the bad part of the circuit design and kept the good part, it could be simpler and maybe slightly cheaper than current design for a pH sensor board. Related topic: A review on Seeed Studio pH and EC sensor kits – Part 1(PH)”. A review on Seeed Studio pH and EC sensor kits – Part 2(EC). 22 comments by readers Dear Sir! It’s really helpful topic. Thank you so much for your sharing. I have been making 1 pH meter kit since sep-2019 up to now, so I have 1 problem; I tested with pH7.01 buffer solution, it show me pH = 7.04 at 2.51V; and with pH4.01 buffer solution I got pH = 3.98 at 3.02V. Now I take 1 liter waste water sample from the collection pit (Wastewater system) then tested, it displayed pH = 7.62 at 2.49V ….. but I droped the pH probe to test directly with waste water treatment system then it showed pH = – 6.68 at 4.83V. Can you explain for me why the “Po” voltage out so high up from 2.49V to 4.83V with the same water sample? Thank you so much, I don’t know what caused it, but ph=-6.68 at 4.83v is basically out of the linear range that the probe could accurately measured, or your probe is not connected. Also, don’t measure pH on running flow as the value will not be accurate, and if you have a TDS probe, put it away from pH probe. Thanks you for the nice article. I would like to add the following. When the pH sensor module is used with a micro controller like Arduino, calculations and calibrations can be simplified by eliminating the voltage calculation. When calibrating with 7.0 pH solution, float phVoltage = (float)adcValue * 5.0 / 1024 = 2.5only if adcValue = 512so RV1 can be tuned to that. The slope can be used in equation I think voltage calculation is useful when someone designs a circuit without using software and in which case, the signal from ‘Do’ can be used to trigger a device like a relay to turn something on/off. Depending on the circuit, active-high or active-low relay can be used. Thanks for the comment. On ‘Do’, it has Hysteresis effect, meaning if you expect it to trigger something when a value is reach to a certain point, it does not necessary will trigger at the same point when the the value falling back. So personally, I prefer to use software solution than the ‘Do’. I too prefer the software approach. Many of these boards are based on expired patents, that’s why they are ‘low-cost’. Which means they are old and back in the days, software were not widely used. Most circuits were designed and run based on hardware. I could think of a few applications that might utilize the function. Of course, accuracy can be achieved, not at the speed we are accustomed to but enough to satisfy the needs. Hi, great article. The ph changes with temperature. how would we apply temp compensation to the ph readings. Thanks The PH variation due to temperature is less significant than EC measurement, for example, ph of 4.00 at 25 degree C will be around 4.01 @30 C and increase by 0.01 for approx every 5 degrees temperature change, but it remains at about 4.00 in the range between 10 – 25 C. So I will simply ignore the temperature factor in ph measurement unless you measuring ph about 30 degree C or below 10 degree C. I won’t trust the temperature measurement circuit for this particular board for the reason that I mentioned in my article. If you really want to have temperature measurement, get a water-proof DS18B20. Hi, there Check out below linked pdf file that gives a formular on how to apply temp compensation. Hi Henry, I am not an expert but I want to build a smart PH and EC sensor which helps me regulate the water nutrients and PH levels for an indoor hydroponics system. Can you guide me on an economical way to achieve this? Regards Chan Hi, There Thanks for the post and discussion, I too duplicated the whole circuit for PH measurement, as well as adding water-proofing DS18B20 and EC sensor on PCB, in which I planned to build an eco measuring system for in door plant growing measurement. Everything worked fine but I got two problems: 1. Big variations of PH value output measured alone without DS18B20 and EC sensor, not sure why, since I don’t have oscillator, couldn’t identify where the variation coming from, maybe you could help with it. 2. All 3 sensors putting together, DS18B20 and EC sensors working fine, but PH sensor got significant interference by DS18B20 and EC sensor, which seems to be reasonable(each of them emitted potential into solution with impact the micro-voltage level PH Sensor. I will have to isolate/disconnect DS18B20 and EC sensor through Arduino software and hardware switch circuit. The calibration of PH sensor went well, which I used standard buffer resolution, output the data exactly as expected, but when I put it into a glass with tap water, it gave big variation, as I mentioned above… Would appreciate if anyone could help me out. The things that I could think of 1) PH sensor doesn’t work well with running water or in a current stream. 2) Which MCU you are using? Arduino? I would suggest that you add a 10uF and 100nF capacitors in parallel at PH sensor board’s Vcc and Ground. 3) If you are using a PH sensor board other than this design, make sure the output impedance of the board that feed into ADC of Arduino is less than 10k ohm, you can read about my recent experience on evaluation Seeed Studio’s pH sensor board for the problem that I encountered. I don’t have any issue of having DS18B20 temperature sensor together with PH sensor, actually in my setup, my PH sensor is right next to the DS18B20 with less than 1cm separation, DS18B20 should stay away from EC sensor as EC sensor generated a magnetic/electrode field around its tip. Thanks for the quick response, Henry. In my view, DS18B20 and EC sensor once power on having direct impact to PH sensor. As confirmed by your comment, I can understand the EC sensor having impact to PH sensor, DS18B20 shouldn’t have impact to PH sensor, I don’t understand this part, need to figure out. Another alternative is to use water-proof thermal resistor B 3950 with screwnut for easy installation. I am actually making a board with all components on including PH, EC sensor circuits, with ATMEGA328P MCU simulating as Arduino Nano. To add 10uF and 100nF to VCC of the Amplifier CD4052 is agood idea, somehow I missed it. The layout needs to be improve to have the CD4052 closer to PH BNC slot. I haven’t experienced the ADC issue with impedence problem, will check out your blog later for reference. Great comments, thanks Henry. I mentioned in my research that the temperature sensor is being removed and I installed another sensor Can the sensor be left in place and the measured temperature adjusted? What is the effect of temperature on the pH value? On pH dependency on temperature, you should consult your probe manufacturer’s user guide. In general, for instance at 0 °C the pH of pure water is about 7.47. At 25 °C it is 7.00, and at 100 °C it is 6.14. For practical application, it depend on where is your region, I lives in tropical region, and see my answer to question 3 above. Hi Henry! First of all, i want to say that its very very useful information – really apreciate it. I making final year project – automatic control system for hydroponics. As you wrote at the beginning of the post, there is very little – almost none information about how to calibrate properly these sensors. Your post is very informative, but still i couldn’t figure out the point about temperature compensation. Can you please clearify it? From what i gave understood, is that you didn’t wrote any code for temp. compensation due to the fact that the built in thermistor is useless- beacuase it measures ambient temp. (not in water). I do want to make precise calibration on my EC Sensor, and i got submersible DS18B20 sensor. Should i left the T1 (Temperature out) pin on the EC sensor unconnected? Or if i don’t want to use it – connect to GND? Any help is appreciated. I gave a few reasons on why I didn’t do temperature compensation on pH measurements in the comments, you can scroll up and read it. For the EC, I’m not sure what EC sensor you are using, so I can’t really comment on how it should be connected, in general, if you don’t use it, leave it open should be fine. You can also read my another post which uses a water-proof DS18B20 with an EC sensor. Hi Henry, I found your article really very interesting. I followed all the steps in your article, I realized that when I short the BNC signal to make the 2.5 volt adjustment, the maximum range I can get by adjusting the trimmer is 2.4- 4.99 Volts. My board does not go below 2.4 volts. Do you think it is defective? Hello, I have the same issue while calibrating the minimum I can get it 2.6 volts, I can’t event reach 2.5… For me the range is 2.6-4.99 You can do what i did, while calibrating the minimun I only was able to get 2.59V, to be able to get get proper readings of pH value i just get left the bnc potentiometer at the minimum possible and got the readings of the powder solutions: 6.86 and 4.01. And with this using the equation y = ax+b i could get the correct values of pH. Ex: pH = a*xV + b 4.01 = a*(3.04)+b 6.86 = a*(2.54)+b Then i get the result y = -5.7*21.338 and thats it. Hi, I have the same problem! When I shortened the BNC to make the 2.5 volt adjustment, the ADC value only goes down to 831 and not 512? I can’t adjust RV1 further to get to 2.5 volts! Do you think the board is defective? Thanks for your help! Hi, when trying to calibrate the sensor I get an incorrect voltage from PO when trying to adjust it with the (BOATER 3296) potentiometer. I try to read it from the PO pin but it just reads ADC = 872; Po = 4.262V. The actual readings can be measured at the external part of the BNC probe or at the potentiometer. They show the correct voltage that I’ve tuned to 2.5V for the 7 Ph baseline. Is this a fault with the BNC interface? I am running this code to measure the voltage at PO, the potentiometer and the BNC connector #include const int adcPin = A0; void setup() { Serial.begin(115200); } void loop() { int adcValue = analogRead(adcPin); float phVoltage = (float)adcValue * 5.0 / 1023; Serial.print(“ADC = “); Serial.print(adcValue); Serial.print(“; Po = “); Serial.println(phVoltage); delay(1000); } As far as I can see, there are no data of C values of the schematic. Did any of you know what the values are?
https://www.e-tinkers.com/2019/11/measure-ph-with-a-low-cost-arduino-ph-sensor-board/
CC-MAIN-2021-49
en
refinedweb
Structure encapsulating the complete state of an 802.11 device. More... #include <net80211.h> Structure encapsulating the complete state of an 802.11 device. An 802.11 device is always wrapped by a network device, and this network device is always pointed to by the netdev field. In general, operations should never be performed by 802.11 code using netdev functions directly. It is usually the case that the 802.11 layer might need to do some processing or bookkeeping on top of what the netdevice code will do. Definition at line 786 of file net80211.h. The net_device that wraps us. Definition at line 789 of file net80211.h. Referenced by ath5k_probe(), ath5k_start(), ath_pci_probe(), iwlist(), iwstat(), net80211_alloc(), net80211_autoassociate(), net80211_change_channel(), net80211_check_settings_update(), net80211_deauthenticate(), net80211_free(), net80211_handle_auth(), net80211_prepare_assoc(), net80211_prepare_probe(), net80211_probe_start(), net80211_register(), net80211_rx(), net80211_rx_err(), net80211_set_rate_idx(), net80211_set_state(), net80211_step_associate(), net80211_tx_complete(), net80211_tx_mgmt(), net80211_unregister(), rtl818x_init_hw(), rtl818x_init_rx_ring(), rtl818x_init_tx_ring(), rtl818x_probe(), rtl818x_start(), trivial_change_key(), trivial_init(), wpa_derive_ptk(), wpa_psk_start(), and wpa_send_eapol(). List of 802.11 devices. Definition at line 792 of file net80211.h. Referenced by net80211_check_settings_update(), net80211_get(), net80211_register(), and net80211_unregister(). 802.11 device operations Definition at line 795 of file net80211.h. Referenced by net80211_alloc(), net80211_change_channel(), net80211_filter_hw_channels(), net80211_netdev_close(), net80211_netdev_irq(), net80211_netdev_open(), net80211_netdev_poll(), net80211_netdev_transmit(), net80211_prepare_assoc(), net80211_prepare_probe(), net80211_probe_start(), net80211_probe_step(), net80211_process_capab(), net80211_process_ie(), net80211_register(), net80211_set_rate_idx(), net80211_set_state(), and net80211_unregister(). Driver private data. Definition at line 798 of file net80211.h. Referenced by ath5k_attach(), ath5k_config(), ath5k_detach(), ath5k_irq(), ath5k_poll(), ath5k_probe(), ath5k_remove(), ath5k_setup_bands(), ath5k_start(), ath5k_stop(), ath5k_tx(), ath9k_bss_info_changed(), ath9k_config(), ath9k_irq(), ath9k_process_rate(), ath9k_start(), ath9k_stop(), ath9k_tx(), ath_isr(), ath_pci_probe(), ath_pci_remove(), ath_tx_setup_buffer(), ath_tx_start(), grf5101_rf_init(), grf5101_rf_set_channel(), grf5101_rf_stop(), grf5101_write_phy_antenna(), max2820_rf_init(), max2820_rf_set_channel(), max2820_write_phy_antenna(), net80211_alloc(), rtl818x_config(), rtl818x_free_rx_ring(), rtl818x_free_tx_ring(), rtl818x_handle_rx(), rtl818x_handle_tx(), rtl818x_init_hw(), rtl818x_init_rx_ring(), rtl818x_init_tx_ring(), rtl818x_irq(), rtl818x_poll(), rtl818x_probe(), rtl818x_set_hwaddr(), rtl818x_start(), rtl818x_stop(), rtl818x_tx(), rtl818x_write_phy(), rtl8225_read(), rtl8225_rf_conf_erp(), rtl8225_rf_init(), rtl8225_rf_set_channel(), rtl8225_rf_set_tx_power(), rtl8225_rf_stop(), rtl8225_write(), rtl8225x_rf_init(), rtl8225z2_rf_init(), rtl8225z2_rf_set_tx_power(), sa2400_rf_init(), sa2400_rf_set_channel(), sa2400_write_phy_antenna(), write_grf5101(), write_max2820(), and write_sa2400(). Information about the hardware, provided to net80211_register() Definition at line 801 of file net80211.h. Referenced by iwlist(), iwstat(), net80211_filter_hw_channels(), net80211_free(), net80211_prepare_probe(), net80211_probe_step(), net80211_process_ie(), net80211_register(), net80211_rx(), net80211_send_assoc(), and net80211_step_associate(). A list of all possible channels we might use. Definition at line 806_add_channels(), net80211_change_channel(), net80211_duration(), net80211_filter_hw_channels(), net80211_prepare_probe(), net80211_probe_step(), net80211_process_ie(), net80211_register(), and rtl818x_config(). The number of channels in the channels array. Definition at line 809 of file net80211.h. Referenced by iwstat(), net80211_add_channels(), net80211_change_channel(), net80211_filter_hw_channels(), net80211_prepare_probe(), net80211_probe_start(), net80211_probe_step(), and net80211_process_ie(). The channel currently in use, as an index into the channels array. Definition at line 812_change_channel(), net80211_duration(), net80211_filter_hw_channels(), net80211_prepare_probe(), net80211_probe_start(), net80211_probe_step(), net80211_process_ie(), net80211_register(), and rtl818x_config(). A list of all possible TX rates we might use. Rates are in units of 100 kbps. Definition at line 818 of file net80211.h. Referenced by ath5k_config(), ath9k_config(), iwstat(), net80211_cts_duration(), net80211_ll_push(), net80211_marshal_request_info(), net80211_prepare_probe(), net80211_process_ie(), net80211_set_rate_idx(), net80211_set_rtscts_rate(), net80211_tx_mgmt(), rc80211_set_rate(), rc80211_update_rx(), rc80211_update_tx(), rtl818x_config(), and rtl818x_tx(). The number of transmission rates in the rates array. Definition at line 821 of file net80211.h. Referenced by iwstat(), net80211_marshal_request_info(), net80211_prepare_probe(), net80211_process_ie(), net80211_set_rate_idx(), net80211_set_rtscts_rate(), rc80211_maybe_set_new(), rc80211_pick_best(), and rc80211_update_rx(). The rate currently in use, as an index into the rates array. Definition at line 824 of file net80211.h. Referenced by ath5k_config(), ath9k_config(), iwstat(), net80211_cts_duration(), net80211_ll_push(), net80211_prepare_assoc(), net80211_prepare_probe(), net80211_process_ie(), net80211_set_rate_idx(), net80211_set_rtscts_rate(), net80211_tx_mgmt(), rc80211_maybe_set_new(), rc80211_pick_best(), rc80211_set_rate(), rc80211_update_tx(), rtl818x_config(), and rtl818x_tx(). The rate to use for RTS/CTS transmissions. This is always the fastest basic rate that is not faster than the data rate in use. Also an index into the rates array. Definition at line 831 of file net80211.h. Referenced by ath5k_config(), ath9k_config(), net80211_cts_duration(), net80211_set_rtscts_rate(), and rtl818x_config(). Bitmask of basic rates. If bit N is set in this value, with the LSB considered to be bit 0, then rate N in the rates array is a "basic" rate. We don't decide which rates are "basic"; our AP does, and we respect its wishes. We need to be able to identify basic rates in order to calculate the duration of a CTS packet used for 802.11 g/b interoperability. Definition at line 843 of file net80211.h. Referenced by net80211_marshal_request_info(), net80211_process_ie(), and net80211_set_rtscts_rate(). The asynchronous association process. When an 802.11 netdev is opened, or when the user changes the SSID setting on an open 802.11 device, an autoassociation task is started by net80211_autoassocate() to associate with the new best network. The association is asynchronous, but no packets can be transmitted until it is complete. If it is successful, the wrapping net_device is set as "link up". If it fails, assoc_rc will be set with an error indication. Definition at line 858 of file net80211.h. Referenced by net80211_alloc(), net80211_autoassociate(), net80211_netdev_close(), and net80211_step_associate(). Network with which we are associating. This will be NULL when we are not actively in the process of associating with a network we have already successfully probed for. Definition at line 866 of file net80211.h. Referenced by iwstat(), net80211_autoassociate(), net80211_step_associate(), trivial_init(), wpa_handle_3_of_4(), wpa_make_rsn_ie(), and wpa_start(). Definition at line 874 of file net80211.h. Referenced by net80211_autoassociate(), and net80211_step_associate(). Definition at line 875 of file net80211.h. Referenced by net80211_autoassociate(), and net80211_step_associate(). Context for the association process. This is a probe_ctx if the PROBED flag is not set in state, and an assoc_ctx otherwise. Referenced by net80211_autoassociate(), and net80211_step_associate(). Security handshaker being used. Definition at line 879 of file net80211.h. Referenced by net80211_check_settings_update(), net80211_netdev_close(), net80211_prepare_assoc(), net80211_step_associate(), wpa_psk_start(), and wpa_psk_step(). State of our association to the network. Since the association process happens asynchronously, it's necessary to have some channel of communication so the driver can say "I got an association reply and we're OK" or similar. This variable provides that link. It is a bitmask of any of NET80211_PROBED, NET80211_AUTHENTICATED, NET80211_ASSOCIATED, NET80211_CRYPTO_SYNCED to indicate how far along in associating we are; NET80211_WORKING if the association task is running; and NET80211_WAITING if a packet has been sent that we're waiting for a reply to. We can only be crypto-synced if we're associated, we can only be associated if we're authenticated, we can only be authenticated if we've probed. If an association process fails (that is, we receive a packet with an error indication), the error code is copied into bits 6-0 of this variable and bit 7 is set to specify what type of error code it is. An AP can provide either a "status code" (0-51 are defined) explaining why it refused an association immediately, or a "reason code" (0-45 are defined) explaining why it canceled an association after it had originally OK'ed it. Status and reason codes serve similar functions, but they use separate error message tables. A iPXE-formatted return status code (negative) is placed in assoc_rc. If the failure to associate is indicated by a status code, the NET80211_IS_REASON bit will be clear; if it is indicated by a reason code, the bit will be set. If we were successful, both zero status and zero reason mean success, so there is no ambiguity. To prevent association when opening the device, user code can set the NET80211_NO_ASSOC bit. The final bit in this variable, NET80211_AUTO_SSID, is used to remember whether we picked our SSID through automated probing as opposed to user specification; the distinction becomes relevant in the settings applicator. Definition at line 921 of file net80211.h. Referenced by ath5k_config(), ath9k_bss_iter(), ath9k_config_bss(), iwlist(), iwstat(), net80211_autoassociate(), net80211_check_settings_update(), net80211_handle_mgmt(), net80211_ll_push(), net80211_netdev_close(), net80211_netdev_open(), net80211_rx(), net80211_send_disassoc(), net80211_set_state(), net80211_step_associate(), net80211_update_link_quality(), and rtl818x_config(). Return status code associated with state. Definition at line 924 of file net80211.h. Referenced by net80211_autoassociate(), net80211_deauthenticate(), net80211_ll_push(), net80211_set_state(), and net80211_step_associate(). RSN or WPA information element to include with association. If set to NULL, none will be included. It is expected that this will be set by the init function of a security handshaker if it is needed. Definition at line 932 of file net80211.h. Referenced by net80211_marshal_request_info(), net80211_prepare_assoc(), wpa_psk_init(), wpa_send_2_of_4(), and wpa_start(). 802.11 cryptosystem for our current network For an open network, this will be set to NULL. Definition at line 940 of file net80211.h. Referenced by ath_tx_setup_buffer(), net80211_handle_auth(), net80211_netdev_close(), net80211_netdev_transmit(), net80211_prepare_assoc(), net80211_rx(), net80211_tx_mgmt(), trivial_change_key(), trivial_init(), and wpa_install_ptk(). 802.11 cryptosystem for multicast and broadcast frames If this is NULL, the cryptosystem used for receiving unicast frames will also be used for receiving multicast and broadcast frames. Transmitted multicast and broadcast frames are always sent unicast to the AP, who multicasts them on our behalf; thus they always use the unicast cryptosystem. Definition at line 951 of file net80211.h. Referenced by net80211_prepare_assoc(), net80211_rx(), and wpa_install_gtk(). MAC address of the access point most recently associated. Definition at line 954 of file net80211.h. Referenced by ath5k_config(), ath9k_bss_iter(), eapol_key_rx(), net80211_handle_assoc_reply(), net80211_ll_push(), net80211_prepare_assoc(), net80211_rx(), net80211_send_disassoc(), net80211_step_associate(), rtl818x_config(), wpa_derive_ptk(), and wpa_send_eapol(). SSID of the access point we are or will be associated with. Although the SSID field in 802.11 packets is generally not NUL-terminated, here and in net80211_wlan we add a NUL for convenience. Definition at line 962 of file net80211.h. Referenced by iwstat(), net80211_autoassociate(), net80211_check_settings_update(), net80211_marshal_request_info(), net80211_prepare_assoc(), net80211_probe_start(), net80211_process_ie(), net80211_step_associate(), and wpa_psk_start(). Association ID given to us by the AP. Definition at line 965 of file net80211.h. Referenced by ath9k_bss_iter(), and net80211_handle_assoc_reply(). TSFT value for last beacon received, microseconds. Definition at line 968 of file net80211.h. Referenced by net80211_prepare_assoc(), and net80211_update_link_quality(). Time between AP sending beacons, microseconds. Definition at line 971 of file net80211.h. Referenced by iwstat(), and net80211_prepare_assoc(). Smoothed average time between beacons, microseconds. Definition at line 974 of file net80211.h. Referenced by iwstat(), and net80211_update_link_quality(). Physical layer options. These control the use of CTS protection, short preambles, and short-slot operation. Definition at line 983 of file net80211.h. Referenced by ath5k_config(), ath5k_txbuf_setup(), ath9k_bss_info_changed(), net80211_duration(), net80211_process_capab(), net80211_process_ie(), rtl818x_tx(), and rtl8225_rf_conf_erp(). Signal strength of last received packet. Definition at line 986 of file net80211.h. Referenced by iwstat(), and net80211_rx(). Rate control state. Definition at line 989 of file net80211.h. Referenced by net80211_free(), net80211_rx(), net80211_step_associate(), net80211_tx_complete(), rc80211_maybe_set_new(), rc80211_pick_best(), rc80211_set_rate(), rc80211_update(), and rc80211_update_tx(). Fragment reassembly state. Definition at line 994 of file net80211.h. Referenced by net80211_accum_frags(), net80211_free_frags(), and net80211_rx_frag(). The sequence number of the last packet we sent. Definition at line 997 of file net80211.h. Referenced by net80211_ll_push(), and net80211_tx_mgmt(). Packet duplication elimination state. We are only required to handle immediate duplicates for each direct sender, and since we can only have one direct sender (the AP), we need only keep the sequence control field from the most recent packet we've received. Thus, this field stores the last sequence control field we've received for a packet from the AP. Definition at line 1008 of file net80211.h. Referenced by net80211_rx(). RX management packet queue. Sometimes we want to keep probe, beacon, and action packets that we receive, such as when we're scanning for networks. Ordinarily we drop them because they are sent at a large volume (ten beacons per second per AP, broadcast) and we have no need of them except when we're scanning. When keep_mgmt is TRUE, received probe, beacon, and action management packets will be stored in this queue. Definition at line 1021 of file net80211.h. Referenced by net80211_alloc(), net80211_handle_mgmt(), and net80211_mgmt_dequeue(). RX management packet info queue. We need to keep track of the signal strength for management packets we're keeping, because that provides the only way to distinguish between multiple APs for the same network. Since we can't extend io_buffer to store signal, this field heads a linked list of "RX packet info" structures that contain that signal strength field. Its entries always parallel the entries in mgmt_queue, because the two queues are always added to or removed from in parallel. Definition at line 1034 of file net80211.h. Referenced by net80211_alloc(), net80211_handle_mgmt(), and net80211_mgmt_dequeue(). Whether to store management packets. Received beacon, probe, and action packets will be added to mgmt_queue (and their signal strengths added to mgmt_info_queue) only when this variable is TRUE. It should be set by net80211_keep_mgmt() (which returns the old value) only when calling code is prepared to poll the management queue frequently, because packets will otherwise pile up and exhaust memory. Definition at line 1046 of file net80211.h. Referenced by net80211_handle_mgmt(), and net80211_keep_mgmt().
http://dox.ipxe.org/structnet80211__device.html
CC-MAIN-2019-51
en
refinedweb
En savoir plus à propos de l'abonnement Scribd Découvrez tout ce que Scribd a à offrir, dont les livres et les livres audio des principaux éditeurs. OISD-RP-174 Second Edition Jul y 2008 For Restricted Circulation WELL CONTROL OISD RP 174 Prepared by FUNCTIONAL COMMITTEE FOR REVIEW OF WELL CONTROL OIL INDUSTRY SAFETY DIRECTORATE 7th Floor, New Delhi House, 27, Barakhamba Road, New Delhi 110 001. III NOTE OISD (Oil Industry Safety Directorate) publications are prepared for use in the Oil and Gas Industry under Ministry of Petroleum & Natural Gas. These are the property of Ministry of Petroleum & Natural Gas and shall not be reproduced or copied and loaned or exhibited to others without written consent from OISD. Though every effort has been made to assure the accuracy and reliability of the data contained in the document, OISD hereby expressly disclaims any liability or responsibility for loss or damage resulting from their use. The document is intended to supplement rather than replace the prevailing statutory requirements. IV FOREWORD The Oil Industry in India is 100 years old. Because of various collaboration agreements, a variety of international codes, standards and practices have been in vogue. Standardisation in design philosophies and operating and maintenance practices at a national level was hardly in existence. This coupled with feed back from some serious accidents that occurred in the recent past in India and abroad, emphasised the need for the industry to review the existing state of art in designing, operating and maintaining oil and gas installations. With this in view, the Ministry of Petroleum and Natural Gas in 1986 constituted a Safety Council assisted by the Oil Industry Safety Directorate (OISD) staffed from within the industry in formulating and implementing a series of self regulatory measures aimed at removing obsolescence, standardising and upgrading the existing standards to ensure safe operations. Accordingly, OISD constituted a number of functional committees of experts nominated from the industry to draw up standards and guidelines on various subjects. The recommended practices for "Well Control" have been prepared by the Functional Committee for revision of Well Control". This document is based on the accumulated knowledge and experience of industry members and the various national / international codes and practices. This document covers recommended practices for selection of well control equipment, installation requirements of well control equipment, inspection and maintenance of well control equipment, methods for well control and competence of personnel. Well Control issues related to both onland and offshore operations have been covered. Suggestions are invited from the users after it is put into practice to improve the document further. Suggestions for amendments to this document should be addressed The Coordinator Functional Committee on Well Control, Oil Industry Safety Directorate, 7thFloor, New Delhi House, 27, Barakhamba Road, New Delhi -110 001. Email: oisd@vsnl.com V COMMITTEE FOR PREPARING STANDARD ON " WELL CONTROL" 1998 ----------------------------------------------------------------------------------------------------------------------------- Name Designation & Position in Organisation Committee ---------------------------------------------------------------------------------------------------------------------------- 1.S/Shri .A.K. Hazarika GM(D) Leader ONGC, Mumbai 2. S.L. Arora GM(D) Member ONGC, Ahmedabad 3. A. Borbora Dy. CE(D) Member OIL, Duliajan 4. C.S. Verma Dy. CE(D) Member Oil, Rajasthan 5. A. Verma CE(P) Member ONGC, Mumbai 6. V.P. Mahawar CE(D) Member ONGC, Dehradun 7. B.K. Baruah DGM(D) Member ONGC, ERBC 8. S.K. Ahuja SE(D) Member ONGC, ERBC 9. P.K. Garg Addl. Director (E&P) Co-ordinator ---------------------------------------------------------------------------------------------------------------------------- VI Functional Committee for Complete Review of OISD-STD-174, 2008 LEADER Shri K. Satyanarayan Oil and Natural Gas Corporation Ltd., Ankleshwar. MEMBERS Shri V.P. Mahawar Oil and Natural Gas Corporation Ltd., Ahmedabad. Shri R.K. Rajkhowa Oil India Ltd., Duliajan, Assam. Shri S.K. Ahuja Oil and Natural Gas Corporation Ltd., Mumbai. Shri B.S. Saini Oil and Natural Gas Corporation Ltd., Sibsagar, Assam. Shri A.J . Phukan Oil India Ltd., Duliajan, Assam. MEMBER COORDINATOR Shri H.C.Taneja Oil Industry Safety Directorate, New Delhi. VII Contents Section Description Page 1.0 Introduction 1 2.0 Scope 1 3.0 Definitions 1 4.0 Planning for Wel l Control 3 4.1 Cause of Ki ck 3 4.2 Cause of Reduction in Hydrostatic Head 3 4.3 Well Pl anning 3 5.0 Diverter Equipment and Control System 3 5.1 Procedures for Diverter Operations 4 6.0 Well Control Equipment & Control System 4 6.1 Selection 4 6.2 Periodic Inspection and Maintenance 5 6.3 Surface Blow out Prevention Equipment 5 6.4 Subsea Blow out Prevention Equipment 7 6.5 Choke and Kill Lines 10 6.6 Wellhead, BOP Equipment and Choke & Kill Lines Installation 12 6.7 Blow out Preventer Testing 13 6.8 Minimum Requirements for Well Control Equipment 14 for Workover Operations (on land) 7.0 Procedures and Techniques for Well Control (Prevention 15 and Control of Kick) 7.1 Kick Indi cations 15 7.2 Prevention and Control of Kick 15 7.3 Kick Control Procedures 17 8.0 Drills and Training 21 8.1 Pit Dri ll (On bottom) 21 8.2 Trip Drill (Drill Pipe in BOP) 21 8.3 Trip Drill (Collar in Blowout Preventer) 22 8.4 Trip Drill (String is out of Hole) 22 8.5 Well Control Training 22 9.0 Monitoring System 22 9.1 Instrumentation Systems 22 9.2 Trip Tank System 22 9.3 Mud Gas Separator (MGS) 23 9.4 Degasser 23 10.0 Under Balanced Dri lling 23 10.1 Procedures for UBD 24 11.0 Well Control Equipment Arrangement for HTHP Wells 26 12.0 References 27 Abbreviations 28 Annexure I to VIII 1 Recommended Practices for Well Control 1.0 Introduction Primary well control is by maintaining hydrostatic pressure in the wellbore at least equal to (preferably more than) the formation pressure to prevent the flow of formation fluids. During drilling and workover operations flow of formation fluids into the wellbore is considered as kick. If not controlled, a kick may result in a blowout. For safety of personnel, equipment and environment, it is of utmost importance to safely prevent or handle kicks. This document provides guidance on selection, installation and testing of well control equipment. The recommended practices also include procedures for preventing kicks while drilling and tripping, safe closure of well on detection of kicks, procedures for well control drills, during drilling and workover operations. Recommendations for the surface installations are applicable to sub-sea installations also unless stated otherwise. All the sections / sub-sections of this document mentioning drilling are relevant to workover operations also, wherever applicable. Terms like drilling fluid means workover fluid in the context of workover operations. 2.0 Scope This document covers selection, installation and testing of well control equipment both surface and sub-sea, and recommended practices for kick prevention, and control and competence requirement (training and drills) for personnel, in drilling and workover operations. 3.0 Definitions 3.1 Accumulator (BOP Control Unit) A pressure vessel charged with nitrogen or other inert gas and used to store hydraulic fluid under pressure for operation of blowout preventers and/or diverter system. 3.2 Annular Preventer A device, which can seal around different sizes & shapes object in the wellbore or seal an open hole. 3.3 Blowout An uncontrolled flow of well fluids and/or formation fluids from the wellbore. 3.4 Blowout Preventer A device attached to the casinghead that allows the well to be sealed to confine the well fluids to the wellbore. 3.5 Blowout Preventer Stack The assembly of well control equipment including preventers, spools, valves, and nipples connected to the top of the casing head. 3.6 Bottomhole Pressure (BHP) Sum of all pressures that are being exerted at the bottom of the hole and can be written as: BHP = static pressure + dynamic pressures Static pressure in a wellbore is due to mud column hydrostatic pressure and surface pressure. Dynamic pressures are exerted due to mud movement or the pipe movement in the wellbore. BHP under various operating situations is: Not circulating (static condition) BHP = hydrostatic pressure due to mud column While drilling (over balance) BHP = Hydrostatic pressure of mud +annular pressure losses. While drilling (MPD/UBD) BHP = Hydrostatic pressure of mud + annular pressure losses +Surface annular pressure While shut-in after taking kick BHP = Hydrostatic pressure + surface pressure While killing a well BHP = Hydrostatic + surface press. +annular pressure losses Running pipe in the hole BHP = Hydrostatic pressure + surge pressure 2 Pulling pipe out of hole BHP = Hydrostatic pressure - swab pressure. 3.7 Choke manifold The assembly of valves, chokes, gauges, and piping to control flow from the annulus and regulate pressures in the drill string / annulus flow, when the BOPs are closed. 3.8 Degasser A vessel, which utilizes pressure reduction and/or inertia to separate entrained gases from the liquid phases. 3.9 Diverter A device attached to the wellhead or marine riser to close the vertical access and direct flow into a line away from the rig. 3.10 Fracture Pressure The pressure required to initiate a fracture in a sub surface formation (geologic strata). Fracture pressure can be determined by Geo-physical methods; during drilling fracture pressure can be determined by conducting a leak off test. 3.11 Hydrostatic Pressure Pressure exerted by the fluid column at the depth of interest is termed as hydrostatic pressure. The magnitude of hydrostatic pressure depends upon the density and the vertical height of liquid column. Hydrostatic pressure can be calculated by the following formula. Hyd. pressure (psi) = 0.052 x mud wt.(ppg) x TVD (feet) Hyd. pressure (kg/cm2) =Mud wt.( gm/cc) x TVD (mtrs)/10 where TVD =True vertical depth. 3.12 Influx The flow of fluids from the formation into the wellbore. 3.13 Kick A kick is intrusion of unwanted formation fluids into wellbore, when hydrostatic head of drilling fluid column is / becomes less than the formation pressure. Kick can lead to blowout, if timely corrective measures are not taken. 3.14 Kill Rate Reduced circulating rate (kill rate) is required when circulating kicks so that additional pressure to prevent formation flow can be added without exceeding pump liner rating. Kill rate is normally half of the normal circulating rate. For subsea stacks in deep water, kill rates less than half of the normal circulating rate may be required to avoid excessive back pressure in the choke flow line. 3.15 Kill Rate Pressure The circulating pressure measured at the drill pipe gauge when the mud pumps are operating at the kill rate. 3.16 Marine riser system The extension of the wellbore from the subsea BOP stack to the floating drilling vessel which provides for fluid returns to the drilling vessel, supports the choke, kill, and control lines, guides tools into the well, and serves as a running string for the BOP stack. 3.17 Maximum Allowable Annular Surface Pressure (MAASP) It is maximum allowable annular surface pressure during well control; any pressure above this may damage formation / casing / surface equipment. 3.18 Mud Gas Separator A device that removes gas from the drilling fluid returns, when a kick is being circulated out. Mud gas separator is also known as gas buster or poor-boy degasser. 3.19 Pipe-light Pipe-light occurs at the point where the formation pressure across the pipe cross-section creates an upward force sufficient to overcome the downward force created by the pipes weight- a potentially disastrous scenario. 3.20 Pore Pressure Pressure at which formation fluid is trapped in the pore (void) spaces of the rock is termed as formation pressure or pore pressure. It can be expressed in various ways like: In term of pressure - psi or kg/cm2 In term of pressure gradient - psi /ft or kg/cm2/meter. 3 In term of equivalent mud wt. - ppg or gm/cc. 3.21 Shall The word shall is used to indicate that the provision is mandatory. 3.22 Should The word should is used to indicate that the provision is recommendatory as per sound engineering practice. 3.23 Underbal anced Drilling (UBD) Drilling operation, when the hydrostatic head of a drilling fluid is intentionally (naturally or induced by adding natural gas, nitrogen, or air to the drilling fluid) kept lower than the pressure of the formation being drilled with the intention of bringing formation fluids to the surface. 4.0 Planning for well control 4.1 Cause of Ki ck Kick may be caused due to: i. Encountering higher than anticipated pore pressure. ii. Reduction in hydrostatic pressure in the wellbore. 4.2 Cause of Reduction in Hydrostatic Head I. Failure to keep the hole full of drilling fluid II. Swabbing, III. Loss of circulation IV. Insufficient drilling fluid density. V. Gas cut drilling fluid VI. Loss of riser drilling fluid column. 4.3 Well Pl anning I. Well planning should include conditions anticipated to be encountered during drilling / working over of the well, the well control equipment to be used, and the well control procedures to be followed. II. For effective well control the following elements of well planning should be considered: a. Casing design and kick tolerance b. Cementing c. Drilling fluid density d. Drilling fluid monitoring equipment e. Blowout prevention equipment selection f. Contingency plans with actions to be taken if the maximum allowable casing pressure is reached g. Hydrogen sulphide environment, if expected. III. During well planning shallow gas hazard should also be considered. Well plan should include mitigating measures considering the following: a. Pilot hole drilling, b. Use of diverter. c. Riserless drilling (with floater) 5.0 Diverter Equipment and Control System A diverter system is used during top-hole drilling; it allows routing of the flow away from the rig to protect persons and equipment. Components of diverter system include annular sealing device, vent outlet(s), vent line(s), valve(s), control system. Recommended practices for diverter system: I. The friction loss should not exceed the diverter system rated working pressure, place undue pressure on the wellbore and /or exceed other equipments design pressure, etc., e.g. marine riser. The diverter system should be accordingly designed. II. To minimise back pressure (as much as practical) on the wellbore while diverting well fluids, diverter piping should be adequately sized. 4 III. Vent lines should be 10 or above for offshore and 8 or above for onshore. IV. Diverter lines should be straight as far as possible, properly anchored and sloping down to avoid blockage of the lines with cuttings etc. V. The diverter and mud return (flow line) lines should be separate lines. VI. Diverter valves should be full opening type either pneumatic or hydraulic with automatic sequencing / manual sequencing. VII. The diverter control system may be self contained or an integral part of the blowout preventer control system. It should be located in safe area. VIII. The diverter control system should be capable of operating the diverter system from two or more locations - one to be located near the driller's console. IX. When a surface diverter system and a sub-sea BOP stack are used, two separate control / accumulator systems are required. This will allow the BOPs to be operated and the riser disconnected in case the diverter control system gets damaged. X. Size of the hydraulic control lines should be as per manufacturers recommendations. XI. Control systems of diverter should be capable of closing the diverter within maximum 45 seconds and simultaneously opening the valves in the diverter lines. XII. Telescopic/slip joints (in case of floating rigs) should be incorporated with double seals, to improve the sealing capability when gas has to be circulated out of the marine riser. XIII. Alternate means to operate diverter system (in case primary system fails) should be provided. 5.1 Procedures for Diverter Operations Following procedure is recommended for use of diverter: I. Stop drilling II. Pick up Kelly until tool joint is above rotary. III. Open vent line towards downward wind direction, close diverter packer and close shale shaker inlet valve. IV. Stop pump and check for flow through open vent line. V. If flow is positive, pump water or drilling fluid as required moderating the flow. VI. Monitor and adjust packer pressure as and when required. VII. Alert the personnel on the rig. VIII. Take all precautions to prevent fire by putting off all naked flames and unnecessary electrical systems. Additional l y following are applicabl e in case of subsea wells: I. Monitor and adjust slip joint packer pressure as and when required. II. Watch for gas bubbles in the vicinity of drilling vessel. 6.0 Well Control Equipment & Control System 6.1 Selection I. All the equipment including ram preventers, lines, valves and flow fittings shall be selected to withstand the maximum anticipated surface pressures. Annular preventer can have lower rating than ram BOP. II. Welded, flanged or hub end connections are only recommended on all pressure systems above 3000 psi. III. In sour gas areas H2S trim (refer NACE MR0175 / ISO 15156) equipment should be used. IV. Kill lines should be of minimum 2 nominal size and choke line should be of minimum 3 nominal size. 5 V. Size of choke line and choke manifold should be same. VI. Closing systems of surface BOPs should be capable of closing each ram preventer and annular preventer up to 18 size within 30 seconds and annular preventer above 18 size within 45 seconds. VII. Closing systems of sub-sea BOPs should be capable of closing each ram preventer within 45 seconds and annular preventer within 60 seconds. VIII. Ram type subsea preventers should be equipped with an integral or remotely operated locking system. Surface ram preventer should be equipped with mechanical / hydraulic ram locks. 6.2 Periodic Inspection and Maintenance I. The organisation should establish inspection and maintenance procedures for well control equipment. Inspections and maintenance procedures should take into consideration the OEMs recommendations. II. Inspection recommendations, where applicable, may include: a. Verification of instrument accuracy b. Relief valve settings c. Pressure control switch settings d. Nitrogen precharge pressure in accumulators e. Pump systems f. Fluid Levels g. Lubrication Points h. General condition of i) Piping systems ii) Hoses iii) Electrical conduit/cords iv) Mechanical components v) Structural components vi) Filters/strainers vii) Safety covers/devices viii) Control system adequacy ix) Battery condition III. Inspections between wells: after each well, the well control equipment should be cleaned, visually inspected, preventive maintenance performed before installation at the next well. The inspection should include the seal area of the connectors (Choke and kill lines) for any damage. IV. Major inspection: after every 5 years of service or as per OEMs recommendation. The BOP stack, choke manifold, and diverter assembly should be disassembled, and inspected in accordance with the OEMs guidelines. V. Spare parts requirement as per OEM should be considered. However, minimum spare parts as listed below should be readily available: i) A complete set of ram seals for each size and type of ram BOP in use. ii) A complete set of bonnet or door seals for each size and type of ram BOP in use. iii) Ring gaskets to fit end connections. iv) A spare annular BOP packing element and a complete set of seals. VI. During storage of BOP metal parts and related equipment, they should be coated with a protective coating to prevent rust. Storage of elastomer parts should be in accordance with manufacturers recommendations. VII. System should be in place to control use of rubber / elastomer parts, having limited shelf life. VIII. Separate maintenance history / log book of all the BOPs, Choke manifold and Control unit should be maintained. IX. All pressure gauges on the BOP control system should be calibrated at least every three years. 6.3 Surface Blow out Prevention Equipment Surface blow out prevention equipment is used on land operations and offshore operations where the wellhead is above the water level. I. Well control equipment can be classified under the following categories based on pressure rating: 6 a) 2000 psi WP b) 3000 psi WP c) 5000 psi WP d) 10000 psi WP e) 15000 psi WP, and f) 20,000 psi WP II. Refer Annexure-I for recommended 2000 psi BOP stack. One double, or two single ram type preventers - one of which be equipped with correct size pipe rams the other with blind or blind- shear rams. III. Refer Annexure-II for recommended 3000/5000 psi BOP stack. The stack comprises of, besides annular BOP, one double, or two single ram type preventers - one of which should be equipped with correct size pipe rams and the other with blind or blind-shear rams. IV. Refer Annexure-III for recommended 10000 / 15000 / 20000 psi BOP stack. The stack comprises of, besides annular BOP, three single, or one double and one single ram type preventers: one of which be should be equipped with blind or blind-shear rams and the other two with correct size pipe rams. V. When the bottom ram preventer is equipped with proper size side outlets, the kill and choke lines may be connected to the side outlets of the bottom preventer. In that case the drilling spool may be dispensed with. VI. Inspite of the above, a drilling spool use may be considered for the following two advantages: a. Stack outlets at drilling spool localizes possible erosion in less expensive drilling spool. b. It allows additional space between preventers to facilitate stripping, hang off, and / or shear operations. 6.3.1 Control System for Surface BOP Stacks (Onshore and Bottom-supported Offshore Instal lations) I. Control systems are typically simple closed hydraulic control systems consisting of a reservoir for storing hydraulic fluid, pump equipment for pressurizing the hydraulic fluid, accumulator banks for storing power fluid, manifolding, piping and control valves for transmission of control fluid for the BOP stack functions. II. A suitable control fluid should be selected as the system operating medium based on the control system operating requirements, environmental requirements and user preference. III. Two (primary and secondary) or more pump systems should be used having independent power sources. Electrical and / or air (pneumatic) supply for powering pumps should be available at all times such that the pumps will automatically start when the system pressure has decreased to approximately ninety percent of the system working pressure and automatically stop within plus zero or minus 100 psi of the system design working pressure. IV. With the accumulators isolated, the pump system should be capable of closing annular BOP on the drill string being used, open HCR valve on choke line and achieve the operating pressure level of annular BOP to effect a seal on the annular space within 2 minutes. V. Each pump system should be protected from over pressurisation by a minimum of two devices designed to limit the pump discharge pressure. One device should limit the pump discharge pressure so that it will not exceed the design working pressure of a BOP Control System. The second device normally a relief valve, should be sized to relieve at a flow rate of at least equal to the design flow rate of the pump systems, and should be set to relieve at not more than ten percent over the design pressure. VI. The combined output of all pumps should be capable of charging the entire accumulator system from precharge pressure to the maximum rated control system working pressure within 15 minutes. 7 VII. The hydraulic fluid reservoir should have a capacity equal to at least twice the useable hydraulic fluid capacity of the accumulator system. VIII. In the field, the precharge pressure should be checked and adjusted to within 100 psi of the recommended precharge pressure during installation of the control system and at the start of drilling each well (interval not to exceed sixty days). IX. The BOP control system should have a minimum stored hydraulic fluid volume, with pumps inoperative, to satisfy the greater of the following two requirements: a) Close from a full open position at zero wellbore pressure, all of the BOPs in the BOP stack, plus 50 % reserve. b) The pressure of the remaining stored accumulator volume after closing all of the BOPs should exceed the minimum calculated (using the BOP closing ratio) operating pressure required to close any ram BOP (excluding the shear rams) at the maximum rated wellbore pressure of the stack. X. All rigid or flexible lines between the control system and BOP stack should be fire resistant including end connections, and should have a working pressure equal to the design working pressure of the BOP control system. All control system interconnect piping, tubing hose, linkages etc. should be protected from damage from drilling operations, drilling equipment movement and day to day personnel operations. XI. The control unit should be installed in a location away from the drill floor and easily accessible to the persons during an emergency. XII. A minimum of one remote control panel accessible to the driller to operate all system functions during drilling operations should be installed at onshore rigs. In offshore, one control panel shall be available at a non hazardous area preferably tool pusher office for BOP stack functions, besides the one near the driller. XIII. Remote control panels should have light indicators to show open/close/block position of each BOPS and Hydraulically operated choke and kill valves. For onshore it is optional and for offshore unit it is must. XIV. For offshore units emergency backup BOP control system should be available. A backup system consists of a number of high pressure gaseous nitrogen bottles manifolded together to provide emergency auxiliary energy to the control manifold. The nitrogen backup system is connected to the control manifold through an isolation valve and a check valve. If the accumulator pump unit is not able to supply power fluid to the control manifold, the nitrogen back-up system may be activated to supply high pressure gas to the manifold to close the BOPs. 6.4 Subsea Blow out Prevention Equipment Subsea BOP stack arrangements should provide means to: I. Close in on the drill string and on the casing or liner and allow circulation. II. Close and seal on open hole and allow volumetric well control operations. III. Strip the drill string using the annular BOP(s). IV. Hang off the drill pipe on a ram BOP and control the wellbore. V. Shear logging cable or the drill pipe and seal the wellbore. VI. Disconnect the riser from the BOP stack. VII. Circulate the well after drill pipe disconnect. VIII. Circulate across the BOP stack to remove trapped gas. 8 6.4.1 Subsea BOP Stack Subsea blow out prevention equipment is used on subsea wellhead. I. Well control equipment can be classified in following categories based on pressure rating. a) 2000 psi WP b) 3000 psi WP c) 5000 psi WP d) 10000 psi WP e) 15000 psi WP and f) 20,000 psi WP II. Arrangements for subsea BOP stack at Annexure IV and V should be referred. III. Annular BOPs are designated as lower annular and upper annular. Annular BOP may have a lower rated working pressure than the ram BOPs. IV. Choke and kill lines are manifolded such that each can be used for either purpose. The identifying labels for the choke and kill lines are arbitrary. When a circulating line is connected to an outlet below the bottom ram BOP, this circulating line is generally designated as kill line. When kill line is connected below the lowermost BOP, it is preferable to have one choke line and one kill line connection above the bottom ram BOP. When this bottom connection does not exist, either or both of the two circulating lines may alternately be labeled as a choke line. V. Some differences as compared to surface BOP systems are: a. Choke and kill lines are normally connected to ram preventer body outlets to reduce stack height and weight, and to reduce the number of stack connections. b. Spools may be used to space preventers for shearing tubulars, hanging off drill pipe, or stripping operations. c. Blind-shear rams are used in place of blind rams. d. Ram preventers should be equipped with an integral or remotely operated locking system. 6.4.2 Control System for Subsea BOP Stack For subsea operations, BOP operating and control equipment should include: I. Floating drilling rigs experience vessel motion, which necessitates placement of the BOP stack on the sea floor. The control systems used on floating rigs are usually open-ended hydraulic systems (spent hydraulic fluid vents to sea) and therefore employ water-based hydraulic control fluids. II. An independent automatic accumulator unit for subsea BOP control system complete with an automatic mixing system to maintain mixed fluid ratios and levels of mixed hydraulic fluids. III. The accumulator capacity should be sufficient for closing, and opening all ram type preventers, annular preventers and fail-safe-close valves without recharging accumulator bottles, and the remaining pressure should be either 200 psi above recommended precharge pressure or value based on the closing ratio of ram preventer in use, whichever is more. IV. The unit should be equipped with two or more pump system driven by independent power source. Capacity of the pumps should meet following: a. With accumulator isolated, each pump system should be capable of closing annular preventer and opening fail-safe-close valve of choke within 2 minutes time. b. Combined output of all the pumps should be capable of charging accumulator to the rated pressure within 15 minutes. V. Accumulators should be installed on the BOP stack for quicker response of the functions, and its precharge pressure should be compensated for water gradient. 9 VI. Two full function remote control panels to operate BOP stack functions should be available, out of which one should be accessible to driller on the rig floor. A flow meter for indicating control fluid flow should be located on each remote control panel. VII. The remote panels should be connected to the control manifold in such a way that all functions can be operated independently from each panel. VIII. Two independent control pods with all necessary valves and regulators to operate all BOP stack functions should be available. Two separate and independent sets of surface and subsea umbilicals should be used, one dedicated to each control pod. Main hydraulic fluid line should be of minimum 1 size. IX. An emergency control system, either acoustic system or remotely operated vehicle (ROV) operated control system should be used in the event that the BOP functions are inoperative due to a failure of the primary control system. Emergency control system should charge and discharge stack mounted accumulator, close at least one ram type preventer, blind shear ram and open Lower Marine Riser Package (LMRP) hydraulic connector. X. The BOP control system should be capable of closing each ram BOPs and opening or closing fail-safe-close valves within 45 seconds. For annular preventer, closing time should not exceed 60 seconds. Time to unlatch the LMRP should be less than 45 seconds. XI. Precharge pressure of accumulator bottle in case of 3000 psi WP unit should be 1000 +/- 100 psi and in case 5000 psi WP unit should be 1500+/- 100 psi. Only Nitrogen should be used for precharge. XII. Separate diverter control panel should be available at rig floor to operate all diverter control functions. Second control panel should be provided in the safe and approachable area away from rig floor. XIII. If diverter control system is not self contained, hydraulic power may be supplied from BOP control system. XIV. The diverter control system should be designed to prohibit closing the diverter packer unless diverting lines have been opened. XV. Air storage backup system should be provided with capability to operate all the pneumatic functions at least twice in the event of loss of rig air pressure. XVI. The drilling BOP shall have two annular preventers. One or both of the annular preventers shall be part of the LMRP. It should be possible to bleed off gas trapped between the preventers in a controlled way. 6.4.3 Deep Water Dri lling Operations For Deep water drilling operations following additional requirements should be met: I. If two or more different size strings are run, blind-shear ram should be able to shear all sizes of string. II. Use of two blind-shear rams is preferred for ensuring the backup seal in case of unplanned disconnect. III. In addition to choke and kill lines, a dedicated boost line shall be provided for riser cleaning with necessary boost line valves above the BOP stack. IV. In the event of full or partial evacuation of mud from the riser, to combat riser collapse, an anti-collapse valve should be provided in the riser system allowing automatic entry of seawater. V. ROV should be able to perform following functions: i. LMRP and wellhead connector unlatch. ii. LMRP and wellhead ring gasket release. iii. Methanol / Glycol injection. iv. Opening and closing of pipe rams and blind-shear rams. v. LMRP and Accumulator Dump. 10 VI. The need to utilize a multiplex BOP control system to meet the closing time requirements should be evaluated for application, if required. VII. The kill-/choke line ID should be verified vis--vis acceptable pressure loss, to allow killing of the well at predefined kill rates. The kill-/choke line should not be less than 88.9 mm (3 inches). VIII. It should be possible to monitor the shut-in casing pressure through the kill line when circulating out an influx by means of the work string / test tubing / tubing. IX. It should be possible to monitor BOP pressure and temperature at surface, through appropriate means. X. It should be possible to flush wellhead connector with antifreeze liquid solution by using the BOP accumulator bottles or with a ROV system or other methods. XI. Detailed riser verification analysis should be performed with actual environment and well data (i.e. weather data, current profiles, rig characteristics etc.) and should be verified by a 3rd party. XII. A simulated riser disconnect test should be conducted considering manageable emergency weather / operational scenarios. XIII. The riser should have the following: current meter; riser inclination measurement devices along the riser; riser tensioning system with an anti-recoil system to prevent riser damage during disconnection; flex joint wear bushing to reduce excessive flex joint wear. riser fill-up valve. XIV. Parameters that affect the stress situation of the riser should be systematically and frequently collected and assessed to provide an optimum rig position that minimizes the effects of static and dynamic loads. XV. Wellhead and riser connector should be equipped with hydrate seal. XVI. During drilling operations, to avoid any damage to drilling equipment in the event of station keeping failure, there should be prescribed emergency disconnect procedures, clearly indicating the point at which disconnect action is to be started. XVII. In general, preparation for disconnect should begin at a distance with reference to well mouth, when it is 2.5 % of water depth and disconnect should be initiated at 5.5 % of water depth. XVIII. Emergency disconnect should include the following: i. Hang up of the drill pipes on pipe rams. ii. Shearing the drill pipe. iii. Effect seal on the wellbore. iv. Disconnect the LMRP. v. Clear the BOP with LMRP. vi. Safely capture the riser. XIX. For monitoring riser angles, flex joint angle reading should be available at the driller console on a real time basis and connected to an alarm on derrick floor. XX. In variance to 0.5 ppg kick margin normally considered, for deep water a variance of upto 0 .2 ppg for conductor casing interval and 0.3 ppg for surface casing interval can be considered. XXI. When using tapered drill pipe string there should be pipe rams to fit each pipe size. Variable bore rams should have sufficient hang off load capacity. XXII. Bending loads on the BOP flanges and connector shall be verified to withstand maximum bending loads (e.g. highest allowable riser angle and highest expected drilling fluid density.) 6.5 Choke and Kill Lines 6.5.1 Choke Lines and Choke Manifold Installation with Surface BOP I. The choke manifold consists of high pressure pipe, fittings, flanges, valves, and manual and/or hydraulic operated adjustable chokes. This manifold may bleed off wellbore pressure at a controlled rate or may stop fluid flow 11 from the wellbore completely, as required. II. For working pressure of 3000 psi and above, flanged, welded or clamped connections should be used on the component subjected to well pressure. III. Choke line from BOP to choke manifold and bleeding line should be of minimum 3 inches nominal diameter. IV. In down stream of choke line alternate flow and flare routes should be provided so that eroded / plugged or malfunctioning parts can be isolated for repair without interrupting flow control. V. When buffer tanks are employed in down stream of chokes, provision should be made to isolate a failure or malfunctioning without interrupting flow. VI. The choke manifold should be placed in a readily accessible location, preferably outside of the rig structure. VII. All the choke manifold valves should be full opening and designed to operate in high pressure gas and drilling fluid service. VIII. All the connections and valves in the upstream of choke should have a working pressure at least equal to the rated working pressure of ram preventer in use. IX. Choke manifold should be pressure tested as per the schedule as fixed for blowout preventer stack in use. X. The spare parts for equipment subject to wear or damage should be readily available. XI. Pressure gauges and sensors compatible to drilling fluid should be installed so that drill pipe and annular pressures may be accurately monitored and readily observed at the station where well control operations are to be conducted. These should be tested / calibrated as per documented schedule. XII. Preventive maintenance of the choke assembly and controls should be performed regularly, checking particularly for corrosion, wear and plugged or damaged lines. XIII. Spare parts requirement as per OEM should be considered. However, minimum spare parts as listed below should be readily available: i. One complete valve for each size installed. ii. Two repair kits for each valve size installed. iii. Parts for manually adjustable chokes, such as flow tips, seat and gate, inserts, packing, gaskets, O-rings, disc assemblies, and wear sleeves. iv. Parts for remotely controlled choke(s). v. Miscellaneous items such as hose, flexible tubing, electrical cable, pressure gauges, small control line valves, fittings and electrical components. XIV. The following are the recommendations for choke installation upto 5000 psi WP rating: i. Use two manually operated adjustable chokes (out of two chokes, use of one remotely operated choke is optional). ii. At least one valve should be installed in upstream of each choke in the manifold. XV. The following are the recommendations for choke installation of 10000 psi WP and above rating: i. One manually operated adjustable choke and at least one remotely operated choke should be installed. If prolonged use of this choke is anticipated, a second remotely operated choke should be used. ii. Two valves should be installed in upstream of each choke in the manifold. iii. The remotely operated choke should be equipped with an emergency backup system such as a manual pump or nitrogen 12 for use in the event rig air becomes unavailable. 6.5.2 Kill Lines and Kil l Manifold Installation with Surface BOP I. The kill line system provides a means of pumping into the wellbore when the normal method of circulating down through the Kelly or drill pipe cannot be employed. The kill line connects the drilling fluid pumps to a side outlet on the BOP stack. II. All lines valves, check valves and flow fittings should have a working pressure at least equal to the rated working pressure of the ram BOPs in use. The equipment should be tested on installation and periodic operation, inspection; testing and maintenance should be performed as per the schedule fixed for the BOP stack in use, unless OEMs recommendations dictate otherwise. III. Line size should be minimum 2 inches nominal diameter. IV. Two full bore valves (manual / HCR) should be installed for up to 3000 psi manifold. Use of check valve is optional. V. Two full bore manual valves and a check valve or one full bore manual and one HCR valve should be used in kill line in 5000 psi and above pressure rating manifold. VI. Spare parts requirement as per OEM should be considered. However, minimum spare parts as listed below should be readily available: i. One complete valve for each size installed. ii. Two repair kits for each valve size utilised. iii. Miscellaneous items such as hose, flexible tubing, electrical cable, pressure gauges etc. 6.5.3 Choke and Kill Lines Installation with Subsea BOP Stack I. Subsea BOP choke and kill lines are connected through choke manifold to permit pumping or flowing through either line. II. Choke and kill line should be of minimum three inches nominal diameter. III. One kill / choke line should be connected to lower most side outlet of BOP. IV. There should be minimum one choke line and one kill line connection above lower ram BOP. V. The ram BOP outlet connected to choke or kill line should have two full opening hydraulically operated fail-safe-close valves adjacent to preventer. VI. Connector pressure sealing elements should be inspected, changed as required, and tested before being placed in service. Periodic pressure testing is recommended during installation. Pressure rating of all lines and sealing elements should be at least equal to the rating of ram BOP. VII. Periodic flushing of choke and kill line should be carried out to avoid plugging since they are normally closed. VIII. Flexible connections required for choke and kill lines should have pressure rating at least equal to the rated working pressure of ram BOP. IX. Spare parts requirement as per OEM should be considered. However, minimum spare parts as listed below should be readily available: i. One complete valve of each size installed. ii. Two repair kits for each valve size in use. iii. Sealing elements for choke and kill lines. 6.6 Wellhead, BOP Equipment and Choke & Kill Lines Instal l ation I. Wellhead equipment should withstand anticipated surface pressures and allow for future remedial operations. Wellhead should be tested on installation. 13 II. Prior to drilling out the casing shoe, the casing should be pressure tested. Pressure test of all casing strings including production casing / liner should be done to ensure integrity of casing. III. When the well head and BOP stack used are of higher working pressure than the required as per design of the specific well, the equipment may not be tested to its rated pressure. IV. When ram type preventers are installed the side outlets should be below the rams. V. All connections, valves, fittings, piping etc. exposed to well pressure, should be flanged or clamped or welded and must have a minimum working pressure equal to the rated working pressure of the preventers. VI. Always install new and clean API ring gaskets. Check for any damage in the ring as well as grooves before use. VII. Correct size bolts/nuts and fittings should be used and tightened to the recommended torque. All connections should be pressure tested before drilling is resumed. VIII. All manually operated valves should be equipped with hand wheels, and always be kept ready for use. IX. Ram type preventers should have locking arrangement manual or auto lock. X. Wellhead side-outlets should not be used for killing purpose, except in case of emergencies. XI. Kill lines should not be used for routine fill up operations. XII. All sharp bends in high pressure lines should be of targeted type. XIII. All choke and kill lines should be as straight as practicable and firmly anchored to prevent excessive whip or vibration. Choke and Kill manifolds should also be anchored. XIV. All control valves of BOP control unit be either in the fully close or open position as required and should not be left in block or neutral position during operations. XV. Control valve of blind / blindshear ram should be protected to avoid unintentional operation from the remote panel. XVI. Recommended oil level should be maintained in the control unit reservoir. XVII. Outlets of all sections of well head should have at least one gate valve. 6.7 Blow out Preventer Testing 6.7.1 Function Test I. All operational components of the BOP equipment systems and diverter (if in use) should be function tested at least once a week to verify the components intended operations. II. The test should be preferably conducted when the drill string is inside casing. III. Both pneumatic and electric pump of accumulator unit should be turned off after recording initial accumulator pressure. IV. All the blow out preventers and hydraulically operated remote valve (HCR) in choke / kill line should be function tested. Closing time of rams and opening time of HCR should be recorded. V. For surface BOP stack closing time should not exceed 30 seconds for each ram preventers and annular preventers smaller than 18" and 45 seconds for annular preventer of 18" and larger size. For sub-sea BOP stack closing time should not exceed 45 seconds for all ram preventers and 60 seconds for annular preventers. VI. Operating response time for choke and kill valves (either open or close) should not exceed the minimum observed ram BOP close response time. 14 VII. Function test should be carried out alternately from main control unit / rig floor panel / auxiliary panel. VIII. Record final accumulator pressures after all the functions. It should not be less than 200 psi above the recommended precharge pressure of accumulator bottles. IX. All the gate valves and blow out preventers should be returned to their original position before resuming operations. X. All the results should be recorded in the prescribed format (Annexure-VII). 6.7.2 Pressure Test I. All blowout prevention components that may be exposed to well pressure should be tested first to a low pressure and then to a high pressure. These include blowout preventer stack, all choke manifold components, upstream of chokes, kill manifold / valves, kelly valves, drill pipe and tubing safety valves and drilling spools (if in use). Pressure test (both low and high) on each component should be of minimum 5 minutes duration, each. All the results should be recorded in the format. (Annexure - VIII) II. Test BOP using cup tester or test plug. III. Before pressure testing of BOP stack, choke and kill manifold should be flushed with clean water. IV. Clean water should be used as test fluid. However for high pressure gas wells, use of inert gas such as N2 (nitrogen) as test fluid is desirable. V. High pressure testing unit with pressure chart recorder be used for pressure testing. VI. Use test stump for sub-sea BOP stack pressure testing. VII. Well control equipment should be pressure tested: a. When installed. b. After setting each casing string. c. Following repairs that require breaking a pressure connection. d. But not less than once every 21 days. VIII. Low pressure test should be carried out at 200-300 psi. IX. Once the equipment passes the low pressure test, it should be tested to high pressure. X. Initial pressure test of blowout preventer stack, manifold, valves etc., should be carried out at the rated working pressure of the preventer stack or well- head whichever is lower. Initial pressure test is defined as those tests that should be performed on location before the well is spudded or before the equipment is put into operational service. XI. Subsequent high pressure tests should be carried out at a pressure greater than maximum anticipated surface pressure. Exception is the annular preventer which should be tested to 70% of its rated pressure or maximum anticipated surface pressure whichever is lower. XII. The pipe used for testing should be of sufficient weight and grade to safely withstand tensile, yield, collapse, or internal pressures. XIII. Precaution should be taken not to expose the casing to pressures in excess of its rated strength. A means should be provided to prevent pressure build up on the casing in the event the test tool leaks (wellhead valve should be kept open when pressure testing with test plug). XIV. Pressure should be applied from the direction in which all the BOPs, choke and kill manifold, FOSV / Kelly cock etc. would experience pressure during kick. 6.8 Minimum Requirements for Well Control Equipment for Workover operations (on land) For workover operations: I. BOP stack should have at least one double or two single ram type preventers - one of which must be 15 equipped with correct size pipe/tubing rams and the other with blind or blind-shear ram. Working pressure rating of BOP stack should exceed anticipated surface pressure. II. Kill line should be of minimum 2 inch size. III. One independent automatic accumulator unit with a control manifold, clearly showing open and closed positions, for preventer(s) to be provided. The accumulator capacity should be adequate for closing all the preventers without recharging accumulators. Unit should be located at safe easily accessible place. IV. The BOP stack should have remote control panel clearly showing open and closed positions for each preventer. This Control Panel should be located near to the drillers position. V. Trip tank should be installed on workover rig deployed for servicing of high pressure/ gas wells for continuous fill up and monitoring the hole during round trips. Indicator to monitor tank level can be either mechanical or digital and clearly visible to driller. VI. Full opening safety valve of drill string / tubing size and matching thread connection should always be available at derrick floor during well servicing. It should be kept ready in 'open' position for use with operating wrench. Operating wrench(s) should be kept at a designated place. VII. Sufficient volume of the workover fluid should be available in reserve during workover operations. VIII. During conventional production testing, well should be perforated with adequate overbalance. IX. After release of the packer the string should be reciprocated, to ensure complete retraction of packer elements, prior to pull out of string. It should be ensured that there is no swabbing action. 7.0 Procedures and Techniques for Well Control (Prevention and Control of Kick) 7.1 Kick Indi cations Indications of kick can be: I. Increase in drilling fluid return rate II. Pit gain or loss III. Changes in flowline temperature IV. Drilling breaks V. Pump pressure decease and pump stroke increase VI. Drilling fluid density reduction VII. Oil show VIII. Gas show 7.2 Prevention and Control of Kick In case of overbalance drilling: I. The planned drilling safety margin is difference between planned drilling fluid weight and estimated pore pressure. II. To maintain primary well control, drilling personnel should ensure that the hydrostatic pressure in the wellbore is always greater than the formation pressure by safety margin. III. The use of trip margin (which is in addition to safety margin) is encouraged to offset the effects of swabbing and equivalent circulating density (ECD). The additional hydrostatic pressure will permit some degree of swabbing without losing primary well control. IV. Successful well control (Blowout prevention programme) includes following elements: a. Training of personnel and drills. b. Monitoring and maintaining drilling fluid system. c. Selection of appropriate well control equipment. d. Installation, maintenance and testing of well control equipment. e. Adoption of established well control procedures. 16 7.2.1 Precautions before Trippi ng Out I. Conditioning of drilling fluid prior to tripping out should be ensured. This should include: a. No indication of influx of formation fluids. b. The drilling fluid density in and out should not differ more than 0.024 gm/cc (0.2 ppg.) in open hole. In cased hole there should not be any difference. II. A trip tank shall be lined up and function tested. Trip sheet shall be ready to be filled during tripping out (Annexure-VI). III. Full opening safety valve(s) with suitable working pressure and with proper connections and size, to fit all drill string connections, must be available on the rig floor. They should be kept ready in 'open' position for use with operating wrench. Operating wrench(s) should be kept at a designated place. IV. An inside BOP, drill pipe float valve or drop in check valve should be available for use whenever stripping is required to be done. V. As far as possible tripping out should be dry. If tripping out is wet, proper mud bucket should be used enabling mud to flow back to the return channel. 7.2.2 Precautions During Tripping Out I. Well should be checked for swabbing during pulling out. If positive, suitable corrective measures such as change in tripping speed, tripping out with pump on, change in drilling fluid properties etc should be taken. II. Trip tank volume should be monitored and same should be recorded in the trip sheet (Annexure -VI). III. If hole is not taking proper amount of mud (as per trip sheet), stop tripping and conduct flow check to ensure whether the well is self-flowing. If positive, shut the well, record the pressures and circulate out the kick by suitable well control method. If no self flow is observed, run back to the bottom and circulate and condition the drilling fluid. IV. Flow checks should be carried out: i. Prior to all trips out of the hole. ii. During first 10 stands. iii. At the casing shoes. iv. Prior to tripping out of drill collars through BOP stack. V. Any time a trip is interrupted, safety valve should be installed on the drill string. 7.2.3 Precautions During Tripping In I. Regular flow checks and monitoring of level in annulus should be done. Where situation requires trip tank may be used to monitor drilling fluid loss/gain. II. Circulation should be given to break gelation of mud as per requirements especially in deep wells and where heavy mud is used. III. With a float valve in the string, drill pipe should be filled up intermittently. 7.2.4 Precautions During Casing Lowering I. Regular flow checks and monitoring of level in annulus should be done and fill up schedule of casing pipe / liner should be followed as per the plan and use clean mud for casing/liner filling. II. Running in speed of casing/liner should be maintained considering allowable surge pressure. 7.2.5 Pre-kick Pl anning I. A plan detailing what actions are to be taken should a kick occur must be available. Plan should consider equipment limitations, casing setting depths, maximum fluid density, pressures that may be encountered, fracture gradients and expected hazards. 17 II. This should also include roles responsibilities of the personnel during kick. III. The following information should be pre-recorded for use in kill sheet preparation: casing data (properties), safe working pressure limit for surface blowout preventer equipment, wellhead, casing string, approved maximum allowable casing pressure (MAASP) and contingency plan, pump rate for killing operation (SCR), system pressure losses, capacities-displacement, mud pump data, drilling fluid mixing capability, trip margin, water depth (offshore), well profile and shut-in method to be used (soft / hard shut in). IV. Record slow circulating rates at 1/3 and 1/2 the pump speed of drilling SPM at: a. the beginning of every shift b. any time the mud weight is changed c. after drilling 500 feet/150 mtrs. of new hole d. after bit change e. after pump repairs f. after each trip due to change in BHA, bit nozzle. V. LOT / PIT after each casing should be known. Whenever LOT / PIT is to be carried out, 2-3 meters of fresh formation should be drilled. VI. Distance from rotary table to blowout preventer (s) be noted and sketch displayed in dog house and Toolpusher's office. VII. Based on the risk assessment of the well and depending upon the situation, well control method to be used should be selected. Plan and procedures for special situations such as casing pressure reaching maximum allowable annular surface pressure (MAASP) should be available at the installation (contingency plan). VIII. Shut in method to be used should also be pre-selected in the kill sheet. IX. Sufficient quantity of drilling fluid weighting materials and chemicals must be stored to meet any kick situation. 7.3 Kick Control Procedures Following are recommended well control procedures for surface stack and sub-sea. 7.3.1 Surface Stack For onshore and bottom-supported offshore installations: A. During Dri lling I. Stop drilling II. Pick up Kelly to position tool joint III. Stop mud pump. IV. Check for self-flow. V. If positive, proceed further to close the well by any one of the following procedures (Refer Table-1). v Soft shut in v Hard shut in TABLE - 1 Sl. No. Soft Shut in Hard Shut in 1. Open hydraulic control valve (HCR valve) / manual valve on choke line. Close Blow out Preventer. (Preferably Annular Preventer) 2. Close Blowout Preventer. Open HCR/Manual valve on choke line when choke is in fully closed position. 3. Gradually close adjustable /remotely operated choke, monitoring casing pressure. Allow pressure to stabilise and record SIDPP, SICP and Pit Gain. 4. Allow the pressure to stabilize and record SIDPP, SICP and Pit gain. ------------- 18 VI. Monitor the casing pressure. If the casing pressure is about to exceed MAASP, follow the contingency plan. VII. Calculate the drilling fluid density required to kill the kick. VIII. Initiate the approved / selected well kill method. IX. Check rig crew duties and stations. X. Review and update the well control worksheet. XI. Check pressures of all annuli of the well. B. During Tripping During tripping whenever flow is observed: I. Position tool joint above rotary table and set pipe on slips. II. Install Full Opening Safety Valve (FOSV) in open position on the drill pipe and close it. III. Close the well following any one of the procedures as per above table. (table - 1) IV. Monitor the casing pressure. If the casing pressure is about to exceed MAASP, follow the contingency plan. V. Calculate the drilling fluid density required to kill the kick. VI. Initiate the approved / selected well kill method. VII. Check rig crew duties and stations. VIII. Review and update the well control worksheet. IX. Check pressures of all annuli of the well. C. When String is out of Hole I. Close blind / blind-shear ram. II. Record shut in pressure. III. Monitor the casing pressure. If the casing pressure is about to exceed maximum allowed (MAASP), follow the contingency plan. IV. Calculate the drilling fluid density to kill the kick. V. Initiate the approved /selected well kill method. VI. Check rig crew duties and stations. VII. Review and update the well control worksheet. VIII. Check pressures on all annuli of the well. 7.3.2 Floating Installations (Sub Sea) A. During Dri lling I. Stop drilling II. Position the tool joint for the BOPs operation. III. Shut down the drilling fluid pump(s). IV. Check the well for flow if it is flowing, follow shut in procedure. V. If the soft shut-in procedure has been selected: open the choke line, close Annular BOP and close the choke. VI. If the hard shut-in procedure has been selected: close Annular BOP and open the choke line with the choke in closed position. VII. Observe the casing pressure, if it exceeds MAASP, follow the contingency plan. VIII. Check for trapped gas pressure. IX. For release of trapped gas, close the uppermost rams below the choke line and close the diverter, open the annular preventer to allow trapped gas 19 to rise, displace riser with kill fluid and close the annular preventer, reopen the ram preventer. X. Adjust the closing pressure on the annular preventer to allow stripping of tool joints. XI. Hang off the drill pipe as follows: a. With a motion compensator: i. Position a tool joint above the hang-off rams leaving the lower Kelly cock. Set the slips on the top joint of drill pipe. ii. Close the lower Kelly cock. iii. Break the Kelly/top drive connection above the lower Kelly cock and put it in the rat hole. iv. Pick up the assembled space-out joint, safety valve, and circulating head with the safety valve closed. Make up the space-out joint on the closed lower Kelly cock. v. Open the lower Kelly cock, remove the slips, and position tool joint above the hang-off rams leaving the safety valve high enough above the floor to be accessible during the maximum expected heave and tide when the selected joint rests on the hang-off rams. vi. Close the hang-off rams. vii. Carefully lower the drill string until the tool joint lands on the closed hang-off rams. Slack off the entire weight of drill string while holding tension on the circulating head with a tension device. viii. Connect the circulating head to the standpipe, open the safety valve. XII. Allow the shut-in pressure to stabilise and record pressures. XIII. Determine the volume of the kick. XIV. Calculate the drilling fluid density required to kill the kick. XV. Select a kill method. XVI. Check rig crew duties and stations. XVII. Review and update well control worksheet. XVIII. Inspect the BOP stack with television, if feasible. B. During Tripping I. Install safety valve. II. Position the tool joint for the BOPs operation. III. Check the well for flow if it is flowing, follow shut in procedure. IV. If the soft shut-in procedure has been selected: open the choke line, close Annular BOP and close the choke. V. If the hard shut-in procedure has been selected: close Annular BOP and open the choke line with the choke in closed position. VI. Observe the casing pressure, if it exceeds MAASP, follow the contingency plan. VII. Check for trapped gas pressure. 20 VIII. For release of trapped gas, close the uppermost rams below the choke line and close the diverter, open the annular preventer to allow trapped gas to rise, displace riser with kill fluid and close the annular preventer, reopen the ram preventer. IX. Adjust the closing pressure on the annular preventer to allow stripping of tool joints. X. Hang off the drill pipe as follows: a. With a motion compensator: i. Position a tool joint above the hang-off rams leaving the safety valve. Pick up the assembled space-out joint, safety valve, and circulating head with the safety valve closed. Make up the space-out joint on the string. ii. Open the safety valve, remove the slips, and position tool joint above the hang-off rams leaving the safety valve high enough above the floor to be accessible during the maximum expected heave and tide when the selected joint rests on the hang-off rams. iii. Close the hang-off rams. iv. Carefully lower the drill string until the tool joint lands on the closed hang-off rams. Slack off the entire weight of drill string while holding tension on the circulating head with a tension device. v. Connect the circulating head to the standpipe, open the safety valve. XI. Allow the shut-in pressure to stablise and record pressures. XII. Determine the volume of the kick. XIII. Calculate the drilling fluid density required to kill the kick. XIV. Select a kill method. XV. Check rig crew duties and stations. XVI. Review and update well control worksheet. XVII. Inspect the BOP stack with television, if feasible. C. When String is Out of Hole I. At the first indication of the well flowing, close the blind / blind-shear rams. II. Open the gate valve on the subsea BOP stack to open the choke line, close the choke line at the surface. III. Record shut-in pressures. Wt. (specific gravity) of fluid in the choke line should be considered for calculating shut-in casing pressure. IV. Record the kick volume. V. Run the drill string in the hole to the top of the BOPs with NRV. VI. Add the hydrostatic pressure of the fluid in the choke line to the surface pressure to determine the pressure below the blind rams. VII. Determine if the pressure below the blind rams can be overbalanced by hydrostatic pressure of the drilling fluid that can be safely contained by the riser. If so, adjust the riser tensioners to support the additional drilling fluid weight and displace the drilling fluid in 21 the riser with drilling fluid of the required density. VIII. Close the diverter. Open the BOPs and watch for flow. If the well does not flow, open the diverter and trip in the hole. IX. If the well starts to flow, close the blind ram preventer, displace the choke and kill lines with heavy drilling fluid, and circulate until the riser contains drilling fluid of the desired density. X. Continue going in the hole. Stop periodically, close the pipe rams, and circulate the riser by pumping down the kill line to maintain the required drilling fluid density in the riser. After well killing and before resuming normal operations, density of drilling fluid should be reviewed to include trip margin above kill mud weight. 8.0 Drills and Training I. The competence with which drilling personnel respond to well control situations and follow correct procedures can be improved by carrying out emergency drills. II. While drilling in H2S / sour gas prone area, detectors shall be installed and breathing apparatus in sufficient quantity and cascade system shall be made available. Crew shall be trained to handle situations in this environment. III. Organisation should assign specific responsibilities to the identified / designated persons, for actions required during an emergency related to well control, which would be part of rig ERP. a. Following drills should be performed: i. Pit drill ii. Trip drill b. To conduct drill, a kick should be simulated by manipulating primary kick indicator such as the pit level indicator or the flow line indicator by raising its float gradually and checking for the alarm. c. The reaction time from float raising to the designated crew member's readiness to start the closing procedure should be recorded and response time should not be more than 60 seconds. d. Total time taken to complete the drill should be recorded and it should not be more than 2 minutes. e. Drill should be initiated without prior warning during routine operation. f. Drill should be conducted once a week with each crew. g. Drill should be initiated at unscheduled times when operations and hole condition permits. 8.1 Pit Dri ll (On bottom) I. Raise alarm by raising mud tank float -automatic or oral. II. Stop drilling / other operation in progress. III. Position tool joint for BOPs ram closing. IV. Stop mud pump V. Secure brake VI. During / after the above steps, as applicable, designated crew should move to assigned positions VII. Check for self flow VIII. Record the response time. Trip Drill (Drill Pipe in BOP) I. Give signal by raising alarm. II. Position tool joint above rotary and set the pipe on slips III. Install full opening safety valve in open position. 22 IV. Close FOSV after installation V. Designated crew members should move to assigned position, during / after the above steps, as applicable. VI. Close BOP VII. Record response time. Note: Trip drill should be carried out preferably when bit is inside the casing. A full opening safety valve for each size and type of connection in the string shall be available on derrick floor, in open position. Safety valves may be clearly marked for size and connection. Trip Drill (Collar in Blowout Preventer) I. Give signal by raising alarm. II. Position upper drill collar box at rotary table and set it on slips. III. Connect a drill pipe joint or stand of drill pipe on drill collar tool joint with change over sub and position drill pipe in BOP. IV. Install FOSV in open position. V. Close FOSV. VI. Close BOP. VII. Record response time. Note: Under actual kick conditions (other than drills) if only one stand of drill collar remains in the hole it would be probably faster to simply pull the last stand and close the blind ram. If numbers of drill collar stands are more and well condition does not permit step III than install FOSV with change over sub on drill collar, close it and close annular preventer. Preparation for step III above should be done in advance prior to starting pull out of drill collar - make one single / stand of drill pipe with drill collar change over sub. Trip Drill (String is out of Hole ) I. Give signal by raising alarm. II. Close blind/ blind-shear ram. III. Record response time. Well Control Training Asstt. Shift Incharge / Asstt. Driller and above supervisory personnel should have valid accredited well control certificate (of the appropriate level). At least one trained person should always be present on derrick floor to observe well for any activity even during shutdown period 9.0 Monitoring System 9.1 Instrumentation Systems I. Driller's console should have gauges and meters including drillo-meter, SPM meters, pump pressure gauge, rotary torque. The Record-o-graph should record parameters like weight, SPM, pump pressure, rotary torque, rate of penetration. Drillers console should be positioned in such a way that driller can see all the gauges without any obstruction. II. Flow rate sensor should be installed for monitoring return mud flow with high / low alarms. III. Mud / pit volume totaliser should be installed for all the reserve and active mud tanks to detect mud tank level's deviation with an accuracy of + one barrel. Mud volume totaliser should have high / low alarm (visual or audible setting). IV. Gas detector should always be available. Gas measurement should be carried out near the point where the mud from the well mouth surfaces (Shale shaker and rig sub structure). 9.2 Trip Tank System I. On a drilling rig, the trip tank shall always be in operation during tripping operation, particularly during pulling out operation, for early detection of a kick. 23 II. The primary purpose of the trip tank is to measure the amount of drilling fluid required to fill the hole while pulling pipe to determine if the drilling fluid volume matches pipe displacement. III. A trip tank is a low-volume calibrated tank which can be isolated from the remainder of the surface drilling fluid system and used to accurately monitor the amount of fluid going into or coming out from the well. A trip tank should be calibrated accurately and should have means for reading the volume contained in the tank at any liquid level. The readout may be direct or remote, preferably both. The size of the tank and readout arrangement should be such that volume changes in the order of can be easily detected. 9.3 Mud Gas Separator (MGS) Atmospheric Mud Gas Separator should be installed. Liquid seal should be maintained to prevent gas blow through shale shaker. Vent line should be away from derrick floor. The rig maintenance and inspection schedule should provide for periodic non-destructive examination of the mud gas separator to verify pressure integrity. This examination may be performed by hydrostatic, ultrasonic, or other examination methods. 9.4 Degasser A degasser should be used to remove entrained gas bubbles in the drilling fluid that are too small to be removed by the mud gas separator. Most degassers use some degree of vacuum to assist in removing the entrained gas. All flare lines should be as long as practical with provision of flaring during varying wind directions. Flare lines should be straight as far as possible and should be securely anchored. Degasser should be function tested at least once a week 10.0 Under Balanced Dri lling Primary well control during Under Balanced Drilling (UBD) is maintained by flow and pressure control. The bottomhole pressure and the reservoir influx is monitored and controlled by means of a closed loop surface system. This system includes rotating control device (RCD), flowline, emergency shutdown valve (ESDV), choke manifold and surface separation system. The following are the recommended equipment for UBD operations: I. The RCD shall be installed above the drilling BOP and shall be capable of sealing the maximum expected wellhead circulating pressure against the rotating work string and containing the maximum expected shut-in wellhead pressure against a stationary work string. The RCD is a drill through device with a rotating seal that is designed to contact and seal against the work string (drill string, casing, completion string, etc.) for the purpose of controlling the pressure and fluid flow to surface. Its function is to contain fluids in the wellbore and divert flow from the wellbore to the surface fluids handling equipment during underbalanced operations (drilling, tripping and running completion equipment). II. The return flowline shall have two valves, one of which shall be remotely operated and fail-safe-close (ESDV). The flowline and the valves shall have a working pressure equal to or greater than the anticipated shut-in wellhead pressure. At least one valve should be installed in the diverter/flow line immediately adjacent to the BOP stack. III. A dedicated UBD choke manifold shall be used to control the flow rate and wellbore pressure, and reduce the pressure at surface to acceptable levels before entering the separation equipment. The choke manifold shall have a working pressure equal to or greater than the anticipated shut-in wellhead pressure. The choke manifold should have two chokes and isolation valves for each choke and 24 flow path. Applied surface backpressure should be kept to a minimum to reduce erosion of chokes and other surface equipment. IV. A surface separation system shall be selected and dimensioned to handle the anticipated fluid/solids in the return flow. Plugging, erosion or wash-outs of surface equipment should not impact the ability to maintain primary well control. V. The drill pipe and casing should be designed for exposure to hydrogen sulphide (H2S) gas. VI. The BOP stack, flow / diverter line, and bleed off and kill lines should be designed for exposure to H2S in accordance with NACE (MR 075) / ISO 15156 specifications. VII. Blind-shear rams should be considered for underbalanced drilling of wells with high Hydrogen Sulphide (H2S) potential. VIII. A stab-in safety valve for the string in use should be available on the rig floor. 10.1 Procedures for UBD I. Procedures for UBD operations should be developed based on risk analysis and risk assessments. These procedures should include: i. Kicking of the well ii. Making connections iii. Live well tripping iv. Trapped pressure in equipment. II. When running a work string under balanced, two NRVs, shall be installed in the string, as deep in the work string as practical and as close together as possible. The NRVs should prevent wellbore fluids from entering into the work string. Installation of additional NRVs should be considered depending on the nature of the operation (ie high-pressure gas). The NRV should have a minimum working pressure rating equal to the maximum expected BHP. III. Snubbing facilities should be used or the well should be killed with a kill weight fluid prior to tripping pipe, if the shut-in or flowing wellhead pressure can produce a pipe light condition and a downhole isolation valve (DIV), a retrievable packer system or similar shut-in device, is not in use or is not functioning as designed. The DIV is a full-opening drill through valve, installed down-hole as an integral part of a casing / liner string, at a depth either below the maximum pipe light depth for the work string being tripped in the underbalanced operation (drill string, casing, completion string, etc.) or at a depth that allows the maximum length of BHA, slotted liner or sand screen required to be safely deployed, without having to snubin or kill the well prior to deployment. DIV should have working pressure rating of more than maximum expected differential pressure after closure. IV. Sufficient kill fluid of required density should be available on site at any time to enable killing of the well in an emergency. V. While still in the design stage, a meeting including all key personnel should be held to discuss the proposed operation so that everyone clearly understands their responsibilities with respect to safety. A key element in planning a safe operation is the site layout. The following considerations should be made when designing the well-site layout: i. Prevailing winds ii. Access to fluids handling equipment iii. Equipment placement iv. High pressure line placement VI. At no time the well should be left open to the rig floor when the well is live. VII. Trapped gas below the float should be removed safely before removing the float from the drill string during pulling out. VIII. If a well is killed prior to tripping, traditional tripping procedures including the completion of trip sheets should be followed. IX. Round the clock supervision by competent persons should be ensured. All personnel involved in operations 25 should be trained in UBD operations, and training should be documented. X. The Well Site Supervisor should have valid accredited well control certificate for underbalanced drilling and well intervention operations. XI. Appropriate PPE should be used by all personnel on the site. XII. A site-specific emergency contingency plan should be prepared to a level of complexity that the operation warrants, prior to any underbalanced drilling taking place. XIII. The following table describes incident scenarios for which well control action procedures should be available (as applicable) to deal with the incidents should they occur (This list is not exhaustive; additional scenarios may be applied based on the actual planned activity): i. Bottomhole or surface pressure and / or flow rates detected which could lead to the pressure rating of the rotating control device (static or dynamic) or the capacity of the surface separation equipment being exceeded. ii. NRV failure, influx into work string during making connection or tripping in live well. iii. Leaking connection below drilling BOP. iv. Leaking rotating control device or flowline before ESDV, seal elements, connection to flowline, drilling BOP or high pressure riser. v. Erosion or wash out of choke. Consider the case where isolation for repair of the choke cannot be achieved. vi. Failure of surface equipment after RCD. This can be leaks or plugged equipment and lines. vii. Work string failure. viii. Emergency shut-in. ix. Emergency well kill. x. Lost circulation. xi. H2S in the well. XIV. When hydrocarbons are being produced or when they are used in the drilling fluid, supplementary fire fighting equipment should be considered. This may require as little as additional hand-held fire extinguishers to as much as having a fire fighting vehicle on-site. XV. Regardless of the concentration of H2S, no sour gas may be released to atmosphere at any time. XVI. Produced fluids containing H2S or drilling fluids contaminated with H2S should not be stored in open tanks. XVII. The flare stack shall be as per regulatory requirements. XVIII. If H2S is expected to be encountered in the well, a monitoring program shall be in place. As a minimum, monitoring stations should include the rig floor, inside the rig substructure adjacent to the BOPs, and near separation vessels and storage or circulating tanks. XIX. A pressurized tank or a tank truck equipped with a functional H2S scrubber should be used for the transportation of sour fluids off location. XX. Adequate provision should be made for the safe storage and / or disposal of produced fluids and drill cuttings. Reservoir liquids should not be stored in an earthen pit. Refer to MOEF guidelines for handling of drilling fluids and drill cuttings (OISD-RP-201) XXI. Explosive potential monitoring should be conducted at all the points where there is a potential of release of combustible vapours to atmosphere. XXII. For wells which contain H2S, drill cuttings should be held in tanks equipped with vapour control. Vapour shall either be vented to a flare stack or through an H2S scrubbing system. Another related technique of UBD is Managed Pressure Drilling (MPD). While UBD is mostly 26 focused on maximizing the performance of the reservoir, MPD is more focused on successfully drilling the well, while minimizing the time and money spent on non-productive time (NPT), in addition to not damaging the formation in the process. While MPD utilizes some of the same surface equipment used in UBD; MPD, particularly in offshore environments, is not intended to produce hydrocarbons while drilling but rather to more precisely manage wellbore pressure and annulus returns while drilling through sections with very narrow margins between reservoir pore pressure and fracture pressure gradients. Any influx incidental to the operation is safely contained using an appropriate process. 11.0 Well Control Equipment Arrangement for HTHP Wells I. The installation should be equipped with: a. A fail-safe-open, remotely operated valve in the overboard line. b. A cement line pressure gauge in the choke panel, a remote camera in the shaker house, with display in the driller's house. c. A choke / kill line glycol injection system. d. High pressure and / or high temperature resistant seals should be installed in choke and kill lines, including flexible line hoses and the choke and kill manifold, packing in the kelly cock / internal BOP, packing / seal in the marine riser. II. Flexible kill-/choke line hoses should be inspected and pressure tested to maximum well design pressure prior to HPHT mode. III. Specification and qualification criteria for equipment and fluids to be used or installed in a HPHT well should be established, with particular emphasis on deterioration of elastomer seals and components as function of temperature / pressure, exposure time and wellbore fluids. 27 12.0 References 1. Alberta Energy and Utilities Board- ID 94-3: Under Balanced Drilling 2. API-RP 16E: Recommended Practices for Design of Control Systems for Drilling Well Control Equipment 3. API-RP 53: Recommended Practices for Blowout Prevention Equipment Systems for Drilling Wells 4. API-RP 59: Recommended Practices for Well Control Operations 5. API-RP 64: Recommended Practices for Diverter Systems Equipment and Operations 6. API-SPEC 16C: Specifications for Choke and Kill Systems 7. API-SPEC 16D: Specifications for Control Systems for Drilling Well Control Equipment 8. API-SPEC 16R: Specification for Marine Drilling Riser Couplings 9. HSE-OTH 512: HPHT Wells: Perspective on Drilling and Completion from the Field 10. IADC: IADC Deepwater Well Control Guidelines 11. IADC: UBD and MPD Operations - HSE Planning Guidelines 28 Abbreviations BOP Blowout preventer LOT Leak off test LMRP Lower marine riser package NRV Non return valve OEM Original equipment manufacturer PIT Pressure integrity test SCR Slow circulating rate UBD Underbalanced drilling WP Rated working pressure 29 Annexure-I Arrangements for 2000 psi Surface BOP Stack 30 Annexure-II Arrangements for 3000 psi and 5000 psi Surface BOP Stack 31 32 33 34 Annexure-VI TRIP SHEET Rig----------------------------------------------- Location / Well ----------------------------------- Date and time--------------------------------- Depth------------------- Sp. Gr. of DF------------- DP Size & Displacement-------------------------DC Size & Displacement------------------- Name of shift incharge / Driller -------------------------------------------------------------------------- Reason for trip---------------------------------------------------------------------------------------- Drill pipe stand Theoretical Volume Displacement / Volume filled in Discrepancy Remarks No. of stand Per _Stand Total Per _Stand Total 1 2 3 4 5 6 7 Note Column 5 - Actual vol. of drilling fluid taken by hole (diff. of trip tank reading ) Column 6 Diff. of column 3 and 5 35 Annexure-VII BOP FUNCTION TEST REPORT AND ACCUMULATOR DRILL RIG : DATE : WELL : BOP STACK DETAIL (as applicable): 1. Annular BOP - 2. Single / Double / Triple ram type BOP - 3. Upper pipe ram size - 4. Lower pipe ram size - S.NO. DESCRIPTION FUNCTION CLOSED/ OPEN TIME Seconds ACCUM-ULATOR INITIAL PRESSURE psi ACCUM-ULATOR FINAL PRESSURE Psi REMARKS 01. Annular preventer 02. Lower pipe ram 03. Upper pipe ram 04. Blind/shear ram 05. Hyd. Valve on choke line 06. Hyd. Valve on kill line 01. Conduct BOP function test/accumulator drill once in a week 02. a) Record initial accumulator pressure b) Turn off both electric and pneumatic pumps c) Close annular and pipe rams one by one and record time to close each preventer d) Open the hydraulic Valve on choke line and kill line e) Open pipe ram to compensate for blind ram close f) Record the final accumulator pressure after each operations g) Turn on electrical/ pneumatic pump and open all the preventer. Record the opening time. 03. Carry out function test alternatively from rig floor panel / auxiliary panel / main control unit. 04. Final accumulator pressure should be not less than 1200 psi or 200 psi above precharge pressure of accumulator bottles. Special attention needed to address the following: SIGNATURE : SIGNATURE : NAME : NAME : SHIFT INCHARGE/ : DIC/TOOL PUSHER : DRILLER Bien plus que des documents. Découvrez tout ce que Scribd a à offrir, dont les livres et les livres audio des principaux éditeurs.Annulez à tout moment.
https://fr.scribd.com/document/226361217/RP-174
CC-MAIN-2019-51
en
refinedweb
public class TreeColumnLayout extends AbstractColumnLayout Layoutused to maintain TreeColumnsizes in a Tree. You can only add the Layout to a container whose only child is the Tree control you want the Layout applied to. Don't assign the layout directly the Tree LAYOUT_DATA computeSize, getColumnTrim, setColumnData flushCache clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait public TreeColumnLayout() AbstractColumnLayout composite- a composite widget using this layout flushCache- truemeans flush cached layout values protected int getColumnCount(Scrollable tree) getColumnCountin class AbstractColumnLayout tree- the control protected void setColumnWidths(Scrollable tree, int[] widths) setColumnWidthsin class AbstractColumnLayout tree- the control widths- the widths of the column protected ColumnLayoutData getLayoutData(Scrollable tableTree, int columnIndex) getLayoutDatain class AbstractColumnLayout tableTree- the control columnIndex- the column index protected void updateColumnData(Widget column) updateColumnDatain class AbstractColumnLayout column- the column Copyright (c) 2000, 2015 Eclipse Contributors and others. All rights reserved.Guidelines for using Eclipse APIs.
https://help.eclipse.org/mars/topic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/jface/layout/TreeColumnLayout.html
CC-MAIN-2019-51
en
refinedweb
UI isn't shown when worker thread runs. I have one worker thread which parses XML and passing QString parsed data via signal to QLabel slot setText(); I create thread and xmlparse class instances that runs in my MainWindow constructor like this: MainWindow:MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow) { ui->setupUi(this); startParsing(); } What can cause situation that my UI with label does not appear when application is launched? Hi, Are you sure that startParsing doesn't block the main thread ? Here is my mainwindow.cpp: #include "mainwindow.h" #include "ui_mainwindow.h" #include <xmlparser.h> #include <QThread> MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow) { ui->setupUi(this); startParsing(); } MainWindow::~MainWindow() { delete ui; } void MainWindow::startParsing() { QThread *xml = new QThread; xmlparser *first = new xmlparser(); first->moveToThread(xml); xml->start(); connect(first, SIGNAL(xmlParsed(QString)), ui->label, SLOT(setText(QString))); } Ok then, are you sure that your label is not just empty rather than not visible ? Note that you have two memory leaks here. - JKSH Moderators last edited by What can cause situation that my UI with label does not appear when application is launched? Did you call MainWindow::show()? Did you check that xmlparser correctly emits xmlParsed and that the parameter is not empty ? I solved my problem with not calling parseFunction in its own class constructor but with signals and slots in mainwindow.cpp file: connect(xml, SIGNAL(started()), first, SLOT(parseFunction()));
https://forum.qt.io/topic/62825/ui-isn-t-shown-when-worker-thread-runs
CC-MAIN-2019-51
en
refinedweb
File download and upload support for Django REST framework Project description Overview REST framework files allows you to download a file in the format used to render the response and also allows creation of model instances by uploading a file containing the model fields. Requirements - Python (2.7, 3.5, 3.6) - Django REST framework (3.4, 3.5, 3.6, 3.7, 3.8) Installation Install using pip: pip install djangorestframework-files Example models.py from django.db import models class ABC(models.Model): name = models.CharField(max_length=255) serializers.py from rest_framework import serializers from .models import ABC class ABCSerializer(serializers.ModelSerializer): class Meta: model = ABC fields = '__all__' views.py from rest_framework.parsers import JSONParser, MultiPartParser from rest_framework_files.viewsets import ImportExportModelViewSet from .models import ABC from .serializers import ABCSerializer class ABCViewSet(ImportExportModelViewSet): queryset = ABC.objects.all() serializer_class = ABCSerializer # if filename is not provided, the view name will be used as the filename filename = 'ABC' # renderer classes used to render your content. will determine the file type of the download renderer_classes = (JSONParser, ) parser_classes = (MultiPartParser, ) # parser classes used to parse the content of the uploaded file file_content_parser_classes = (JSONParser, ) Some third party packages that offer media type support: urls.py from rest_framework import routers from .views import ABCViewSet router = routers.ImportExportRouter() router.register(r'abc', ABCViewSet) urlpatterns = router.urls Downloading To download a json file you can go to the url /abc/?format=json. The format query parameter specifies the media type you want your response represented in. To download an xml file, your url would be /abc/?format=xml. For this to work, make sure you have the respective renderers to render your response. Uploading To create model instances from a file, upload a file to the url /abc/. Make sure the content of the file can be parsed by the parsers specified in the file_content_parser_classes or else it will return a HTTP_415_UNSUPPORTED_MEDIA_TYPE error. For sample file examples you can upload, check the assets folder For more examples on how to use the viewsets or generic views, check the test application Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/djangorestframework-files/
CC-MAIN-2019-51
en
refinedweb
leau2001 wrote: >> I made some figure in a loop and i want to close after the >> figure show. >> > Not absolutely sure what you mean, but to produce some > plots and save them in a loop I do > f = figure() for i in range(..): plot(...) savefig(...) > f.clf() # clear figure for re-use close(f) Often times what people are looking for is they want to the figure to pop up on the screen, look at it, have it close, and move on. One way to achieve this is to run mpl in interactive mode and then insert a time.sleep or call input("Press any key for next figure: ") If this is what you are doing, threading becomes important. This is discussed on the web page linked above, and your best bet is to either use the tkagg backend or better yet, use ipython in -pylab mode. Something like import sys from pylab import figure, close, show, nx, ion ion() while 1: fig = figure() ax = fig.add_subplot(111) x, y = nx.mlab.rand(2,30) ax.plot(x,y,'o') fig.canvas.draw() k = raw_input("press any key to continue, q to quit: ") if k.lower().startswith('q'): sys.exit() show()
https://discourse.matplotlib.org/t/how-to-close-a-figure/5307
CC-MAIN-2019-51
en
refinedweb
The comment describes why in detail. This was found because QEMU nevergives up load reservations, the issue is unlikely to manifest on realhardware.Thanks to Carlos Eduardo for finding the bug!Signed-off-by: Palmer Dabbelt <palmer@sifive.com>--- arch/riscv/kernel/entry.S | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+)diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.Sindex 1c1ecc238cfa..e9fc3480e6b4 100644--- a/arch/riscv/kernel/entry.S+++ b/arch/riscv/kernel/entry.S@@ -330,6 +330,24 @@ ENTRY(__switch_to) add a3, a0, a4 add a4, a1, a4)+# error "The offset between ra and ra is non-zero"+#endif+#if (__riscv_xlen == 64)+ sc.d x0, ra, 0(a3)+#else+ sc.w x0, ra, 0(a3)+#endif REG_S sp, TASK_THREAD_SP_RA(a3) REG_S s0, TASK_THREAD_S0_RA(a3) REG_S s1, TASK_THREAD_S1_RA(a3)-- 2.21.0
https://lkml.org/lkml/2019/6/5/979
CC-MAIN-2019-51
en
refinedweb
It's not the same without you Join the community to find out what other Atlassian users are discussing, debating and creating. my jira version is jira 5.2.11 Function info: if the reporter in "PR" user group,set custom field required. See below,the code doesn't work import com.atlassian.crowd.embedded.api.User import com.atlassian.jira.ComponentManager import com.atlassian.jira.component.ComponentAccessor import com.atlassian.jira.issue.CustomFieldManager import com.atlassian.jira.issue.MutableIssue ComponentManager componentManager = ComponentManager.getInstance() Object fieldID = getFieldById("customfield_11300") MutableIssue currentIssue = componentManager.getIssueManager().getIssueObject(Long.parseLong(fieldID.value)) FormField pr_field = getFieldById("customfield_11300") def isPR = componentManager.getUserUtil().getGroupNamesForUser(currentIssue.getReporterUser().getDisplayName()).contains("PR") if(!isPR){ pr_field.setHidden(false) pr_field.setRequired(true) } anyone can help? thanks! getGroupNamesForUser won't take a displayName, use "name" instead, or better: currentIssue.getReporterId() Thanks Jamie,but it's still not work. Mybe problem is here,but I'm not sure about this: MutableIssue current.
https://community.atlassian.com/t5/Marketplace-Apps-questions/How-to-get-issue-reporter-group-or-roles-in-Behaviours-Plugin/qaq-p/79163
CC-MAIN-2018-26
en
refinedweb
When it comes to me getting some work done, there is really only one place in the house where that happens. Coincidentally, it's the same place where I get some sleep, hack about with electronics, write code, record videos, take photos of products, etc. I call it my "room", it gets a little cramped, but that's the least of my issues. What I really don't need are interruptions. Creatives and coders would probably best sympathize with me - once you get in the "groove", find your "muse" or get in the "rhythm", the last thing you need is someone opening your door unexpectedly. Even worse is when you're trying to catch some shuteye, and someone opens the door and barges in. My family has a habit of doing this, and it's not helped by the fact I tend to have odd sleeping hours as I'm not "limited" by regular 9-5 engagements, and my hobbies often dictate being awake at odd hours. Anyway, it was getting to breaking point several months ago, and I decided to try and solve it once and for all. I looked around my room to find the PiFace Control and Display I had from the Raspberry Pi New Year's Party Pack RoadTest which remained unused, as well as a Raspberry Pi Model A (original) which I bought out of curiosity, having no Ethernet port and just one USB connector. Great! Virtually everything I need to solve this problem - by making an electronic network-connected door sign which I can change at will from any computer on the network (or beyond, by tunneling through another Raspberry Pi set to expose its SSH to the world). Before I continue further, I must warn everyone that this was a hack borne out of frustration, where time to a solution was the key. As a result, there are known security problems, namely no authentication, user-side validation only and web server running on an account with root privileges. Unless you can guarantee the security and non-maliciousness of the environment where the unit will be deployed, you should NOT use this as an example to set-up your own door sign. Even then, I probably have missed out a few steps, having done this a few months ago, fixing any issues I found on the fly and not writing anything down. It's more an inspiration for others to see what is possible. Raspberry Pi Model A Set-Up Setup of the Raspberry Pi Model A is actually quite confusing because of the limited hardware. It's probably best done interactively, so using an HDMI monitor, USB keyboard and mouse, and USB wireless network adapter. In order to do this, you will need a USB hub, preferably powered. Or, if you're sneaky like I am ... just set up the card on the Model B first, then transfer it over to the Model A once you're ready. The first thing you will need to do all the normal raspi-config configuration. Expanding filesystem, overclocking, enabling SSH, setting locale, timezone, keyboard layout, disable overscan, enable SPI, change hostname, change password. The norm. To simplify wireless configuration, I decided to startx and configure Wi-Fi using the graphical utility, which seems to store any required information in /etc/wpa_supplicant/wpa_supplicant.conf. Once that's done, it's always good to keep yourself up to date with sudo apt-get update and sudo apt-get upgrade. Because we will be working with the pi remotely, and we would like to find it easily, a static IP was allocated by modifying /etc/network/interfaces - in my case, to 192.168.0.36. iface wlan0 inet manual wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf iface default inet static address 192.168.0.36 netmask 255.255.255.0 gateway 192.168.0.1 Not only that, we also need to modify /etc/resolv.conf to have one line, nameserver 192.168.0.1 or whichever nameserver you want. In the case of the latest Raspbian distribution, resolv.conf is generated from resolvconf.conf which needs its local nameserver line commented out, and 127.0.0.1 replaced with your nameserver of choice, so that DNS resolves work properly (otherwise you will have trouble installing or updating things. It's good to reboot your unit again and test that it is reachable via SSH on the IP configured. Another tweak you might want to do is to go back to sudo raspi-config to configure the GPU memory split to be 16Mb - as you will probably only be running in text mode, so freeing RAM for other processes. Now that we've got the configuration workable remotely, you can pull the card and shove it into the Model A along with the one wireless adapter and the PiFace Control and Display module. Then, we need to install the relevant packages. My initial thoughts was that it would be enough to be able to change the sign by SSHing into the box and running a shell script, but that would get tedious very quickly, so I wanted a web interface as well. As a result, I decided to install the support packages for pifacecad and lighttpd as the web server. That's as simple as sudo apt-get install python{,3}-pifacecad lighttpd. At this stage, the PiFace Control and Display module should be working (or might be after a reboot) - a simple Python3 program like the following should write a message onto the screen. #!/usr/bin/env python3 import pifacecad cad = pifacecad.PiFaceCAD() cad.lcd.blink_off() cad.lcd.cursor_off() cad.lcd.clear() cad.lcd.backlight_on() cad.lcd.write("Gough: AWAKE \nCome in!") If that is working, then we are ready to make the interface work. Gluing the Web Server to the PiFace Control and Display I decided to go with lighttpd and the tried and trusted cgi interface to allow the web server to make calls to python. Despite the above example using python3, I decided to go with python (2) instead for this. The first problem I found was the issue of configuring the web server correctly. In the end, my /etc/lighttpd/lighttpd.conf looks like this: server.modules = ( "mod_access", "mod_cgi", "mod_alias", "mod_compress", "mod_redirect", # "mod_rewrite", ) server.document-root = "/var/www" server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) server.errorlog = "/var/log/lighttpd/error.log" server.pid-file = "/var/run/lighttpd.pid" server.username = "pi" server.groupname = "pi"" $HTTP["url"] =~ "^/cgi-bin/" { cgi.assign = ( ".py" => "/usr/bin/python" ) } Significant changes include making the server use the pi username and pi group, and the addition of mod_cgi, and the last few lines which configure the .py extension to call /usr/bin/python as the interperter. Lighttpd needs to be restarted with a sudo service lighttpd restart for the changes to take effect (or you could try to use reload, but restart is just as easy). Now we need an interface and the cgi-script to perform the action. I decided to go and write a simple python script for the cgi-handler called pidisplay.py, placed inside the cgi-bin directory under /var/www. Of course, it must be sudo chmod +x pidisplay.py to ensure it is executable. #!/usr/bin/env python import cgi import cgitb import pifacecad cgitb.enable() cad=pifacecad.PiFaceCAD() cad.lcd.blink_off() cad.lcd.cursor_off() cad.lcd.clear() cad.lcd.backlight_on() print "Content-type: text/html\n\n<html>" form=cgi.FieldStorage() if "lineone" not in form: print "<h1>Line 1 is Blank</h1>" else: text=form["lineone"].value print cgi.escape(text) cad.lcd.write(cgi.escape(text)) print "<p>" cad.lcd.write("\n") if "linetwo" not in form: print "<h1>Line 2 is Blank</h1>" else: text=form["linetwo"].value print cgi.escape(text) cad.lcd.write(cgi.escape(text)) print "<p>" print "Screen Updated!<p><h1><a href=\"/\">Go Back</a></h1></html>" The interface was coded as a hand-coded HTML file, called index.html in the /var/www directory. There is a form for a custom message to be submitted, and there is a list of URL-encoded presets, both of which provide the lineone and linetwo values to the pidisplay.py file. <html> <title>Raspberry Pi Door Display</title> <h1>Raspberry Pi Door Display</h1> Enter message and hit submit to be displayed on PiFace Control and Display. <p> <form action="/cgi-bin/pidisplay.py" method="POST"> Line 1: <input type="text" name="lineone" maxlength="16" size="20" value=""><p> Line 2: <input type="text" name="linetwo" maxlength="16" size="20" value=""><p> <input type="submit" value="Submit"> <input type="reset" value="Reset"> </form> <p><h1>Presets:</h1> <ul> <li><a href="/cgi-bin/pidisplay.py?lineone=Gough: SLEEPING &linetwo=Do not disturb">Sleeping</a></li> <li><a href="/cgi-bin/pidisplay.py?lineone=Gough: NAPPING &linetwo=Do not disturb">Napping</a></li> <li><a href="/cgi-bin/pidisplay.py?lineone=Gough: WORKING &linetwo=Do not disturb">Working</a></li> <li><a href="/cgi-bin/pidisplay.py?lineone=Gough: AWAKE &linetwo=Knock to enter">Awake</a></li> <li><a href="/cgi-bin/pidisplay.py?lineone=Gough: ON PHONE &linetwo=Do not disturb">On Phone</a></li> <li><a href="/cgi-bin/pidisplay.py?lineone=Gough: NOT HOME &linetwo=Returning later">Not Home</a></li> </ul> </html> With the files in place, we can give it a go. Mounting it and Using It The Raspberry Model A board was mounted in the base on the old Multicomp Raspberry Pi case. This was then Blu-Tacked to the wall above the light switch just outside my door. The power lead was supplied from a manhole into the roof, where I already had a power supply (XP Power 5v 8A, on formerly on clearance from element14) running several other Raspberry Pis in the roof. The wireless adapter used is one of my favourites for Raspberry Pi - it's a TP-Link TL-WN722N with external antenna connector allowing better antennas to extend range. It's an Atheros based adapter and it's very sensitive. I've been running this for months without any drop-outs! Visiting the web page on a web browser provides the main interface with a form. Clicking on one of the presets, causes the web browser to submit a GET request and sit there for a few seconds while the python file executes. Then it returns the successful status. If we go and find the display, we will see that it has been updated. The interface works equally well from virtually any web browser because it doesn't use anything special, HTML wise. I could easily whip out my phone just to tell everyone that I'm working. It might look relatively straightforward, although I did have to spend some time working around user permissions to try and persuade the web server to execute with root privileges, or at least enough privileges to access the hardware directly. If you find it doesn't work, and results in blank pages, or internal server errors, you're probably running into a permissions issue. It's a lot of computing power for a display sign, which might be equally well achievable with the Arduino Yun and a LCD shield, although I had the Model A and PiFace Control and Display, so I used what I had. Limitations As this was hacked together in under an hour, it's clear that there is no provision to read-out the screen to determine what it is showing. None of the other PiFace Control and Display peripherals are used. No scrolling of the display is supported at this stage. The web server is being run as the pi user which has root privileges making it insecure and possibly prone to compromise in case of bugs. Validation of the text is done on the user-side for length, extreme lengths could result in unpredictable overwriting of the display. No simultaneous access management is provided - simultaneous calls to the .py file will result in a blanked display or corrupted display. No authentication is provided, although HTTP Basic should be fairly easy to set up. No automatic updating occurs, so you must keep an eye on it and update it from time to time to maintain some security. The interface is not particularly good looking, but it works. There are probably many more limitations that I haven't thought of, but in the end, it was a quick hack that works. It's fine if you're using it at home, in a network which you keep secure and safe, away from persistent malicious actors. Conclusion At last, the PiFace Control and Display finds a use around the house. Now I have a door sign I can control from virtually anywhere in the world with my phone and a data connection (Connectbot SSH tunneling + Web browser). Now, as long as I remember to set it to the right status, I can inform anyone who passes by whether it's okay to come in. Now, if only my family would actually READ the sign ...
https://www.element14.com/community/people/lui_gough/blog/2015/07/20/project-raspberry-pi-door-sign-in-under-an-hour
CC-MAIN-2018-26
en
refinedweb
Edit : instead of buffering in Hash and then emitting at cleanup you can use a combiner. Likely slower but easier to code if speed is not your main concern Le 01/03/2015 13:41, Ulul a écrit : > Hi > > I probably misunderstood your question because my impression is that > it's typically a job for a reducer. Emit "local" min and max with two > keys from each mapper and you will easily get gobal min and max in reducer > > Ulul > Le 28/02/2015 14:10, Shahab Yunus a écrit : >> As far as I understand cleanup is called per task. In your case I.e. >> per map task. To get an overall count or measure, you need to >> aggregate it yourself after the job is done. >> >> One way to do that is to use counters and then merge them >> programmatically at the end of the job. >> >> Regards, >> Shahab >> >> On Saturday, February 28, 2015, unmesha sreeveni >> <unmeshabiju@gmail.com <mailto:unmeshabiju@gmail.com>> wrote: >> >> >> I am having an input file, which contains last column as class label >> 7.4 0.29 0.5 1.8 0.042 35 127 0.9937 3.45 0.5 10.2 7 1 >> 10 0.41 0.45 6.2 0.071 6 14 0.99702 3.21 0.49 11.8 7 -1 >> 7.8 0.26 0.27 1.9 0.051 52 195 0.9928 3.23 0.5 10.9 6 1 >> 6.9 0.32 0.3 1.8 0.036 28 117 0.99269 3.24 0.48 11 6 1 >> ................... >> I am trying to get the unique class label of the whole file. >> Inorder to get the same I am doing the below code. >> >> /public class MyMapper extends Mapper<LongWritable, Text, >> IntWritable, FourvalueWritable>{/ >> / Set<String> uniqueLabel = new HashSet();/ >> / >> / >> / public void map(LongWritable key,Text value,Context context){/ >> / //Last column of input is classlabel./ >> / Vector<String> cls = CustomParam.customLabel(line, >> delimiter, classindex); // / >> / uniqueLabel.add(cls.get(0));/ >> / }/ >> / public void cleanup(Context context) throws IOException{/ >> / //find min and max label/ >> / context.getCounter(UpdateCost.MINLABEL).setValue(Long.valueOf(minLabel));/ >> / context.getCounter(UpdateCost.MAXLABEL).setValue(Long.valueOf(maxLabel));/ >> /}/ >> Cleanup is only executed for once. >> >> And after each map whether "Set uniqueLabel = new HashSet();" the >> set get updated,Hope that set get updated for each map? >> Hope I am able to get the uniqueLabel of the whole file in cleanup >> Please suggest if I am wrong. >> >> Thanks in advance. >> >> >
http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-user/201503.mbox/%3C54F30B2D.8020203@ulul.org%3E
CC-MAIN-2018-26
en
refinedweb
Why is routing so complex? If you write JavaScript code, it’s very likely you’ve already encountered some of the existing routing solutions. On the client side, there is a lot of them, depending on which framework you use. On nodejs, the most popular one is express. I’ll be focusing on client side routing, but the same ideas apply to server side routing. Angular, Ember and React have different solutions for routing. But they have one thing in common: they are complex and take time to learn. Let’s look at Ember for example. Here’s a typical router file: Router.map(function() { this.route('about', { path: '/about' }); this.route('posts', function() { this.route('new'); this.route('favorites'); }); }); The first questions that come to mind are why some of the routes have a path while others don’t? Where are the handlers for these routes? We are referring to this inside the callback functions, do we need to bind them? Then we need to learn how to define route handlers. And learn how Ember magically matches a route with its handler based on the name. And learn how to use route hooks to fetch data. And the list goes on. React Router, on the other hand, does less magic and keeps things explicit. But it still suffers from the same API bloat and learning curve problems. <Router> <Route path="/" component={App}> <IndexRoute component={AppIndex} /> <Route path="about" component={About} /> <Route path="posts" component={Posts}> <Route path="new" component={NewPost} /> <Route path="favorite" component={FavoritePosts} /> </Route> </Route> </Router> Similar questions: what other props does <Route> support? What’s an index route, and how is App different from AppIndex? Etc.. Let’s take a step back What’s the main responsibility of a router? In the client side world, the router has the single responsibility of mapping url paths to UI content. It’s as simple as that. Similarly, on server side, the router maps HTTP requests to HTTP responses. Since routing is just a mapping, why not use plain ol’ JavaScript objects? // routes.js { 'about': <About />, 'posts': { 'new': <NewPost />, 'favorites': <FavoritePosts />, } } Well, that look promising. But wait. Those are all static components. What if I need dynamic routing? What if I want to render different things based on some variable? Fortunately, functions are first-class citizens in JS: // routes.js { 'about': <About />, 'posts': { 'favorites': () => isLoggedIn() ? <Favorites /> : <Login />, '{id}': (route) => <Post id={route.params.id} />, } } Alright, but what if I need to fetch some data before I render the component? What if I need to lazy-load the component? Promises come to the rescue! // routes.js { 'about': () => lazyLoad('About').then(About => <About />), 'posts': { 'favorites': () => fetchFavs().then(favs => <Favorites favs={favs} />), } } Now of course putting all these routes in a single file would make a big mess. But because the routes are simple JS objects and functions, they can easily be modularized and split into multiple files: // posts/routes.js export default { 'favorites': ..., '{id}': ..., }; // profile/routes.js export default { 'view': <ProfileView />, 'settings': <ProfileSettings />, }; // routes.js import About from './about'; import PostsRoutes from './posts/routes'; import ProfileRoutes from './profile/routes'; export default { 'about': <About />, 'posts': PostsRoutes, 'profile': ProfileRoutes, }; Thanks for reading! I’d love to hear your thoughts :)
https://medium.com/@mdebbar/why-is-routing-so-complex-fd1b316cda23
CC-MAIN-2018-26
en
refinedweb
in reply to Stepping up from XML::Simple to XML::LibXML The example is an xml fragment, can you show one using a real xml file? One with a default namespace declared... It appears if you have: <library xmlns=""> </library> Then what would the xpath queries look like?) Lots Some Very few None Results (211 votes), past polls
http://www.perlmonks.org/?node_id=712128
CC-MAIN-2016-07
en
refinedweb
Opened 4 years ago Closed 4 years ago #17771 closed Bug (wontfix) weird problem db with autocommit Description Hello, Here is problematic code (standard django setup, mysql backend): import time import os, sys, re sys.path.append(os.path.abspath(os.path.dirname(__file__))+'/..') os.environ['DJANGO_SETTINGS_MODULE'] = 'settings' from django.conf import settings from django.contrib.auth.models import User #from django.db import connection #cursor = connection.cursor() #cursor.execute('SET autocommit = 1') while True: u = User.objects.get(pk=1) print u.first_name time.sleep(1) It displays user first_name each second. On the other hand with mysql client : $ update auth_user set first_name = "foo" where id=1; Value does not update in my loop (wireshark shows the old value too in the mysql packets dumped). If I restart the process, it fetch the correct new value. I can fix the problem by adding the 3 autocommit lines commented out. Problem do not occur on my ubuntu 32b desktop (32bits django 1.3.1 / MySQL-python 1.2.3, mysql 5.1.58) nor a debian squeeze server (32bits mysql 5.1.49). Problem occurs on a 64 bits debian squeeze server (64bits mysql 5.1.49) and a ubuntu 64 server (64 bits mysql 5.1.41). Thanks. Change History (4) comment:1 Changed 4 years ago by meister <admin@…> - Needs documentation unset - Needs tests unset - Patch needs improvement unset comment:2 follow-up: ↓ 3 Changed 4 years ago by akaariai The problem seems to be that you are running in a transaction with repeatable read semantics. I don't think Django supports autocommit for MySQL. comment:3 in reply to: ↑ 2 ; follow-up: ↓ 4 Changed 4 years ago by meister <admin@…> comment:4 in reply to: ↑ 3 Changed 4 years ago by meister <admin@…> - Resolution set to wontfix - Status changed from new to closed Replying to meister <admin@…>: The problem seems to be that you are running in a transaction with repeatable read semantics. I don't think Django supports autocommit for MySQL. It doesn't explain why my code works on some environments and do not work on others... It works when using MyISAM and creates some problem with InnoDB. My problem is explained in. A easier way to reproduce it : aaa
https://code.djangoproject.com/ticket/17771
CC-MAIN-2016-07
en
refinedweb
- OSI-Approved Open Source (154) - BSD License (17) - Apache License V2.0 (10) - MIT License (9) - Mozilla Public License 1.1 (7) - Mozilla Public License 1.0 (6) - Artistic License (5) - Apache Software License (3) - Academic Free License (2) - Common Public License 1.0 (2) - PHP License (2) - wxWindows Library Licence (2) - zlib/libpng License (2) - Adaptive Public License (1) - Apple Public Source License (1) - Common Development and Distribution License (1) - Public Domain (11) - Other License (4) - Modern (154) - Windows (154) - Grouping and Descriptive Categories (106) - Linux (86) - BSD (51) - Mac (47) - Other Operating Systems (22) GTK+ for Windows Runtime Environment The files required to run GTK+ applications on Windows4,792.134 JsUnit JsUnit is a unit testing framework for client-side JavaScript in the tradition of the XUnit frameworks. Development began in 2001. As of 11/28/2009, development has moved to GitHub: weekly downloads Battle Tanks Fast 2d tank arcade game with multiplayer and split-screen modes.48 weekly downloads Teuton Preinstalled Environment (TPE) Alternative PE to rescue, hack or troubleshoot your machine or network ODBTP - Open Database Transport Protocol ODBTP (Open Database Transport Protocol) provides remote ODBC access to databases for client applications where local ODBC access is unavailable or inadequate.18 weekly downloads Class Viewer for Java Lightweight quick reference tool. View public class info.69 weekly downloads Media XW Extensions to Windows operating systems for providing better support for various popular media formats.63 weekly downloads sdlBasic A easy basic in order to make games in 2d style amos for linux and windows The basic it comprises a module runtime, ide examples and games AutoGen: The Automated Program Generator AutoGen is designed to generate text files containing repetitive text with varied substitutions. Its goal is to simplify the maintenance of programs that contain large amounts of repetitious text, especially when needed in parallel tables
http://sourceforge.net/directory/os:mswin_2000/license:lgpl/license:gpl/
CC-MAIN-2016-07
en
refinedweb
Hello, I am a student and I'm working on a problem. I have written other small class and method assignments without a problem, but I'm getting confused as to what's wrong with my code. I'm doing a simple assignment where I'm asked to create a class, and then populate and display it in another. I can compile without any errors, so it makes it a little harder to troubleshoot. Here is my code from the first class. public class PayrollML { private String name; //employee's first and last name private int idNumber; //Employee's ID number private double rate; //Employee's hourly rate private double hours; //employee's hours worked public void setName( String name ) { name = (name != null ? name : "No name" ); } public void setIdNumber( int idNumber ) { idNumber = ( idNumber >=0? idNumber : 0 ); } public void setRate( double rate ) { rate = ( rate >= 0? rate :0 ); } public void setHours( double hours ) { hours = ( hours >= 0? hours : 0 ); } public String getName() { return name; } public int getIdNumber() { return idNumber; } public double getRate() { return rate; } public double getHours() { return hours; } } Here is the second class to create and instance, populate it and then display the information. import javax.swing.*; class PayrollMLInput { public static void main( String [] args ) //fill in the information for the employees { String input; String payName; int payIdNumber; double payRate; double payHours; double payGross; //creates an object from the payroll class. PayrollML Payee1 = new PayrollML(); [COLOR="Orange"]// I assume this area is where the problem lies. Setting the data. I cannot tell for sure, since it all compiles without errors.[/COLOR] [COLOR="Red"]input = JOptionPane.showInputDialog( "Please enter employee name."); payName = input; Payee1.setName(payName); input = JOptionPane.showInputDialog( "Please enter employee ID."); payIdNumber = Integer.parseInt(input); Payee1.setIdNumber(payIdNumber); input = JOptionPane.showInputDialog( "Please enter employee rate of pay in dollars per hour."); payRate = Double.parseDouble(input); Payee1.setRate(payRate); input = JOptionPane.showInputDialog( "Please enter employee hours worked."); payHours = Double.parseDouble(input); Payee1.setHours(payHours); payGross = (Payee1.getRate() * Payee1.getHours());[/COLOR] [COLOR="Orange"]//Unless this is the issue and I'm not calling it correctly. Either way, I cannot tell for sure where my error is. To me it looks like I've named the class and parameters correctly.[/COLOR] JOptionPane.showMessageDialog( null, "Employee is " + Payee1.getName() + "\n" + "Id Number: " + Payee1.getIdNumber() + "\n" + "Makes " + Payee1.getRate() + " Per Hour\n" + "Worked " + Payee1.getHours() + " hours this week\n" + "Earned " + payGross + " dollars"); System.exit(0); } } When I run the second program, after all the JOptionPane promts are completed, I get nulls and zero's for the displayed data. Any help in pointing out my error would be greatly appreciated. Thank you.
http://www.javaprogrammingforums.com/whats-wrong-my-code/4872-class-not-passing-information.html
CC-MAIN-2016-07
en
refinedweb
From commits-return-16306-apmail-commons-commits-archive=commons.apache.org@commons.apache.org Tue Mar 01 21:19:22 2011 Return-Path:. Initializer classes deal with the creation of objects in a multi-threaded +environment. There are several variants of initializer implementations serving +different purposes. For instance, there are a couple of concrete initializers +supporting lazy initialization of objects in a safe way. Another example is + BackgroundInitializer which allows pushing the creation of an +expensive object to a background thread while the application can continue with +the execution of other tasks. Here is an example of the usage of BackgroundInitializer +which creates an EntityManagerFactory object: +public class DBInitializer extends BackgroundInitialize<EntityManagerFactory> { + protected EntityManagerFactory initialize() { + return Persistence.createEntityManagerFactory("mypersistenceunit"); + } +} ++ An application creates an instance of the DBInitializer class +and calls its start() method. When it later needs access to the + EntityManagerFactory created by the initializer it calls the + get() method; get() returns the object produced by the +initializer if it is already available or blocks if necessary until initialization +is complete. Alternatively a convenience method of the ConcurrentUtils +class can be used to obtain the object from the initializer which hides the +checked exception declared by get(): +DBInitializer init = new DBInitializer(); +init.start(); + +// now do some other stuff + +EntityManagerFactory factory = ConcurrentUtils.initializeUnchecked(init); ++ Comprehensive documentation about the concurrent package is +available in the user guide. A common complaint with StringEscapeUtils was that its escapeXml and escapeHtml methods should not be escaping non-ASCII characters. We agreed and made the change while creating a modular approach to let users define their own escaping constructs. The simplest way to show this is to look at the code that implements escapeXml:
http://mail-archives.apache.org/mod_mbox/commons-commits/201103.mbox/raw/%3C20110301211859.3E75023889FD@eris.apache.org%3E
CC-MAIN-2016-07
en
refinedweb
This is mostly a fairly minor update, just a handful of bug fixes I wanted to get out there. Notably though, it does have the much-requested auto complete support in the API. Auto complete itself is now implemented via a plugin, and you can choose to either hook into that, or show a separate completions menu (via the new method view.showCompletions). A short example of hooking into the existing auto complete command: from AutoComplete import AutoCompleteCommand def AddGreeting(view, pos, prefix, completions): return "Hello!"] + completions AutoCompleteCommand.completionCallbacks'AddGreeting'] = AddGreeting This will add "Hello!" as the first available auto complete suggestion. Freaking awesome! Thanks Jon Can you elaborate on bug fixes in this release? They're listed in the changelog at: the last 4 items mentioned are bug fixes. Nice, my bad I forgot to check that page, I was on the phone when posting that Really looking forward to The changed autocompletion makes me happy as you removed the dependecy on trailing punctuation. This was sometimes annoying when programming PL/SQL. Many thanks either way you do it, make it optional. In a really LARGE project I wouldn't want a bunch of sublime-snippets files reflecting all the ctags from that project, it would easily get to the 10k snippet files in a sec lol...On the other hand I would like for the ctags to pop down and show me all the definitions and allow me to select whichever Now, if youre thinking of doing insertInlineSnippet when selecting the definition (function) from the drop down then I don't see a problem why not have the snippet dynamically generated for you. (without creating sublime-snippet files tho)
https://forum.sublimetext.com/t/20090530-beta/189
CC-MAIN-2016-07
en
refinedweb
< Main Index JBoss AS Tools News > Seam code completion and validation now also supports the Seam 2 notion of import's. import This allow the wizard to correctly identify where the needed datasource and driver libraries need to go. Hibernate support is now enabled by default on war, ejb and test projects generated by the Seam Web Project wizard. This enables all the HQL code completion and validation inside Java files. Code completion just requires the Hibernate Console configuration to be opened/created, validation requires the Session Factory to be opened/created. When the Seam Wizards (New Entity, Action, etc.) completes, it now (if necessary) automatically touch the right descriptors to get the new artifacts redeployed on the server. EL code completion now support more of the enhancements to EL available in Seam, e.g. size, values, keySet and more are now available in code completion for collections and will not be flagged during validation. size values keySet
http://docs.jboss.org/tools/whatsnew/seam/seam-news-1.0.0.cr1.html
CC-MAIN-2016-07
en
refinedweb
I need some help. I need to write a program that will take a file that has a list of numbers and will find the largest number, the smallest number, the average, the sum, and the number of values on the list. It also needs a menu. This is what I have so far: #include <iostream> #include <fstream> using namespace std; int main() { ifstream inputFile; //file stream object int choice; // menu choice int number; // to hold a value from file int total; //to hold a value from file int large; // to hold largest value from file int small; // to hold smallest value from file int sum; // to hold sum of the values from file int average; // to hold the average values from file int quantity; // to hold the number of values entered inputFile.open("TopicDin.txt"); // Open the file do { //display the menu and get a choice cout << "Make a selection from the list:"; cout << "1. Get the largest value"; cout << "2. Get the smallest value"; cout << "3. Get the sum of the values"; cout << "4. Get the average"; cout << "5. Get the number of values entered"; cout << "6. End this program"; cout << "Enter your choice: "; cin >> choice; //Validate the menu selection while (choice < 1 || choice > 6) { cout << "Please enter 1, 2, 3, 4, 5, or 6: "; cin >> choice } // Respond to the user's menu selection switch (choice) { case 1: inFile << number cout << large; break; case 2; cout << small; break; case 3; inFile << number do number += number while(number != " "); cout << number; break; case 4; cout << average; break; case 5; cout << quantity; break; case 6; cout << "Program ending. \n"; break; } } while (choice != 6) return 0; } I'm just stuck on how to evaluate the list to find these numbers. Mod edit - Fixed code tags ~BetaWar
http://www.dreamincode.net/forums/topic/68191-find-the-largest-number-the-smallest-number-the-average-the-sum/page__p__441240
CC-MAIN-2016-07
en
refinedweb
pyConditions 0.0.1 Guava Like Preconditions in Python. Guava like precondition enforcing for Python. Has been tested against: - 2.6 - 2.7 - 3.2 - 3.3 - pypy Decorate functions with preconditions so that your code documents itself and at the same time removes the boilerplate code that is typically required when checking parameters. An Example: def divideAby1or10( a, b ): if not ( 1 <= b <= 10 ): <raise some error> else: return a / b Simply becomes the following: from pyconditions.pre import Pre pre = Pre() @pre.between( "b", 1, 10 ) def divideAbyB( a, b ) return a / b In the above example the precondition pre.between ensures that the b variable is between (1, 10) inclusive. If it is not then a PyCondition exception is thrown with the error message detailing what went wrong. More Examples: from pyconditions.pre import Pre pre = Pre() @pre.notNone( "a" ) @pre.between( "a", "a", "n" ) @pre.notNone( "b" ) @pre.between( "b", "n", "z" ) def concat( a, b ): return a + b The above ensures that the variables a and b are never None and that a is between ( ?a?, ?n? ) inclusively and b is between ( ?n?, ?z? ) inclusively. from pyconditions.pre import Pre pre = Pre() BASES = [ 2, 3, 4 ] @pre.custom( a, lambda x: x in BASES ) @pre.custom( b, lambda x: x % 2 == 0 ) def weirdMethod( a, b ): return a ** b Using the custom precondition you are able to pass in any function that receives a single parameter and perform whatever condition checking you need. - Downloads (All Versions): - 162 downloads in the last day - 495 downloads in the last week - 2754 downloads in the last month - Author: Sean Reed - Maintainer: Sean Reed - Keywords: preconditions conditions assertion decorators - License: LICENSE.txt - Categories - Package Index Owner: streed - DOAP record: pyConditions-0.0.1.xml
https://pypi.python.org/pypi/pyConditions/0.0.1
CC-MAIN-2016-07
en
refinedweb
Larry Franks and Brian Swan on Open Source and Device Development in the Cloud In part 1, I presented an overview of testing on Windows Azure. One thing that I missed in the original article is that while there’s no testing framework for Ruby that exposes a web front-end, it’s perfectly possible to have your test results saved out to a file. For this example, I’m using RSpec and formatting the output as HTML. I’m also using the deployment sample I mentioned in a previous post. For the test, I created a test for a user class, user_spec.rb. This is stored in the ‘\WorkerRole\app\spec’ folder, along with spec_helper.rb. Spec_helper.rb just has an include for the class, require_relative '../user' since the user.rb file is in the app folder. require_relative '../user' The contents of the user_spec.rb file are: require 'spec_helper' describe User do before :each do @user = User.new "Larry", "Franks", "larry.franks@somewhere.com" end describe "#new" do it "takes user's first name, last name, and e-mail, and then returns a User object" do @user.should be_an_instance_of User end end describe "#firstname" do it "returns the user's first name" do @user.firstname.should == "Larry" end end describe "#lastname" do it "returns the last user's last name" do @user.lastname.should == "Franks" end end describe "#email" do it "returns the user's email" do @user.email.should == "larry.franks@somewhere.com" end end end So really all this does is create an User and verify you can set a few properties. The user.rb file that contains the class being tested looks like this: class User attr_accessor :firstname, :lastname, :email def initialize firstname, lastname, email @firstname = firstname @lastname = lastname end end In order to run this test once the project is deployed to Windows Azure, I created a batch file, runTests.cmd, which is located in the application root: ‘\WorkerRole\app\runTests.cmd’. This file contains the following: REM Strip the trailing backslash (if present) if %RUBY_PATH:~-1%==\ SET RUBY_PATH=%RUBY_PATH:~0,-1% cd /d "%~dp0" set PATH=%PATH%;%RUBY_PATH%\bin;%RUBY_PATH%\lib\ruby\gems\1.9.1\bin; cd app call rspec -fh -opublic\testresults.html Note that this adds an additional \bin directory to the path; the one in \lib\ruby\gems\1.9.1\bin. For some reason, this is where the gem batch files/executables are installed when running on Windows Azure. Bundler ends up in the %RUBY_PATH%\bin, but everything installed by Bundler ends up in the 1.9.1\bin folder. The runTests.cmd is also added as a in the ServiceDefinition.csdef file. It should be added as the last task in the section so that it runs after Ruby, DevKit, and Bundler have be installed and processed gems in the Gemfile. The entry I’m using is: <Task commandLine="runTests.cmd" executionContext="elevated"> <Environment> <Variable name="RUBY_PATH"> <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/LocalResources/LocalResource[@name='ruby']/@path" /> </Variable> </Environment> </Task> Going back to the runTests.cmd file, note that the last statement runs rspec, -fh instructs it to format the output as HTML, and -o instructs it to save the output in the public\testresults.html folder (under ’\WorkerRole\app). Since I’m using Thin as my web server, the public directory is the default for static files. After deploying the project to Windows Azure and waiting for it to start, I can then browse to to see the test results. Note that by default this is publicly viewable to anyone who knows the filename, so if you don't want anyone seeing your test output you'll need to secure it. Maybe set up a specific path in app.rb that returns the file only when a certain users logs in. Also, this isn't something you need to run all the time. Run your tests in staging until you're satisfied everything is working correctly, then remove the runTests.cmd and associated <Task> for subsequent deployments. So running tests in Windows Azure turns out to not be that hard, it’s just a matter of packaging the tests as part of the application, then invoking the test package through a batch file as a startup task. In this example I save the output to a file that I can remotely open in the browser. But what about when my tests are failing and I want to actually play around with the code to fix them and not have to redeploy to the Azure platform to test my changes? I’ll cover that next time by showing you how to enable remote desktop for your deployment.
http://blogs.msdn.com/b/silverlining/archive/2012/01/16/testing-ruby-applications-on-windows-azure-part-2-output-to-file.aspx
CC-MAIN-2016-07
en
refinedweb
UFDC Home | Help | RSS TABLE OF CONTENTS HIDE Title Page Acknowledgement Table of Contents List of Tables List of Figures Abstract Introduction Concepts of economic and political... Protohistoric Timucuan and Spanish... Archaeological contexts of Spanish-Indian... Structural remains and material... Archaeological indications of social... Conclusion : Spanish-Indian... A : excavation data and detailed... B : complete raw data for lithic... C : analysis of corncobs from the... Bibliography Bibliography Biographical sketch Back Matter Group Title: Political and economic interactions between Spaniards and Indians : archeological and ethnohistorical perspectives of the missions system in Florida Title: Political and economic interactions between Spaniards and Indians CITATION PAGE IMAGE ZOOMABLE Full Citation STANDARD VIEW MARC VIEW Permanent Link: Material Information Title: Political and economic interactions between Spaniards and Indians archeological and ethnohistorical perspectives of the mission system in Florida Physical Description: xvii, 366 leaves : ill. ; 28 cm. Language: English Creator: Loucks, Lana Jill, 1953- Publication Date: 1979 Subjects Subject: Timucua Indians -- Missions ( lcsh )Indians of North America -- Missions -- Florida ( lcsh )Acculturation ( lcsh )Excavations (Archaeology) -- Florida ( lcsh )Anthropology thesis Ph. D ( lcsh )Dissertations, Academic -- Anthropology -- UF ( lcsh ) Genre: bibliography ( marcgt )non-fiction ( marcgt ) Notes Thesis: Thesis--University of Florida. Bibliography: Bibliography: leaves 349-364. Statement of Responsibility: by Lana Jill Loucks. General Note: Typescript. General Note: Vita. Record Information Bibliographic ID: UF00025915 Volume ID: VID00001 Source Institution: University of Florida Holding Location: University of Florida Rights Management: All rights reserved, Board of Trustees of the University of Florida Resource Identifier: aleph - 000092942oclc - 06025225notis - AAK8351 Table of Contents Title Page Title Page Acknowledgement Page ii Page iii Page iv Page v Page vi Page vii Page viii Table of Contents Page ix Page x List of Tables Page xi Page xii List of Figures Page xiii Page xiv Abstract Page xv Page xvi Page xvii Introduction Page 1 Page 2 Page 3 Page 4 Page 5 Page 6 Page 7 Page 8 Page 9 Page 10 Page 11 Concepts of economic and political organization Page 12 Page 13 Page 14 Page 15 Page 16 Page 17 Page 18 Page 19 Page 20 Protohistoric Timucuan and Spanish mission period economics Archaeological contexts of Spanish-Indian life at Florida missions Structural remains and material culture at baptizing spring Archaeological indications of social and economic relationships Conclusion : Spanish-Indian interaction Page 317 Page 318 Page 319 Page 320 Page 321 Page 322 Page 323 Page 324 Page 325 Page 326 Page 327 Page 328 A : excavation data and detailed feature descriptions Page 329 Page 330 Page 331 Page 332 Page 333 Page 334 Page 335 Page 336 B : complete raw data for lithic artifacts Page 337 Page 338 Page 339 C : analysis of corncobs from the baptizing spring site, Florida Page 340 Page 341 Page 342 Page 343 Page 344 Page 345 Page 346 Page 347 Page 348 Bibliography Page 349 Bibliography Page 350 Page 351 Page 352 Page 353 Page 354 Page 355 Page 356 Page 357 Page 358 Page 359 Page 360 Page 361 Page 362 Page 363 Page 364 Biographical sketch Page 365 Page 366 Page 367 Back Matter Back Matter 1 Back Matter 2 Full Text POLITICAL AND ECONOMIC INTERACTIONS BETWEEN SPANIARDS AND INDIANS: ARCHEOLOGICAL AND ETHNOHISTORICAL PERSPECTIVES OF THE MISSION SYSTEM IN FLORIDA BY LANA JILL LOUCKS A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 1979 ACKNOWLEDGMENTS After the astringency of dissertation style, writing acknowledg- ments is relatively pleasurable. As one counts up the various persons and organizations one wishes to thank, the realization dawns that it is wise to write one's dissertation as early as possible lest the enumeration become totally unmanageable. I would like to express my gratitude to Owens-Illinois, Inc., ow- ners of the property on which this research was carried out. Mr. Harry Bumgarner, Manager of Southern Woodlands, has been particularly cooperative in allowing me and many others to work on Owens-Illinois property. The Wentworth Foundation of Clearwater, Florida, Mr. William Goza, President, funded the 1976 field research performed by Dr. Jerald T. Milanich and a University of Florida Archeological Field School. The National Endowment for the Humanities funded the 1978 field research and the grant has supported myself and another graduate student during the analysis and writing period. There are several Suwannee County residents who, perhaps unwittingly, have provided information which I have used in this dissertation. I would like to thank Mr. Lynne Johnson, Mr. Edmond Montgomery, Mr. Howe Land, and Mr. Leon (Lex) McKeithen. Mr. McKeithen was extremely helpful in providing information and making introductions. He also allowed the 1978 crew to live on his property in Columbia County. I thank Mr. Rick Stokell most heartily for serving with me as the 1977 survey crew. The days spent finding no sites, the thrashing through ii smilax thickets in the late afternoons, and the cautious trudging through rattlesnake territory would have been unbearable without his com- panionship, dedication, and interest. Several people put in a day or two on the survey, and I thank them, but Rick was there through prover- bial thick (undergrowth) and thin (site distribution). The 1978 field crew who participated at Baptizing Spring was small, enough that I can thank them individually. William Easton, map librarian and hockey manager at Illinois State University, Vicki Bagnell, Wade Hannah, and Woody Meiszner comprised the more or less permanent crew. David Stern, Patricia Vazquez, Renee Andrews, and TammieHearn were the faithful weekend volunteers who kept coming back time after time and who made the extra trips back and forth to Gainesville well worth my while. I owe a special thanks to Woody Meiszner, another graduate student, who was my "right-hand man" in the field and who, with his back- ground as an accountant, offered advice on personnel management and time budgeting (none of which I think I ever implemented). He was also a good friend whose inquiring mind kept my own thought running at a lively pace. Ms. Virginia Hanson also contributed to the field work but, more importantly, continued the analysis of ceramics which I had started. I owe her a debt of gratitude for her perserverence and commend her strong belief in the responsibility for carrying out one's commitments. At the end of one's structured educational career, the benefits owed to numerous professors come rushing back in an overwhelming flood of memories. There is not enough space to thank all of them. Very generally, I would like to thank the faculty and staff of the Department of Anthropology and at the Florida State Museum. Those whom I have ever come in contact with have been very good to me and I hope that I may be iii a credit to their time and efforts. Dr. Art Hanson inspired my interest in economic anthropology and spent extra time discussing my project with me during its incipient stages. Dr. Leslie Sue Lieberman has been not only my employer but my friend as well. I sometimes believe that she rescued my sanity when the hours.at night became too long. Dr. Prudence Rice is a demanding professor, a fact I can appreciate since it makes one's accomplishments that much more satisfying. She has brought ceramic technology to the Department and Museum, and I hope they both remain. Dr. Jack Ewel, fondly remembered from my course in ecosystems, opened new vistas and the other students in that class were such that I was sorry to see the last field trip end. Dr. William Maples, Curator of Social Sciences at the Florida State Museum, has always kept my interest in human osteology and forensic science at a keen level. He has allowed me several privileges at the Museum and his caustic remarks regarding the ineptitude of archeologists have kept me amused; I have too much res- pect for him to suffer indignation so I have simply tried harder not to be ignorant. Mrs. Lydia Deakin, our principal secretary and over-worked trouble- shooter, has gone out of her way to help me in many situations and has proved to be a most benevolent conspirator. She has also taken a per- sonal interest in my life and times, and my gratitude is best expressed in saying that I cannot express it. The members of my committee will always be special people, if only because they comprise my committee. Dr. Michael Gannon has been very interested in this project and provided me with information on what historians expect to have as information. He also made me realize that persons not familiar with Florida might even want to read this report. iv Dr. Cannon also bolstered my spirits by being the first to say that he thought my work did make a contribution to Florida mission research. Dr. Elizabeth Wing allowed me freedom in the Zooarcheological Laboratory of the Florida State Museum. I do not feel that I know her well but the inferences are clear that she is a remarkable person. She has always taken an interest in my work, considered my schemes and ideas, has volunteered information, and stimulated my curiosity. One can expect to work hard to please Dr. Wing, but it is extremely rewarding to do so. I have very special feelings for Dr. Kathleen Deagan as she was the first archeologist I ever worked with. Her field techniques are meticulous, her intellect keen, and her personality marvelous. She has been such a pleasure to be associated with. I known that anything I, or others, have written will receive honest and thoughtful criticism and consideration. Dr. Jerald Milanich provided me with my first introduction to Florida prehistory and I've been hooked ever since. He found money for me to carry out the survey and assented to our living at the Florida State Museum field camp during our summer season. Conversations with him, especially the longer ones, are always thought-provoking and stimulating. He is an excellent critic (a fact I usually appreciate two or three days after the initial shock) and having him on my com- mittee has been invaluable. Everytime one of Dr. Charles Fairbanks' students graduates, there is a real problem in trying to say something about him that has not al- ready been said. Like all others, I find myself in this quandry. As my chairman, he has read completely each chapter as it issued from my typewriter. His knowledge on a myriad of subjects is astounding (and v sometimes depressing). He takes personal and professional interest in every student who comes to his office. For Dr. Fairbanks, office hours tend to be a mere formality and I sometimes think we, his students, often take advantage of his generosity and patience. Of all the things I have learned from him, the most important have been concerned with professional ethics, responsibility to one's colleagues, and how to be a teacher. I am grateful to have known him and to have worked under his guidance. There are numerous peers who have enlightened my viewpoint and added several dimensions to my personality. We are all cohorts who have lived in a basement somewhere and I want to thank them for providing entertainment and moral and intellectual support, not to mention know- ledge in areas where I am lacking. Robin Smith, Nicholas Honerkamp, Theresa Singleton, Betsy Reitz, Nina Borremanns, Brenda Siglar-Lavelle, Sue Mullins, Ann Cordell, Mimi Saffer, Virginia Hanson, Malinda Stafford, Jere Moore, Gerry Evans, and Arlene Fradkin are only the most important few that immediately come to mind. I would particularly like to acknowledge Robin Smith as a person whose intellect and sensitivity I admire and whose friendship I cherish. Arlene Fradkin voluntarily analyzed the faunal material from the summer season at Baptizing Spring. Betsy Reitz found, and took it upon herself to analyze, faunal material from another mission site as comparative data. Betsy has finished her dissertation a little ahead of me and I have benefited by being able to compare "notes" on the processes and frustrations involved. She is a veritable wealth of information regarding faunal analysis and excells in numerous other areas. I admire her greatly and I have been the one to profit from our conversations. Brenda Siglar-Lavelle and I have had many conversations concerned with our research, which overlap after a fashion, and just as many that have had very little to do with the academic side of graduate life. I usually come away impressed with my own ignorance and, luckily, stimulated to do something about it. She is a very bright and gutsy lady and deserves more recognition than I can possibly provide. Two former doctoral students, who successfully completed their de- grees and have moved on to other "pastures," provided me their minds as great reverberating sounding boards. Ray Crook, with whom I think I was somewhat harsh concerning the relevance of archeology, gave me plat- ters of food for thought. Tim Kohler, a very special friend and con- fidante, is a man whose consideration, thoughtfulness, humor, generosity, and intellectual brilliance I can never adequately acknowledge. He helped me through some very hard times and rejoiced in my minor triumphs. A distance of 4000 miles has not made him any less accessible, even though I have gotten him out of bed on Sunday mornings to discuss corn (Zea mays) over the telephone. It ought to be obvious to anyone who is particularly fond of mul- ling over "acknowledgements" that there are an inordinate number of good people and scholars all gathered in the same arena. One's parents and family, if one is lucky, are supportive, generous, kind, and interested. I am extremely fortunate in having parents who have dragged me out of the basement lab for weekend excursions, who have kept me provisioned with food (and sometimes money just when I needed it most) throughout my graduate student existence, and most importantly, have been interested in what I've been doing. Connie and Bob Loucks have always been extremely proud of me, praising me beyond my blushing worth. vii- 1 would like to say that I have been extremely proud of them, for they are industrious, caring people who excell in their own fields and in- terests. My brothers have worked industriously and without compensation to keep my car in running order. We have never talked very much but there is not always the need (and it is embarrassing) when siblings become as close as we have grown. I think of them a great deal. I also would like to say, with regard to my relatives still in Canada, that I am just as pleased as punch that my graduation has been a topic of conversation and pride among them. I am very glad to be one of the clan, for an anthropologist and an archeologist can appreciate what it means to have time depth in one's life. In the face of all these acknowledgements, it is somewhat presump- tuous to assume final responsibility for this dissertation. No matter what others have given, however, it has all come together (or flown apart) within my own head. The final product is my own responsibility and any deficiencies are of my own making. Writing a dissertation, or any major work, is like painting a good picture: the result is usually a surprise to the artist who feels that it must have been executed by someone else, viii TABLE OF CONTENTS ACKNOWLEDGEMENTS . . . . .. .ii LIST OF TABLES . . . . . . xi LIST OF FIGURES . . . . . . xiii ABSTRACT . . . . . .. . CHAPTER ONE: INTRODUCTION . . . Acculturation . . . . Archeological Acculturation Studies . Specific Goals and Assumption of this Study CHAPTER TWO: CONCEPTS OF ECONOMIC ANTHROPOLOGY AND POLITICAL ORGANIZATION . . . Modes of Exchange . . . . . Exchange Spheres . . . . . Economics, Prestige, and Power . . . . Economic Archeology .. . . . . . CHAPTER THREE: PROTOHISTORIC TIMUCUAN AND SPANISH MISSION PERIOD ECONOMICS . . . Modelling Timucuan Economics . . . . Peninsular Economic and Demographic Conditions (1482-1700) . . . . Spanish-Indian Interaction (1564-1650) . . . Priests, Soldiers, Civilians, and Indians: 1650-1675 . 1675 1704 . . . . . . Economic Interactions During the Mission Period . . Hypotheses . . . . . . CHAPTER FOUR: ARCHEOLOGICAL CONTEXTS OF SPANISH-INDIAN LIFE AT FLORIDA MISSIONS . . Mission Archeology (1948-1977) . . Interpretations, Inferences, and Hypotheses of Previous Research . . . . The Utina . . . . . . Baptizing Spring . . . . . CHAPTER FIVE: STRUCTURAL REMAINS AND MATERIAL CULTURE AT BAPTIZING SPRING .. Structures at Baptizing Spring . . Lithic Artifacts . . . . Spanish Artifacts . . . . xv 1 2 6 8 12 13 15 16 18 21 32 35 39 50 57 63 72 80 80 91 97 100 127 129 150 167 I I 1 I I I f I I I I Indian Manufactured Ceramics . . ... .185 Faunal Remains . . . . ... .221 Floral Remains . . . . ... .233 CHAPTER SIX: ARCHEOLOGICAL INDICATORS OF SOCIAL AND ECONOMIC RELATIONSHIPS . . 237 Ceramic Diversity . . .. . 238 Similarity and Correlations . . 251 Distribution of Non-ceramic Prestige Goods ... .267 Weapons and Subsistence .. . . 268 Artifact and Structure Associations . ... .281 Sites Adjacent to Baptizing Spring . . 288 Comparison of Mission Period Sites . ... .293 CHAPTER SEVEN: CONCLUSIONS: SPANISH-INDIAN INTERACTION . . . ... .317 APPENDICES A. EXCAVATION DATA AND DETAILED FEATURE DESCRIPTIONS . . . .329 B. COMPLETE RAW DATA FOR LITHIC.ARTIFACTS ... .337 C. ANALYSIS OF CORNCOBS FROM THE BAPTIZING SPRING SITE, FLORIDA ..... .... . .. 340 BIBLIOGRAPHY . . . ... .350 BIOGRAPHICAL SKETCH . . . . ... .365 LIST OF TABLES Table 1. Gifts and Trade Goods Exchanged between Indians and Europeans . . . ... .43 Table 2. Flora Local to Baptizing Spring Vicinity . .106 Table 3. Worked Lithic Tools . . . . 158 Table 4. Utilized Lithic Tools. . . . .163 Table 5. Debitage by Form Group . . . 166 Table 6. Non-ceramic Spanish Artifacts . . ... .168 Table 7. General Distribution of Identifiable Spanish Ceramics . . . . 175 Table 8. South's Mean Ceramic Date Formula . ... 183 Table 9. Raw and Relative Frequencies of Aboriginal Ceramics 186 Table 10. Summary of Lip and Rim Forms for Selected Ceramics 210 Table 11. Species and Classes Represented in Structures A and B, Aggregated Spanish Area (A+B), and the Village: Number and % by Fragments . ... 224 Table 12. Class Percentage by MNI of Fauna . ... 226 Table 13. Summary Descriptive Statistics from 1979 (Kohler, Appendix C) Analysis of Carbonized Corncobs ... .235 Table 14. Aboriginal Ceramic Categories Used in Calculation of Shannon-Weaver Diversity Index . . .. 248 Table 15. Aboriginal Ceramic Diversity . . ... .250 Table 16. Weighted Ceramic Group/Type Counts . ... .257 Table 17. ANOVA Table for One-way Analysis of Variance between Spanish and Indian Structures ... . .. 260 Table 18. F Values of One-way Analysis of Variance between Structure Pair A-C and Pair A-D by Ceramic Type/Group 262 Table 19. ANOVA Table for One-way Analysis of Variance between Structure C and Structure D . . .. 263 xi Table 20 Table 21 Table 22. Table 23. Table 24. Table 25. Table 26. Table 27. Table 28. Table 29. . White-tailed Deer (Odocoileus virginianus) Element Distribution . . . . . Faunal Species and Elements from Spanish Structures (White-tailed Deer excluded) and Village . . . Worked and Utilized Lithic Artifacts from Structures C and D . . . . . Identifiable Aboriginal Ceramics Collected from the Surface of Sites Adjacent to Baptizing Spring . Distribution of Spanish (or European) Ceramics versus Aboriginal Ceramics.at Three Mission Period Sites . . . . . . SClassified Majolica Types and Diversity for Nine Florida Mission or Visita Sites . . . Aboriginal Ceramics from Eight Florida Mission Period Sites: Aggregated by Design . . . Cultures Represented by Identifiable Aboriginal Ceramics at the Eight Florida Mission Period Sites Non-ceramic Spanish Artifacts Compared between Spanish Mission Period Sites in Florida . . Floral and Faunal Remains Preserved at the Different Mission Sites Reported in Florida . 271 273 285 290 295 297 301 303 307 312 LIST OF FIGURES Figure 1. General Geomorphological Areas of Florida and Location of Certain Eastern and Western Timucuan Tribes and the Apalache . . .... 22 Figure 2. Hypothetical Flow Chart of Prehistoric/Protohistoric Timucuan Economic System . . ... .36 Figure 3. Location of Selected Excavated Mission Period Sites 82 Figure 4. Contour Map of Vicinity Around Baptizing Spring .101 Figure 5. Sites Adjacent to Baptizing Spring . ... 110 Figure 6. Baptizing Spring Site Plan .. . . .. .120 Figure 7. 1978 Excavations and Location of Transit Stations and Bench Marks . . . . . 123 Excavation and Floor Plan of Structure B Excavation and Floor Plan of Structure A Excavation and Floor Plan of Structure D Excavation and Floor Plan of Structure C Clay-lined Feature . . . Profile of Clay-lined Feature . . Cultural Features in Central Portion of T Simplified Examples of Use Wear . . Generalized Lithic Artifact Forms . Coral Core Gouging Tool . . . Copper and Glass Ornaments . . Religious Medallion Found in Structure C Ichtucknee Blue on White Plate . . Santo Domingo Blue on White Handled Bowl xiii r . 131 . . 136 . . 141 . . 144 . . 147 . . 147 ench #1 148 . . 152 . . 155 . 165 . . 171 . 173 . . 177 . . 179 Figure Figure Figure Figure Figure Figure Figure Figure Figure Figure Figure Figure Figure Figure Figure 22. Figure 23. Figure 24. Figure 25. Figure 26. Figure 27. Figure 28. Figure 29. Figure 30. Figure 31. Figure 32. Figure 33. Figure 34. Figure 35. Lip Profiles . . . . . 191 Surface-Scraped and Impressed Ceramics ...... 194 Loop Cross Motif Complicated Stamped Ceramics .... 196 Solid Cross Motif Complicated Stamped Ceramics 197 Rectilinear Complicated Stamped Design Motifs 198 Curvilinear Complicated Stamped Design Motifs 201 Curvilinear Complicated Stamped Design Motif and Cross-Incised Sherd . . ... .203 Identifiable Paddle Variations: Groups of More than One Sherd Each for Cross Motif Complicated Stamped . . . . . 207 Jefferson Ware Pinched Rims . . ... .212 Miller Plain Bowl from Structure A . ... 217 Colono-Indian Ceramic Forms . . .. .219 Colono-Indian Ceramic Sherds: Basal Profiles .. 220 Partial Pig (Sus scrofa) Carcass . . .. .229 Bone Counters or Gaming Pieces . . .. 232 xiv Abstract of Dissertation Presented to the Graduate Council of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy POLITICAL AND ECONOMIC INTERACTIONS BETWEEN SPANIARDS AND INDIANS: ARCHEOLOGICAL AND ETHNOHISTORICAL PERSPECTIVES OF THE MISSION SYSTEM IN FLORIDA By Lana Jill Loucks June 1979 Chairman: Charles H. Fairbanks Major Department: Anthropology There has been no published archeological research which has inves- tigated both Spanish and Indian sectors of mission villages in Florida. It has either been impossible to distinguish these areas or the reports of such possible investigations have been very preliminary. This study was designed specifically to examine acculturation processes and Spanish- Indian interaction during the mission period in northern Florida (ca. 1606-1704). Concepts from economic anthropology and organizational theory were employed in examining ethnohistoric data in order to for- mulate a model of political and economic change in Timucuan society. Hypotheses relevant to archeological investigation were generated on the basis of this model. Prior to Spanish arrival, the Timucuan politico-economic system appears to have been based largely on balanced, reciprocal transactions and share-out and mobilization forms of redistribution. Early Spanish- Indian interactions seem to have conformed to this system but, as time xv progressed, interactions became increasingly unbalanced. Owing to dramatic demographic disruptions and the decreasing ability of Spaniards to meet native economic and behavioral expectations, the mission system declined rapidly. Ultimate collapse of the Florida mission system was probably due more to internal factors than to external ones. Archeological research at Baptizing Spring, a Utina mission site in Suwannee County, Florida, was carried out to investigate hypotheses relating to Spanish endorsement and perpetuation of native politico- economic roles. This site may have been the early 17th century mission of San Augustin de Urica (ca. 1610-1656?). Patterns of Spanish and aboriginal artifact distribution between two aboriginal and two Spanish structures suggest that Indians obtained primarily ornamental items from Spaniards. European-origin and aboriginal prestige goods, hypothetically identified in the model and through previous research, were found to cluster in one of the Indian dwelling areas. This suggests that native prestige goods maintained their symbolic significance and that European goods of similar types provided Spanish reinforcement of aboriginal roles and status. In ad- dition, it was found that Indians had access to introduced domesticates but that these may have been restricted to high-status individuals. Ar- tifact assemblages differed significantly between Indian-Indian and Indian-Spanish structural areas suggesting that Spaniards had restricted access to certain food resources, non-local, and locally manufactured goods. Data also suggest that demographic upheaval and ,population shifts may be represented in the archeological record. The Baptizing Spring site was compared to other excavated mission period sites in Florida. On the basis of these comparative data, it xvi appears that this mission site did not enjoy the relatively greater wealth of larger, more important missions in Apalache and coastal Northeast Florida. Necessary information is not available from these other mission period sites to substantiate or reject the hypothesis that introduced technological items were dispersed among Indians rather than restricted to Spanish or Spanish-supervised usage at haciendas, ranches, and missions. Such items were not found among artifacts at Baptizing Spring where basically traditional technological, subsistence, and social patterns appear to have been retained. The only evidence of the presence of European weapons -- firearms -- was recovered from the postulated high-status Indian dwelling. xvii CHAPTER ONE INTRODUCTION Research concerned with Spanish-Indian interaction in Florida has suffered from a lack of clearly stated theoretical basis. The study of acculturation is usually mentioned as a working objective but by itself acculturation is little more than a general term which describes a par- ticular kind of culture change. It is the processes, the means by which change is initiated and reactions to these means, that dictate the direc- tion of culture change. There is little doubt that a concerted program of directed change brought native-Floridians into the Spanish colonial system. The degree to which Indians were acculturated, however, has been argued and the actual kinds of interactions which took place have not been examined in detail. A Spanish mission site was discovered in Suwannee County, Florida, in 1976 following clearing and bedding activities for pine planting by Owens-Illinois, Inc.. Considerable exposure of the site left it open to local collectors,who are extremely active in this area,and to erosion. Hasty excavation of the two presumed Spanish building areas was performed by Dr. Jerald T. Milanich of the Florida State Museum. In the ensuing two year period, fairly limited documentary research was undertaken with the intent of continuing excavation at the site. In 1977, a survey of the surrounding area (Loucks 1978a) revealed six sites within 500 m of the mission. These sites were partially surface collected using various sampling techniques as it was hoped that they could be temporally and functionally linked with the mission site. 1 Dr. Charles H. Fairbanks of the Department of Anthropology, University of Florida, applied for and received a grant from the National Endowment for the Humanities. Further excavation in the aboriginal sec- tor.of the mission village was planned in order to examine the lifeways at a Spanish-Utina mission and the material correlates of acculturation. The aims in the proposal were to construct a general picture of the shared influences on material culture of both Spaniards and Indians and to examine the processes by which acculturation was accomplished. Excavations in both Spanish and Indian living areas had never been car- ried out at a single mission site in Florida, therefore no statements could be made concerning the functioning of a mission as a whole unit. This dissertation focuses on the Baptizing Spring site (8 Su 65) as the testing ground for certain hypotheses concerning interactions between Spaniards and Indians. The theoretical orientation derives largely from anthropological economics and its related fields of interaction, social exchange, and organizational theory. Acculturation Conceptually, acculturation entails both processes and results of contact between cultures. In practice, it is difficult to study because to do so requires an holistic approach. This is especially true when formulating models to implement directed culture change. Acculturation studies have been associated primarily with British and American functionalism (Plog 1977:26). American interest was sparked by the growing conviction that diffusion did not fully explain sociocultural change. In England, the problem was enhanced through an awareness of forced cultural changes in colonization efforts. In 1936 the American Anthropological Association held formal discussions regarding the subject's suitability for anthropological investigation. Agreement on central issues was necessitated by involvement of anthropologists in American Indian administrative problems. Later, World War II provided impetus to acculturation awareness as forced cul- ture contacts occurred and post-war issues of decolonization had to be faced (Bee 1974:94, 95). In view of the contemporary concern with applied anthropology at most research institutions, it is difficult to realize that it was necessary to formally recognize contact culture change as an appropriate topic for anthropological attention. Acculturation studies have figured in sociocultural research for at least forty years; in anthropology these studies have been primarily ethnohistorical and ethnological in nature. Such works include Bohannan and Plog's (1967) Beyond the Frontier, Everett Rogers' (1969) Modernization Among Peasants, and the well-known volumes by Linton (1940), Foster (1960), and Spicer (1961) which spurred and provided concepts for acculturation study. Many studies have concentrated on the pressing problems brought about by economic development: Nash's Machine Age Maya (1958) and Salisbury's (1962) study of technological change in New Guinea are two such examples. Impacts of political and economic change and the in- troduction of new technologies, health care and education programs, changed food crops and material goods have all been studied either before or after the fact. Directed change, both at home and abroad, is a major governmental preoccupation. R.L. Bee (1974:98-106) has summarized four distinct facets of accul- turation studies: cultural systems, contact situation, conjunctive 4 relations, and acculturation processes, Each culture system participating in contact situations exists as a separate, independent entity prior to contact. Within these systems, certain properties act to maintain independence. Physical or "subtle" boundary-maintaining mechanisms exist, internal structure is flexible within a culturally prescribed range, and self-correcting mechanisms affect the ways in which forces of conflict are balanced by forces of cohesion. The contact situation, as defined by Bee (1974:102), involves ecological and demographic parameters which influence the outcome of acculturation. Technological capabilities and environmental limitations of the recipient group are major features that determine which tech- nologies and goods will be accepted. ,If new techniques and practices are not adopted, the explanation may be that the cost of doing so is too great, rather than that the recipient's behavior is too conservative (Schneider 1974:192). Demographic variables concern the number of people or groups involved in the interaction, their ages, and their sex. In some contact situations, interaction is limited to males in a certain age group (e.g. fur traders). In other situations, primary interactions may be between males of the superordinate group and females of the subor- dinate group. Such was the case in St. Augustine where Spanish men, largely soldiers, married Indian women (Deagan 1974). Bee's "conjunctive relations" (1974:102-103) are composed of two as- pects: (1) structural limitations and (2) "filtering" of information. The-former refers to the limitations placed on interactions by the con- text of the interaction, be it religious, economic, militaristic, or a combination of these. Viewing contact in this manner enables the definition of paired relationships such as "buyer-seller" and "missionary- convert." The recognition of these paired relationships facilitates the study of acculturation using a transactional orientation which can simplify model formation. The second aspect is similar to features in Foster's description of conquest culture situations. Only a small part of the totality of traits and complexes that comprise the donor (superordinate) culture are introduced. These are further diminished in the geographical region of the recipient (subordinate) culture (Foster 19.60:227). Priests, for instance, participated in a limited part of Spanish culture. Regular order priests acted within monastic spheres entirely different from the public sphere of the secular priests. Each group had their dif- ferent tasks and roles defined by the Church. In Florida, as in other parts of New Spain, military and secular officials added a further dimen- sion of Spanish culture which was restricted to males of differing ethnic and economic backgrounds. The recipient culture also may present only a partial rendering of the total system. Certain activities may be hidden from outsiders or only superficially represented. The final facet of acculturation studies involves the processes themselves, several of which have been subsumed under the general categories of diffusion, evaluation, and integration (Bee 1974:104). Different responses of populations to contact situations are seen in terms of a typology of processes or outcomes: cultural creativity, cultural disintegration, reactive adaptation, progressive adjustment (fusion and assimilation), and stabalized pluralism (Plog 1977:29). Spanish colonization was a directed contact encounter: "societies [were] interlocked in such a way that participants in one culture [were] subject not only to sanctions in their own system but also to those operative in the other system" (Spicer 1961:520). Directed con- tact is characterized by effective control of some type and degree by members of one society over members of the other with certain behavioral changes sought by the superordinate group. Changes which occur, however, are determined by both cultural systems (Spicer 1961:520). Archeological Acculturation Studies Plog (1974:8) has argued that the area in which archeologists are best able to employ their talents is the study of change. Basically, four paradigms have dominated this field: evolutionism, cultural ecology, behavioralism, and acculturation (Plog 1977:25). In prehistoric archeology, culture contact studies have been approached through the effects of trade and conquest/population movements. It has been dif- ficult, however, to distinguish changes brought about by different kinds of contact. A particularly appropriate example concerns the appearance of complicated stamped ceramics that reflect Georgia design motifs and styles in northern Florida during the late prehistoric/protohistoric period. It is not known whether the appearance of these ceramics is related to diffusion of techniques, trade, or actual population mixing (Milanich 1978:75). The study of acculturation processes can be carried out at sites of known contact situations but such studies have been relatively few. Con- sidering the rich colonial history of the United States, this gap in archeological research is somewhat surprising. One can guess, however, that there is some feeling that contact sites are less exotic than prehistoric sites and more bothersome than strictly colonial European or American sites. Particularly because archeologists attempt to understand cultural process from the examlnntion of material objects, the European-Indian sites would seem to provide excellent opportunities for the study of culture change. These sites hold the physical results of two or more very different cultures coming in contact and coexisting for a usually discoverable period of time. The introduction of goods can be related to their function and the contact situation (e.g. French fur traders who had sporadic contact with Indians and were not interes- ted in precipitating specific social changes versus Spanish missionaries who had very definite plans for changing Indian life). Thus, hypotheses concerning their impact and the cultural processes which accompanied introduction and acceptance can be made. Supplied with a substantial historical and anthropological background, these hypotheses can be for- mulated prior to field research and tested. Perhaps contact sites have received less attention than fully prehistoric sites because historic archeology is of relatively recent interest. Many historical archeologists have yet to agree on, or realize, what it is that they should or could be doing (Moran 1979). There is also a theoretical dichotomy between those who think historic archeology should be historical versus those who feel it should be an- thropological. A recent symposium on acculturation studies held at the 1979 Society for Historical Archaeology Conference revealed that, with few exceptions, these archeologists are still describing material cul- ture, reading documents and probate inventories, and making little or no attempt to view their findings in anthropological terms or to offer processual interpretations. Exceptions included Keeler's (1979) attempt to apply systems theory to changes among the Chinook Indians (although he wasn't exactly sure how to go about it nor what to do with his data), Baker's (1979) study of Colono-Indian pottery and Catawba culture change, and Brown's (1979) study of French and Indian interaction in the Lower Mississippi valley. One of the few historical archeological works which has proposed and tested hypotheses of acculturation processes is that by Deagan (1974) wherein she examined the role of Indian women, married to Spaniards, as the primary agents and affectors of both Indian and Spanish material culture change. Specific Goals and Assumptions of this Study It is invalid to assume that two transacting groups reach an agreement on the basis of identical understandings, values, and expec- tations (Salisbury 1976:42): ". common membership in a single moral community can be seen as providing the sanctions that prevent the terms [of a transaction or interaction] from becoming too disadvantageous for the less powerful" (Salisbury 1976:44). The major assumption of this study is that two groups with different cultural and value systems have differing expectations of interaction behavior. In situations marked by disparity of power and cultural complexity, the donor group changes its behavior in some degree but the major changes occur in the recipient group's behavior (Foster 1960:7). If, however, cultural complexity and power are not greatly disparate one might expect less behavioral change and greater conflict as both groups act to maintain their own systems. Conflict will arise when either side refuses to yield over a situation where values and behavioral expectations clash. Some changes will be superficial if practices and beliefs of both groups are similar. A relevant example is the substitution of Catholic saints and religious figures for aboriginal ones in Mesoamerica. On the surface, Catholicism replaced native religion yet Amerindian statuary, beliefs, and behavior remained, for the most part, unchanged. If negative reinforcement is a factor, the behavior in question may simply "go underground" and appear to have been removed as in the case of kiva ceremonialism among the Rio Grande Pueblo (Dozier 1961:95). The working hypothesis of this study is that Spanish and Indian behavior and expectations of behavior on the part of each group did not change and that this lack of change created conflict and contributed strongly to the internal col- lapse of the mission system in Florida. It was earlier stated that the study of acculturation requires an holistic approach. Archeological and historical information, however, present only a fragmentary picture of past cultures and it is usually impossible to perceive every aspect of a cultural system. Since economics ties together political, religious, economic, and social or- ganization, an anthropological economic approach was adopted. Another factor which dictated this approach is the obvious truism that artifacts and their distribution are the physical results of economic activity: production, transaction, distribution, and consumption. Viewing contact situations in terms of paired relationships (see above) also involves economic theory which deals specifically with interpersonal and inter- group relationships. The following chapter develops the theoretical basis -- derived from economic anthropology and organization theory -- for the hypotheses. Chapter Three presents ethnohistoric data on the Timucua during the early contact period and throughout the mission period. Economic and political conditions in Spain are briefly discussed and models of pre-contact Timucuan economic systems and mission period interactions are proposed. Changes, or lack thereof, In inter;cti onal behavior and expectations throughout the Franciscan residency at the Florida mission (1573-1704) are discussed. Finally, hypotheses formulated from the documentary evidence and theoretical data are presented at the end of this chapter. Chapter Four reviews mission period archeology in Florida and offers a discussion of inferences and conclusions reached by previous inves- tigators. Information known about the Utina Indians is presented and the Baptizing Spring site is very tentatively identified as a documented mission of the first half of the 17th century. In addition, an overview of the 1977 survey near Baptizing Spring and excavation data from the 1976 and 1978 field seasons are discussed. Chapters Five and Six detail structural and artifactual data (ex- cluding material from surface collection at Baptizing Spring) from the mission site and survey sites. The mission data are described in Chapter Five and interpreted in light of the hypotheses in Chapter Six. Also in this latter chapter, the survey sites and other mission period sites are compared to Baptizing Spring. Chapter Seven presents a brief summary of the goals, hypotheses, and tested outcomes of the research project. A description of Spanish-Indian interaction as perceived archeologically at mission sites, especially at Baptizing Spring, is presented. The ethnohistorical analysis presented in Chapter Three is an in- tegral part of this thesis since it established the research framework employed in the study. Only selected aspects, however, are testable in an archeological situation. Through documentary analysis it was found that (1) economic and political controls were major cohesive factors of the Floridamission system, and (2) the mission system in Florida col- lapsed largely because of internal dissension brought about by the 11 failure of Spanish agents to meet Indian expectations of "proper" behavior and their economic demands, not because of external forces in the form of Yamassee and Carolinian raiders. The archeological thrust of this research, also based on documentary evidence, was that Indians and Spaniards attempted to maintain traditional political subsystems by differentiating access rights to European goods. CHAPTER TWO CONCEPTS OF ECONOMIC ANTHROPOLOGY AND POLITICAL ORGANIZATION A material transaction is usually a momentary episode in a continuous social relation. The relation exerts gover- nance: the flow of goods is constrained by, is part of, a status etiquette (Sahlins 1965:139). The above statement embraces the essence of economic anthropology: the study of exchange embedded in the study of social relationships between groups or individuals. Herein lies the primary difference between economists and anthropologists. The former deal largely with material goods and services -- measurable entities -- while recognizing the importance of unmeasurable social "preferences." The latter em- phasize the intangible social aspects of exchange. As stated by Firth (1970:4), the material dimension of an economy is a basic feature but the significance of an economy lies in the transactions of which it is composed and in the type of relationships which these transactions create, express, sustain, and modify. Although many economists working in anthropology downplay "social invisibles" (Pryor 1977:95) such as love, prestige, and status, many anthropologists working in economics agree that these intangibles are just as important as quantifiable commodities. A recent development along these lines is the appearance of what has been dubbed "transactional" or "social exchange" theory. Social exchange theorists include human animate values along with inanimate and animate non-human objects in their analyses (Schneider 1974:20). Social exchange describes a transaction of material or social value in return for obligations expressive of subordination (subservience, deference, clientship, or respect) or alliance manifested by expressions of respect and friendliness if the social exchanges off-set each other (Schneider 1974:148). The outcome, then, is determined by the value of the material or social element exchanged. Some of the distance between economists and anthropologists can be lessened if the distinction between material and social is replaced by the more general idea of "property" where property is defined as rights in things rather than things themselves. If this is done, economics would be definable as the study of allocation of property (Schneider 1974:148, 152). Economics, however, is more than allocation. It also entails management, production, distribution, and consumption of resources. Social resources, in terms of access to goods and services (Wilmsen 1972:2) as well as relationships, are just as critical as natural resources. Modes of Exchange Since Sahlins (1972) defined and popularized the three states of reciprocal interaction, the terms and their descriptive foundations have been argued and reworded ad nauseum. It is probably true that no major theoretical strides have resulted and that reciprocity is basically conceived of in the same light as previously. True to his economic background, Pryor defines reciprocity as exchange in which the forces of supply and demand are masked (as opposed to market exchange where these forces are overt). He precludes possible balancing with "social invisibles" and limits reciprocal interactions to situations including counterflows of goods and services of more or less equal value (Pryor 1974:186). Sahlins invited argument primarily by describing a "negative" reciprocal transaction since in doing so he contradicted the very meaning of reciprocity: flow and counterflow. His selection of the term negative, however, pertained to the social context and function of a particular type of transaction, "the attempt to get something for nothing with impunity" (Sahlins 1972:195). Examples of such behavior include theft, gambling, stealing, and bargaining. Schneider (1974:154) attempted to describe negative reciprocity in more lucid terms as ex- changes which lack governing norms. Even this is incorrect, however, as there are socially prescribed situations in which negative reciproc- ity is acceptable or unacceptable. In this study the concept of negative reciprocity will be preserved intact, recognizing the terminological ambiguity but accepting it as a concept with which most anthropologists, even opponents, are familiar. Generalized reciprocity is subject to norms which dictate sharing of wealth and resources without resort to rational calculation of value or gain (Schneider 1974:154). The unmodified form would describe "free gift giving" and other variants include generosity, hospitality, and helpfulness, in which there is neither immediate nor future expectation of return (Sahlins 1972:193). Return to the giver, however, consists of .the social theorists' manifestations of subservience, indebtedness, or alliance. The "true" mode of reciprocity, balanced transactions, is simply exchange with its implied characteristic of counterflow of goods and services from one party to another (Pryor 1977:27). Balancing con- notes exchange of equally valued elements but it must be remembered that "balance" depends on the range of socially accepted exchange ratios. Cultural norms serve to ensure peaceful and honorable behavior in trans- actions (Schneider 1974:154). Balanced reciprocity is also subject to value and time limits which may terminate further interaction pos- sibilities (simultaneous exchange of the same type of goods) or may guarantee future exchange -- time-lapse between counterflows of unequally valued goods (Sahlins 1972:194-195). 'Some economic anthropologists feel it is preferable to view "balanced reciprocity" as successive transactions (Salisbury 1976:48). Exchange Spheres Exchange or transactional spheres are composed of differing material items and/or services and may be further distinguished by differing modes of exchange. Each sphere is distinct from each other sphere by virtue of the goods or services it encompasses and the exchange modes operative within it. Cultural classification of material items into subsistence and prestige categories usually indicates the presence of at least two different spheres (Bohannan and Dalton 1965:5-6). Prestige sphere is a phrase covering a multitude of individual and group transactions, ceremonies, and goods which are "honorific" because they symbolize position, status, rank, reputation, and power (Dalton 1971a:14). Items in a prestige sphere are segregated from transactions concerning ordinary goods such as those within a subsistence sphere (e.g. foodstuffs) except in emergencies such as famine when valuables may be sold to outsiders (Dalton 1971a:15). In the latter case, prestige goods may become "devalued" as other necessary goods suffer crucial scarcity. The significant characteristic of exchange spheres is that, under usual circumstances, only goods within the same sphere are exchanged. It seems to be universal that various spheres are hierarchically ranked on the basis of moral evaluation. Institutionalized situations exist in which spheres are "over-ridden, situations in which items are 'con- verted' from one sphere to another." Conversions are regarded as morally good or bad, converting "up" or "down," rather than as skillful or un- skillful (Bohannan and Dalton 1965:8). Economics, Prestige, and Power Probably the most important "social invisible," and the one which Pryor believes he has shown to have inadequate causative power in deter- mining economic activity, is prestige. The position of individuals in power is established, continued, and constantly reinforced by prestige that derives from elaborate display and consumption of economically valuable goods (Herskovits 1965:462). This belief embodies the economic act of conspicuous consumption yet Herskovits emphasizes intrinsic value rather than social value, and the two are not always synonomous. Dalton (1971a:14) maintains that prestige goods are "intensely social because they rearrange [emphasis mine] one's position in society, one's rights and obligations." This is tantamount to saying that it is goods which decide status and role rather than one's access to prestige goods which validates rank and prestige (Schneider 1974:147). Recognizing patterns indicative of differential access to and distribution of goods is a common goal of archeologists studying ranked societies. Although in- heritance patterns may accord individuals rights to certain goods, it is these rights which validate position and the goods themselves function as symbols of these rights of access. The right of acquisition determines the nature of the result, not the acquisition or ownership per se. Two types of politico-economic interactions need to be discussed but it is first necessary to distinguish between power and authority. Power entails the ability to forcefully control or influence a second party and this power resides in control of valued items (Emerson in Hall 1972:205). Power relations arise out of both positively and negatively balanced exchanges and also out of unbalanced transactions and open conflict (Whyte 1971:172). Authority, on the other hand, lacks force: directives or orders are followed because of the belief that they ought to be followed (Hall 1972:207). Authority, then, is positively reinforced by society while power is negatively reinforced by the governing group or individual. Prestige goods validating power and authority may be different: goods exacted through tribute payments on fear of punishment for failure to render symbolize power whereas other prestige goods accorded on the basis of respect, rank, or in- heritance rights reinforce authority. The effective establishment of authority obviates a need for overt sanction in daily activities since authority is sustained by creating social obligations. If a superior commands voluntary obedience from subordinates he need not induce them to obey by promising rewards or threatening punishment. "Use of sanctions undermines authority" (Blau 1971:160,161). Authority involves the exercise of social control which rests on willing compliance of subordinates with certain directives of superiors (Blau 1971:158). The linkage between authoritarian control and economic activities is succinctly provided by Mary Douglas' concept of licensing in which authority serves to protect vulnerable areas of an economy. Political-economic "license," although often tacit (i.e. unsanctioned), creates monopoly advantages for those who receive the benefits of it, both superiors and subordinates (Douglas 1970:131). In her words, "both parties become bound in a patron-client relation sustained by the strong interests of each in the continuance of the system'." Economic Archeology Previous archeological and prehistoric economic studies have dealt primarily with ecological and geographical models of interaction. The European "school," in general, includes (1) the development of agricul- ture, (2) settlement pattern and land use at different periods, (3) seasonality, and (4) trade and its motivation among current themes in archeology (Sieveking 1977:xv). Social exchange models used are, those derived from geographical theory: central place, locational, and network analyses (Sieveking 1977:xxi). Higgs' edited volume Paleoeconomy (1975) equates economy with resource exploitation and, although topics include ethology and human exploitation behavior, the articles concen- trate on environmental description and exploitation, site catchment, sub- sistence, settlement patterning, and territorial and ethological analysis of animal resources. North American and Mesoamerican interests in prehistoric economics also have concentrated on trade networks and resource utilization. The sophistication of analytical techniques such as neutron activation and petrographic analysis have enabled delineation of interregional trade networks but the inability or unwillingness to hypothesize and test behavioral elements of exchange from presence and distribution of ar- tifacts has resulted in the exclusion of a basic feature of any economic system. Granted, it is often difficult if not impossible to extract human behavior from material remains, but this is the proposed goal of numerous archeologists. It is no longer valid to offer excuses on the basis of lack of models when such studies as Salisbury's (1962) on technological change in New Guinea, Barth's (1970) "Economic Spheres in Darfur," Bohannan's work on the Tiv (1955), and Dalton's numerous studies of market systems, to mention a very few, provide several case studies of economic processes and concepts illustrated by changes and patterns in material culture. This is particularly true when ethnohistoric data are available on which to build hypotheses con- cerning economic systems. It is pointless to list and discuss the numerous archeological endeavors in describing and modelling economic interactions. A brief glance through American Antiquity, archeological textbooks, and other sources reveals that much of the work done has been concerned with trade. Perhaps the advocation of regional studies and the increasing number of surveys have been influencing factors. Rarely, however, is one able to find articles which deal with actual social and behavioral attributes of economic interaction. Exceptions include a considerable amount of ,work done on the significance of the distribution of prestige goods. Peebles (1974) was able to define several status groups on the basis of differential distribution of elite goods associated with burials.and on the basis of burial placement in ceremonial center mounds, smaller mounds in villages, and beneath house floors. Settlement patterns and features have also been used to define periods of conquest and expansion and to infer levels of economic development (Sears 1968:147). In the southeastern United States, William Sears used attributes of artifacts, particularly ceramics, to propose presence of craft specialization which reflected wealth and organization within societies (Sears 1961:22) and status hierarchies reflected in "sacred and secular" dichotomization (Sears 1973). Recently, Kohler (1978) used changing patterns of trade/ elite and utilitarian ceramic distribution to delineate different status-associated living areas within a Weeden Island period ceremonial center village. In historical archeology, Otto (1975) measured ceramic type and vessel form diversity, as well as differences in house plan and diet, to show correlations between status and access to goods. Social and economic implications of differential interaction are not usually studied; rather, they are taken as given. A study by DeGarmo (1977:157), however, concentrated on discovering social groupings as defined by archeological measures of variability in behavior. He used distribution of certain artifacts to delineate production and dis- tribution groups within a single settlement and to identify three inter- pretive "possibilities" relative to manufacture and consumption of goods. Even going this far, the behavioral correlates and social significance were not discussed. In Chapter Three it will be seen how ethnohistoric data can be used to place past cultural economic systems into anthropological perspective and how models of economic behavior and culture change can be con- structed. The nature of the sites under study and the existence of his- torical records facilitates the kind of analysis advocated above and it is recognized that this approach is not always possible. Given more interest in social exchange theory by historical ,archeologists, it may someday be possible to apply formulated models to prehistoric sites. CHAPTER THREE PROTOHISTORIC TIMUCUAN AND SPANISH MISSION PERIOD ECONOMICS Many subgroups composed the larger Timucua group. The Saturiwa and Agua Dulce were included among the Eastern Timucua and the Yustega, Utina, Potano, and Ocale comprised the Western Timucua. Primary dif- ferences between the various tribes appear to have derived from environ- mental situation. Eastern Timucua occupied lower, marshier, and geologically younger (less fertile) soils than did their inland counter- parts. The coastal saltmarsh (especially along the northeastern Florida coast) and estuarine habitats, however, were fertile beyond any natural soil configuration in Florida. Western Timucua inhabited more fertile soil districts in the Central Highland region, a strip which corresponds roughly with the 100 foot contour (Figure 1). North Florida aboriginal political organization was a chiefdom (as defined by Service 1975:80) characterized by hereditary inequality, primogeniture, permanent leadership, and hierarchical authority. Chief- doms have been identified as redistributive societies (Service 1962:144). A patron-client relationship is well-established between superordinates and subordinates and the former concentrate power independent of that allocated by the general populace (Adams 1975:228). The concept of redistribution can be described in terms of centric, or focused, trans- fers (unbalanced) characterized by the high degree to which they radiate to or from a single individual or single community-wide institution. This community-wide focal point is the distinguishing feature of centric 21. L.... .APAL CHE U, El COASTAL LOWLAND E CENTRAL HIGHLAND D TALLAHASSEE RED HILLS .' '.. Figure 1. General Geomorphological Areas of Florida and Location of Certain Eastern and Western Timucuan Tribes and the Apalache. transfers which can be one-way or two-way. Centric transfers are usually regressive in that goods and services flow from the poorer to the richer (Pryor 1977:34, 250, 280, 286). Recently, the concept of redistribution has been separated into four organizational forms col- lectively viewed in the past as "redistribution." Briefly, these are: 1. levelling mechanisms - 2. householding 3. share-out 4. mobilization institutionalized behavior that counteracts the concentration of wealth by individuals or groups (e.g. ceremonial obligations, potlaching); these mechanisms have no single formal structure but are distributive in their effects pooling and general consumption of goods produced under division. of labor characteristic of a domestic unit allocation of goods produced by cooperative labor to participants and owners of the factors of production recruitment of goods and services for the benefit of a group not coterminus with the contributing members (Earle 1977:215) To "share-out" can be added the allocation of goods to an "insurer," one who insures, at least in the minds of.the people, present and future yields on production. Redistribution in the form of mobilization is basic to ranked and stratified societies and should be interpreted as an essential mechanism used to finance the political and private ac- tivities of the elite population (Earle 1977:216, 227). As will be shown, Timucuan society manifested both share-out and mobilization redis- tribution wherein goods, services, and information were the "goods" redistributed. Timucuan social attributes included clan distinction, linked clans, and warrior/non-warrior distinction (Garcilaso de la Vega 1962:15; Swanton 1922:369). There were a limited number of primary chiefs whose influence was regional and a greater number of secondary, village chiefs. Both were generally referred to as "caciques" although tribal af- filiation was often designated through use of the most powerful cacique's name. "Nobles" were set apart from "commoners" by dress, behavior, and location of dwellings within a village. Copper ornaments, featherhead- dresses, and tatooing were common symbols of high status. Feather head- dresses also distinguished warriors from non-wariors during times of war (Garcilaso de la Vega 1962:15; Le Moyne in Bennett 1968:24). High- ranking individuals were carried on litters during state affairs and special benches or shelters were prepared for them when they alighted (Le Moyne in Bennett 1968:93). According to Garcilaso de la Vega (1962: 170-171), a cacique's residence was larger than others and placed on a natural or artificial mound. Nearest to him, sometimes her, and around a central plaza lived other high-status persons. Lower status families lived further away from the central area. Unfortunately, Garcilaso de la Vega should be invoked with caution since his information was not first hand. Among the Eastern Timucua, however, Le Moyne (in Bennett 1968:62) described a similar village patterning although the cacique's dwelling was centrally located within the village; higher status in- dividuals did live nearest him. Public meetings which were presided over by the cacique, shamans, and elders, have been described in detail elsewhere (see Swanton 1922: 359; Le Moyne in Bennett 1968:60). Prescribed seating arrangements aid formalized order of presentation and ritual drinking of cassina (Ilex vomitoria) were characteristic of these meetings. A cacique enjoyed considerable power and authority; few early accounts failed to note his "nobility," eloquence, and pride. Le Moyne, Laudonniere, and later, Father Pareja (Milanich and Sturtevant 1972), bore witness to his ability to.command tribute and obedience through fear of punishment. The Timucua were semi-sedentary, central-based horticulturalists and hunters and gatherers. Two crops of maize, the primary vegetable staple, were planted each year during the late spring and summer. Other produce included beans, gourds and other squashes. Maize was grown in communally farmed fields under direction of the cacique or his rep- resentative. Other crops were grown in gardens adjacent to individual dwellings (Ribault 1964:73). All, or most, villagers worked to clear, sow, and harvest the "cacique's field." Swidden techniques of clearing were employed and fields were used for consecutive plantings until fer- tility declined below productive levels (Covington and Falcones 1963: 148; Laudonniere in Sauer 1971:205). Late fall and winter months were spent in the forests, hunting and foraging (Le Moyne in Bennett,1968:44). Wild foods such as nuts, persimmons, wild plums, berries, and others probably added considerably to both winter and spring-summer diets. Foods were dried and/or smoked to be saved for winter rationing. Early explorers, biased by the need to understand political under- pinnings, generally concentrated on interactions between different status groups, principally on the level of elite versus subordinate. When no rank was differentiated and the Indians were treated as an ethnic entity, Father Escobedo (writing ca. 1589-1600) noted that within a village, Indians treated each other with generosity (Covington and Falcones 1963: 143, 148, 151). This reflects an ideal (Eastern) Timucuan concep- tualization of behavior since stealing was common although supposed to go undetected (Le Challeux in Lorant 1946:94, 96). To'refuse a request was dishonorable; food was freely distributed among the "poor"; a cacique must never act "greedy" (Covington and Falcones 1963:143). Nothing is known about kinship ties and the obligations entailed. Generosity may have pertained to all goods but, acknowledging the res- tricted possession of prestige items, it is probable that generosity operated only in terms of food, and, possibly, basic utilitarian goods. Food is the one material good most usually linked with generalized reciprocity and hospitality (Sahlins 1972:216). Food was given freely to outsiders, Europeans, or was traded (Le Moyne in Lorant 1946:36; Ribault 1964:77, 81). Exchange of food for non-food items may.have been restricted to interactions between outsiders and villagers as it is often considered improper to make such exchanges with one's kin. Items given freely between kin do not carry the same significance as items given outside the kin realm. Food distribution is particularly sensitive to injunctions for or against its sharing and trading (Sahlins 1972:216). Reciprocity, especially in its generalized form, reflects the imbeddedness of particular transactions in long- term relationships (Salisbury 1976:44) and blood ties may be stronger than simply long-term relational ties. It is noteworthy that foods prepared for winter provisioning were not available to the French at any rate of exchange (Le Moyne in Swanton 1922:359). Food exchange with outsiders may have been restricted to occasions when villagers lacked calculable reason to conserve or when a show of hospitality was politically expedient. There is little doubt, however, that Indians sought to gain from provisioning the French. Le Moyne (in Bennett 1968: 98) reported that they stopped bringing in provisions as soon as they realized that the French had no more goods to exchange. Father Pareja's 1613 Confessionario (Milanich and Sturtevant 1972), although written after the French had come and gone from Florida and the Spaniards had been established for roughly 50 years, has been used as a valued source of ethnographic information regarding aboriginal practices and behavior. Since the main purpose of Pareja's book was to provide questions which would reveal the continuation of non- Christian, Indian practices, the author considers it as a pertinent source to be used in this section. It is interesting that after more than half a century of contact with Europeans, the Eastern Timucua still retained many of their beliefs and continued in many of their obviously incompatible roles (e.g. sorcerers, shamans). A second "native" trait was gambling/stealing (Milanich and Stur- tevant 1972:33; Le Challeux in Lorant 1946:94, 96), both of which fall under Sahlins' definition of negative reciprocity. Gambling, however, is a neutral transfer and does not systematically affect the distribution of goods in society toward or away from greater equality (Pryor 1977: 255). The major cost of these transactions was prestige loss on one side and prestige gain on the other (Covington and Falcones 1963:148- 149). Neither stealing nor gambling was considered immoral by the Timucua. Either activity may be seen as a means of earning prestige, particularly during peace times when excellence in battle was not an open means of attaining it. The actual winnings or material gains were symbolic of the acheived prestige. Ownership of certain items may have been shared by kinsmen with bundles of rights attached. One does not know which goods were stolen consistently and which were not, nor what social ties linked culprit to supposed victim. The question of stealing may be one of European ethnocentricity rather than negative reciprocity. Elite versus non-elite interactions include those transactions between patient/client and curer/sorcerer and commoner and elite. The former are included under the assumption that sorcerers definitely, and curers and herbalists possibly, participated in a higher ranking than the average villager. Remittance for curing and spell-casting, although insured and inflated through threat of witchcraft, may be viewed as one flow in a balanced reciprocity (see Milanich and Stur- tevant 1972:30, 31). The initial flow issued from the curer or sorcerer in the form of the service or "good" purchased -- health, a marriage ceremony, or a spell. The test of balanced reciprocity is intolerance to one-way flow (Sahlins 1972:195). Intolerance is obvious on the part of the sorcerer or curer but must be hypothesized for the client. Presumably a client or his relatives could avenge a job poorly done: a spell or cure that failed or exacerbated the situation. A sorcerer could suffer prestige and clientele loss or be threatened by a competitor. It is difficult to believe that negative reinforcement was one-way. A cacique, with the inherited authority to receive tribute and obedience, and power to obtain it if challenged, reciprocated through the management of production and share-out redistribution. Supernatural confirmation of allocative rights supported through his shaman and social acceptance by his people allowed the cacique control over public granaries (Milanich and Sturtevant 1972:23-26, 31, 34). The authority structure, composed of chief and shaman, not only organized and directed labor in horticultural production but also ensured fertility in return for obedience and part of the yield. Control over maize fields and pub- lic storehouses would have reinforced chiefly position. The presence of public fields and granaries fostered village solidarity and subsidized labor, war efforts, feasting, provisioning of the "poor," and entertaining guests. Shamans who "tasted" the first corn and prayed over the lakes, arrows, forests, and fields (Milanich and Sturtevant 1972:23-26) further reinforced the dependence of low- status individuals on the elite. Furthermore, since a shaman or chief alone could open the granary, dependence was doubly insured. This mutually agreed upon interdependence constitutes licensing, as defined in Chapter Two (Douglas 1970). Additional services performed by:the cacique included alliance formation, arbitration of disputes, revenging war deaths, arranging marriages, and organizing war efforts. For these corporate utilities and because of his importance as a leader, provider, and distributor, the cacique could expect respect, obedience, and yearly tribute pay- ments of "pearls and other moneys made of shell and chamois [dressed hides,]" (Canzo 1600). Escobedo stated that a cacique was supposed to be generous but, as Sahlins (1972:210) notes, in chiefly redistribution the flow between chief and people is fragmented into independent, small transactions. A cacique may accumulate many goods but is required to give out more or. less. Accounts of aboriginal distribution do not indicate lower limits being exceeded to the point that a cacique lost his authority. The situation at the end of the mission period, however, suggests that such limits were recognized by Indians and that failure to distribute quantities of goods within prescribed limits could result in loss of position and concomitant authority. The loss of ability to en- force, however, had no little effect on loss of authority. Interregional interactions are poorly known. Garcilaso de la Vega (1962:253-254) gave the impression that there was a special group of long-distance traders dealing in common and/or elite goods. Ribault (1964:74-75) mentions getting gold and silver in trade with Indians south of the mouth of the St. Johns River but these metals could have been scavenged from shipwrecks. Le Moyne (in Bennett 1968:104-105) knew that the Caloosa near Tampa Bay were getting precious metals from wrecked treasure fleet vessels but contended on the basis of what he was told by the Saturiwa and Utina, that the Eastern Florida tribes received gold and silver from Indians in the Appalachian Mountains (Le Moyne in Bennett 1968:95, 99). He does not, however, mention long- distance traders among the Indians. It has usually been recognized that much, if not all, of the gold and silver obtained by Indians came from salvaging wrecks (Bushnell 1978:45). Since copper was a prehistoric trade item, however, there exists the possibility that some of the precious metals traded to Europeans did come from distant places. Whether or not there was an organized group of specialized traders can not be determined. In order for such a group of traders to have existed, there would have had to have been pan-Southeastern sanction of their activities so that safe passage through hostile territories would be insured. The prehistoric and early historic Southeastern Indians were well-known for their propensity for belligerent behavior. There are several pos- sible explanations: (1) these "merchants" were actually links in a sys- tem of trade partnerships; (2) as outsiders, not belonging to a par- ticular tribe, they existed outside the realm of inter-tribal hostility; (3) their activities provided scarce and valued items; and (4) traders 31 might have acted as spies. None of these need be mutually exclusive and quite possibly there were several reasons why traders were allowed to travel through many different regions; however the third reason is probably the most important. Mobilian, a trade jargon or kind of lingua franca, was reportedly spoken by all the tribes east of the Mis- sissippi. The Apalache were the only Florida tribe listed among those using the language (Haas 1975:257-258). The presence of a trade language would certainly suggest that trade interactions occurred over the Gulf States, at least, and there is no reason to think that such activities would be absent. It is interesting that the Apalache were the only group mentioned as speaking the language. Could this be an oversight, or lack of information, or could the Apalache have excercised some con- trol over trade goods coming into Florida proper? It certainly suggests that when the Indians told Le Moyne the gold and silver came from: "Apalatcy" (Le Moyne in Bennett 1968:84) they could very well have meant that it came from Apalache not the Appalachian Mountains as Bennett (1968) interpreted. The problem does remain, however, that there are no moun- tains in Apalache (western Florida). One final level of interaction that can be examined is loosely termed transactions between human and supernatural, the propitiation of natural elements which provided sustenance being the primary example. One of the most, if not the most, important shamanistic functions was the insurance of successful yields from lakes, fields, forests, etc. For their services, shamans received half the catch of fish, the first deer killed, the first corn, and so forth (Milanich and Sturtevant 1972: 23-26). These transactions represent a cyclical flow wherein shamans acted as intermediary agents between humans and supernatural forces. Supplication of the latter was returned as yield in resources to the people who returned part to the shaman in recognition of his role, thus regenerating and maintaining the cycle. Modelling Timucuan Economics On the whole, aboriginal modes of exchange related to political and maintenance organization appear to have been characterized by balanced reciprocity. Generalized reciprocity was typical within kin/ village contexts and seems to have been a feature of initiatory inter- actions between Indians and Europeans as well. The question arises as to whether or not "free" exchange was precipitated by mutual good wishes or if Indians were merely attempting to supplicate recognized superior power. In the latter case, then, the "gift" would have been balanced by intangible elements such as peace and freedom from ret- ribution. Trade with the French, at least, seems to have held no awe for the Eastern Timucua who had no qualms about refusing to trade. From documentary sources, the only goods which were consistently des- cribed as gifts given by Indians were foodstuffs and possibly other items used in every-day household activities. Negative reciprocity was reflected in activities such as gambling, possibly stealing, and most assuredly warfare. The ultimate motivation was the garnering of material items symbolizing prestige. War booty would not only add to a man's wealth but would also have added to his status. In all other activities exchange was more or less balanced, characterized by two-way flow of material and/or non-material elements. Determination of exchange spheres can only be hypothetical although it is highly probable that such spheres existed (cf. Bohannan and Dalton 1965:6). The obvious division is between subsistence and prestige/ luxury goods. The former would include all foodstuffs as well as food procurement and processing items such as hoes, digging sticks, bows and arrows, fishing paraphernalia, axes, knives, grinding stones, pot- tery vessels for cooking and storage. Weapons, which have dual func- tions in warfare and food procurement, might entail special injunctions concerning dispensation. Even when given in balanced transactions only goods within this sphere would be exchanged under normal circumstances. In emergency situations (e.g. bad harvest, crop destruction during war- fare), non-food items may be exchanged to obtain food or, if food is plentiful it might be traded with outsiders to acquire desirable ar- ticles which may indicate prestige ("conversion up"). The last case might cause the outsider to "lose face" while augmenting prestige for the Indian. Goods in the hypothetical prestige sphere would have included fish- bone counters (only on the East Coast?) and "green and red stones" (greenstone and hematite?) -- gamblingwinnings (Le Challeux in Lorant 1946:94) -- pearls, chamois, shell, cassina, feathers, metal ornaments, and litters. Although they were not exchanged, litters are included because they were important symbols of high status. Tobacco, which was smoked in curing ceremonies (Le Moyne in Bennett 1968:42) and on other ritual occasions, may also be included as a prestige item although res- triction of its usage is not certain. The same may be said for cassina; it was a ritual substance used in political meetings, ceremonies, and as harvest payments. Both maize and cassina could be classified as ritual items, the latter because it represented the cacique's authority, village solidarity, and supernatural favor. Prestige items were of at least three kinds: some goods reflected authority position in that they were restricted to high-status in- dividuals who had rights of access by virtue of their ascribed status. In so far as cassina and tobacco were ritual substances, they may have been associated with high-status usage more so than with usage by com- moners and, therefore, might be included under "authority" prestige goods. Other prestige items, principally those received by the cacique as tribute, reflected power. Lastly, acquisition of certain goods such as gambling winnings and war booty, including slaves and scalps, sym- bolized achieved status. Distribution of achieved status goods might not exhibit unequal distribution among the populace since, theoretically, anyone could gamble or kill. Presumably, however, goods acquired as war booty would be restricted to males, possibly within a certain age group. Copper ornaments were available only through trade networks and were restricted to high-status individuals ("authority-prestige goods"). Pearls, shell dippers, and dressed hides must have been physically available to anyone but were eventually accumulated by the cacique via tribute payments -- "power-prestige goods" associated with mobilization redistribution. Feathers were also available, in the physical sense, to anyone but usage was restricted to elite persons and warriors, represen- ting both ascribed and achieved rank. Feather headdresses were worn by warriors only (?) during wartime; at other times they would have been used by high-status groups. Perhaps during war the latter displayed ad- ditional symbols of status associated with their inherited positions. Certainly, behavior would have set elites apart from common warriors. The only good which definitely would have been limited due to non-local availability was raw or ornamental copper and possibly gold and silver. Except on the east coast of Florida, pearls would also have been non- local in origin. Figure 2 illustrates a simplistic view of material flows. In this diagram, traders are included parenthetically to indicate that their status as a group is uncertain. It is obvious that some prestige items and tribute goods had to be acquired by all individuals, or only those adults (males?) in a position to get them, but that these were monopolized by the cacique thus preventing competitive accumulation. These goods would be assigned prestige value only after acquisition by the cacique. Prior to that event they did not allocate prestige to the tributary. Additionally, these tribute goods plus other high-status items such as ceremonial whelk dippers, cassina, and gold and silver could be traded within or between regions allowing the cacique and other elites direct control of trade in luxury items. Wealth inheritance would have given nephews (and sons) of the elite a congenital advantage. Primogeniture would preclude dispersal of ac- cumulated wealth, concentrating material wealth and prestige within a relatively small group. "Young nobles" may also have had special oppor- tunities to build their positions as it was fairly common for heirs of caciques to act as special messengers and to acquire prestige through special "noble deeds" (se Garcilaso de la Vega 1962:125-126, 145, 154- 155). Peninsular Economic and Demographic Conditions (1482-1700) Between the years 1482 and 1700 Spain suffered serious population decline and major demographic changes. During the reign of Ferdinand and Isabella, all areas experienced considerable losses through emi- grationwith the exception of Castile where population increased (Vicens Vives 1969:291). Nationwide population drop over the 200 year period has been estimated at 30% to 40% -- from nine or ten million to six KEY: B WAR PLUNDER H RESTRICTED HIGH STATUS GOODS N NON-LOCAL GOODS O ORGANIZATION P PRESTIGE GOODS PR PROTECTION R PROVISION T TRIBUTE Figure 2. Hypothetical Flow Chart of Prehistoric/Protohistoric Timucuan Economic System. million (Moses 1894:125) -- or a loss of three million between the end of the 16th century and 1723 (Davies 1961:158). Excepting Andalucia, all major Castilian cities experienced serious population reduction from 1594 to 1646. By the latter date, almost all these cities had lost at least half of their population, many as much as 75%, most of whom moved to inland cities particularly in northern Spain. By 1680, Seville had undergone serious demographic and economic retrogression (Davies 1961:157). There were several reasons for this drastic population reduction, emigration being the major cause usually cited. Not only were some of the "best elements" being drawn off to Spanish armies and conquest (Davies 1964:23), but forced emigration of Moors and Jews, who were the primary agricultural workers and craftsmen and financiers, had ad- ditional impact on demographic and economic conditions. There are no reliable estimates of the number of conversos who left Spain in the aftermath of the Inquisition. Suggested figures put the total for the whole country at 500,000: 150,000 Jews and 300,000 Moriscos after the revolt of 1502 (Vicens Vives 1969:291). More than 80% of the Spanish population were peasants; urban workers constituted 10-12%, urban middle class merchants, citizens, and ecclesiastics 3-5%, and less than 2% nobility. Peasants, unable to make a living from the soil, moved to urban centers to become beggars and vagrants (Davies 1964:273; Vicens Vives 1969:293). Movement to cities and emigration of Moors precipitated a shortage of agricultural workers. Spaniards, who despised agricultural work as a job previously performed by the Moors, refused to take up the task (Davies 1964:273). The ranks of public officials and ecclesiastics swelled and in the mid-1600s the government declared it would no longer support the in- creasing numbers of priests and monks (Davies 1961:102). Circa 1500, 1.5% of the population (the nobility) owned 97% of the Peninsula and during the 16th century the religious class, about 2% of the national population, monopolized almost half of the national income (Vicens Vives 1969:293, 340). Until roughly 1540, sheep raising for wool and the textile indus- try dominated economic perogatives (Davies 1964:23; Vicens Vives 1969: 302). They became so important that a-special management board was es- tablished and agricultural production was severely hampered by national interests, loss of land to pasture, and more enticing incentives to con- centrate on sheep raising. Textile manufacture flourished in the beginning of the 16th century but around 1540 began to decline. Impor- tation of gold and silver from America had caused significant price in- creases (400% during the 16th century) creating the desire to buy in foreign markets and a concomitant decline in the quality of Spanish- produced goods (Davies 1964:266; Moses 1894:129). Added to this, the flow of bullion from the New World also began to drop off during the first quarter of the 1500s (Davies 1964:263). By the middle of the cen- tury, Spain no longer exported textiles but actually needed to import them to meet her own demands (Moses 1894:129). Perhaps related to this, the importation of hides from Buenos Aires, Cuba, and other parts of New Spain, became important (Davies 1961:150; Moses 1965:267; Vicens Vives 1969:357, 403). Spain then exported these hides or leather to other countries (Vicens Vives 1969:357, 358-359). Philip III began debasing coinage, which had already been debased to copper, in 1600. He further reduced the weight of coins in 1602 and required that all payments be made with copper. By 1605 very little sil- ver was to be found anywhere in Spain and the premium on it rose so high that continental trade was stifled (Davies 1964:266-267). During the reign of Charles II (1665-1700), Spain was sunk in deep economic depression. Very little was sent to America during the last decade of the 17th century except wine (which could not legally be made in the New World). Many goods were exported to the colonies from foreign countries under pretext of coming from Spain. These goods primarily included wax, spices, paper, cloth, and mercury. American exports to Spain consisted of hides, "chinaware" (Aztecan and Chinese), grain, tobacco, tropical drugs, copper, and mostly gold and silver (Barozzi e Berchet in Davies 1961:149-150). Spanish-Indian Interaction (1564-1650) The topic of Spanish and Indian interactions is the hardest to arrange since insights to economic and social transactions are scattered over numerous sources, both primary and secondary. It appears, however, that three general periods can be defined on the basis of topics covered by those sources which were reviewed. Letters and cedulas included in the Ethnohistory Index (P.K. Yonge Library of Florida History) that con- tain information on Indians concentrate on gifts made to Indians prior to about 1650. After that date, very little mention of "gifts" is made at all. During the last two periods used here, 1650-1675 and 1675-1704, priests, visitadores, and government officials spent more paper des- cribing actual situations and conflicts between Indians and Spaniards. Data are presented topically and chronologically In an attempt to show continuation of or changes in policy and attitudes. It was impossible to confine discussion to the Timucua, especially after the mid-1600s, since references to this group are relatively few. Latter 17th century accounts are basically concerned with the Apalache in western Florida (for reasons that will be discussed later). Until the early 1600s, con- tact with Western Timucua was sporadic; missions had not been established, therefore description of early interactions must be garnered from sources describing the Guale and Eastern Timucua. Although the Indian groups differed to some extent, it does not seem likely that Spanish policy would have been enacted differentially. The beginning date of the construction of Fort Caroline by the French (1564) near the mouth of the St. Johns River is given as the starting date for this evaluation since the French settlement spurred Spain to the first successful attempt at colonization. Most data, however, will extend from 1573 onward, after the Franciscans took over the mission field from the Jesuits. Pope Alexander VI issued a papal bull in 1501 giving the Crown a grant of ecclesiastical tithes in all newly found regions under the con- dition that sovereigns made themselves responsible for the introduction of Catholicism and maintenance of the Church and for the instruction and conversion of Native Americans. In 1508, Pope Julius II issued another bull conferring full patronage on Ferdinand and his successors (Haring 1963:167). It is popularly well-known that Juan Ponce de Leon landed in Florida and officially proclaimed it as property of the Spanish Crown in 1513. Two of the most famous entradas, that of Panfilo Narvaez (1528) and that of Hlernando de Soto (1539), brought Spaniards in contact with interior tribes. The results were disastrous, especially those which arose out of de Soto's policy of brutalizing the natives and destroying their villages and fields. Pedro Menendez de Aviles, who expulsed the French (1565) and became the first long-term governor of Florida (1565-1574), undertook the colonization of Florida for several reasons, the most important of which were the promise of economic gain and increased social position. Revenue to the Crown and the priveleged adelantado, good defensive location against enemies, and guaranteed profit from trade and agricul- ture were among the primary reasons for establishing the colony (Lyon 1976:45). The adelantado's agreement with the Crown included his res- ponsibility tobring natives to the Christian faith and loyal obedience of the king. The 1563 ordinances indicated that an adelantado was al- lowed to create two-generation repartimientos of Indians in each es- tablished village. They also provided that three-generation encomiendas could be granted to other settlers in areas aside from ports or main towns (Lyon 1976:50). Since the French had conceived good relationships with the Indians of the northeastern Florida coast, tensions were high between Indians and Spaniards. When the latter took control of Florida in the mid-l s, relations were primarily based on trade. Menendez was, however, to receive tribute from caciques in the name of the king. Serious evangelization was to wait for a more propitious time (Lyon 1976:118- 119).. Missions were to serve dual purposes along the Florida frontier: they were to be agricultural and religious schools (Haring 1963:183) as well as nodes in a defensive network which, it was thought, would serve as a buffer against French and British encroachment from the north. Indians were to supply the labor force necessary to construct physical defenses (such as the fort in St. Augustine), roads, and bridges. They would also provide the bulk of the subsistence support. Promotion of self-sufficiency supported by native cultivation was among the primary objectives (Rogel in Gannon 1967:33). There is no doubt that Spaniards regarded their duties to Church and God with utmost respect. Population decline in Spain (and the threat of Protestantism) made the duty of conversion more pressing in order to maintain the Catholic religion as an important source of power and enlightenment. The fact remains that Church and State were closely aligned and shared access to a great deal of potential wealth. Mission Indians provided bodies and souls which could encourage the realization of that wealth. Gifts and Trade Since the earliest peaceful attempts of the Spanish to win over the Florida natives, gifts had been offered as tokens of their friendship. Father Luis Cancer de Barbastro offered gifts which, although of little value to Europeans, "were highly prized by them [Indians] and much ap- preciated" (in Cannon 1967:11). Spaniards gave gifts not only to open relationships but also to placate (Geiger 1936:3). Brother-in-law and chronicler of Menendez, Solis de Meras (1923:184) reported that Guale Indians who arrived in St. Augustine in 1566 to receive gifts and food went away declaring war if they did not receive them. Expenses for gifts to both.Christian and "heathen" Indians were authorized by cedulas in 1593 and 1615 (Father Moreno 1654). Table 1 summarizes gifts and trade goods given and received by Indians. 43 Table 1. Gifts and Trade Goods Ixchanged between Indians and Europeans. Gifts Sources clothes, flour, tools blankets, knives, fish hooks, scissors, hatchets, glass beads, sickles mirrors, knives, scissors, bells, and "things highly prized" garments, beads, hatchets, machetes, (given to "principal Indians") corn, hoes, (given to Indians south of St. Augustine "to increase their estimation of us") Governo Canzo, 1597 Covington and Falcones, 1963 (Father Escobedo, original ca, 1589-1600) Solis de Meras, 1923 (original ca. 1566) Solis de Meras, 1923 Governor Ibarra, 1605 European Trade Goods jewelry, knives, scissors, axes Covington and Falcones, 1963 Indian Gifts* deer hides (painted and unpainted), meal, little cakes, roots (sassafrass?), gold, silver, copper, pearls, beans, fish, shellfish, meat maize (flour, roasted, ears), smoked meat, wild roots (medicinal and other), metals Captain Ribault, 1964 (original, 1562) Le Moyne in Bennett, 1968 (chronicler for Laudonni'ere, original, 1591) Indian Trade Goods ambergris, maize, smoked meat, fish Covington and Falcones, 1963 * Many of these gifts were also traded items. It appears that gifts were sometimes, if not always, given to caciques (Canzo 1597, 1599; Cedulario San Lorenzo 1593). Whether or not the cacique distributed these goods among his village is not known. 'Presumably, some of the goods were at least distributed to other elite individuals. Caciques, as leaders of the villages, received special Spanish attention. The goods given to "principal Indians" listed by Solis de Meras (1923:148, 127) differ from those he indicated as general gifts. There is only one specific reference among the numerous sources reviewed which indicated that caciques, those who were obedient and good converts, received special compensation. In this case, Governor Canzo (1600) awarded 150 ducats to Dofa Maria, cacica of Nombre de Dios just north of St. Augustine, and 200 ducats to Don Juan, cacique of San Pedro on Cumberland Island, Georgia. It is impossible to ascertain how important cash actually was to Indians. Father Escobedo, stationed at Nombre de Dios from 1589 until about 1600, wrote that food was scarce and "unfaithful" Indians took advantage of festival days to hunt and then sell ducks (1 real), turkeys (15.5 gold reales), and rabbits (2 reales) to those who had stayed in- doors. Leather "moccasins" made to order out of deerskin sold for three ounces of Mexican silver (Covington and Falcones 1963:143, 144). Escobedo also ranked Eastern Timucuan preferences for trade goods: fish hooks, axes or hatchets, knives, scissors (in descending order). Glass beads apparently "delighted" all Indians (Covington and Falcones 1963: 145, 146). Labor and Taxes Writing for Governor Salinas, Ramirez (1622) noted that it was cus- tomary for Indians to come to St. Augustine from Guale to cultivate the "savannas." Caciques were required to send up to 50 Indians from each village, dependent on its size. Soldiers were sent to issue orders for labor and were supposed to provide necessary provisions and passage money for the journey. Upon arrival in St. Augustine, Indians were to receive gifts and to be paid (probably in goods) for their work. If caciques did not send Indians, Salinas warned, they were to be severely punished. Thus, the colonial system of repartimiento also found a place in Florida. In 1637, Governor Horryutiner reported that Indians were required to carry provisions for the priests from St. Augustine to Apalache (Matter 1972:253). The use of Indian labor continued through the mission period even though the Crown constantly ordered against it. Native Floridians retained the practice of paying tribute to caciques in traditional goods and, in addition, they were required to pay tribute (or tithes) in corn to the government (Bushnell 1978:38; Royal Officials 1605). The corn, and probably other foodstuffs, was used to provision soldiers stationed at St. Augustine (Governor Marques, 1579, in Conner 1930:229). Prior to 1600, each "friendly" Indian and cacique was taxed one arroba (roughly 25 pounds) of maize per year. In 1600, this tax was reduced to six ears of corn because of the hardship caused by the earlier tax and the poverty of the Indians (Canzo 1600). Mission Politics and Economics Each mission priest lived at a more or less centrally located, primary village doctrinea) and administered to nearby sub-stations visitsas. The priest either visited these outlying villages to teach doctrine and perform baptisms (Geiger 1937:69), or Indians came into the doctrine on Saturday evenings, or evenings before holy days, and stayed overnight to hear mass the next day (Geiger 1936:14). Villages that lacked a resident priest supposedly competed with each other to build the best church and residence for a future priest in the event that they should ever be allotted one (Ore 1936:104, 107). Franciscans, as mendicants, were dependent on alms begged from the community for their support (Ore 1936:79). Due to the general poverty of Florida, especially in its early settlement period, priests were supported by the Crown along with the soldiers and secular officials. The situado, royal subsidy, was shipped from Spain, Cuba or the Mexican Peninsula. Perishables were generally of poor quality and spoiled; all subsidy goods were extremely expensive. The reality' of Florida poverty has been questioned by many and it is generally felt that reports issuing from priests, treasurers, and governors to the Crown were exaggerated in attempts to obtain more goods. Bushnell (1978) has studied the St. Augustinian and general north Floridian'economic conditions in detail and the kinds of goods she cites, particularly those enjoyed by the hidalgos, do not hint at an overwhelming poverty. Recent zooarcheological research carried out on St. Augustinian material also suggest that poverty might have been exaggerated (Reitz 1979). If the data from St. Augustine have been intentionally biased, it is also possible that information regarding poverty at the missions might also have been overstated. Certainly, the level at which the Spaniards were used to living was drastically different in Florida and the fact that they probably went without things to which they were ac- costumed may have prompted many feelings of poverty. As long as there were Indians to hunt and farm, food should not have been very scarce. Since Spaniards were, in general, uninspired over agricultural duties, the mobilization of Indian labor to provide for garrison, town- dwelling, and mission personnel was extremely important. Although livestock management was more in line with the Peninsular activities they were accustomed to, farming does not appear to have been a par- ticular skill enjoyed by Spaniards. Neither, it seems, were they satisfied with the nature of Native Floridian farming techniques. Governor Salinas (1620) asked the king for permission to import 20-30 Indians from Honduras or New Spain who would teach the Indians how to farm. Four Franciscans also asked for about 30 people to settle and farm near the priests (Pesquera et al. 1621). Periodically, from 1620 through the 1670s, governors made requests to the Crown to fur- nish them with Indians from Campeche or Honduras who would either teach the natives to grow indigo and cochineal, cash crops, or grow them themselves (Consejo de Indias 1623; Cedulario, Madrid 1623; Francisco de la Guerra y de la Vega 1673; Cedulario, Madrid 1673). According to Bushnell (1978), these were merely proposals and the enter- prise was never funded or carried out. Whatever the truth of the matter may be, most accounts impress the reader with an overall subsistence sufficiency. In later periods, there was enough maize, beef, and hides to allow exportation to Havana and illicit trade. Cedularios of 1641 and 1663 awarded regular clergy annual sub- sidies of flour, wine, oil, vinegar, salt, blankets, robes, dishes, candles, paper, and other items (Tepaske 1964:179). Father Pareja's (in Ore 1936:105, 107) famous and astringent letters, however, attest to extreme poverty at the missions. Furnishings for the church were ob- tained despite poverty because Indians brought deerskins to buy wax (candles) and to pay for burial of their dead. At some missions, he reported, pigs and arrobas of maize were used to purchase small bells. Hides brought to the priests were probably sold or traded (by the priest) to obtain requisite fixtures. Judging from the list of Spanish imports from America and exports to other countries, hides and leather goods were probably a good medium of exchange, especially if cash was not on hand or simply not allowed to be used by Indians. On the east coast of Florida, south of St. Augustine, ambergris was collected by Indians and/ or representatives of the governor. This substance turned a very high profit in Spain and also for the Indians who traded it. In St. Augustine, at least, ambergris was definitely used instead of cash money (Bushnell 1978:43). Priests supplemented royal subsidies with alms of maize, beans, or toasted flour received from Indians (Ore' 1936:105). In many cases, everything priests obtained from their charges was justified as alms since Franciscans were subject to vows of poverty. Soon after the Spaniards began intensive missionizing activities, native populations were "reduced" into centralized villages, either missions or visits. Centralization of Indian populations was a major objective from the outset (Geiger 1936:16) and was necessary not only to provide control over the Indians but also to provide a conveniently available repartimiento labor pool. By 1597 Guale mission Indians were peaceful in the aftermath of death and destruction of villages and fields precipitated by the revolt of mission Indians along the Georgia Coast. Governor Marques informed the Audencia de Santo Domingo that he hoped Indians would become good Christians but that adults, who had their own religion and did not want to convert, were preventing their children from being taught (Connor 1930:224-229). Within 20 years this situation had altered drastically and at S;n Junn del Puerto at the monLth of the St. Johns River, Father Pareja (1602), reported that natives assisted at high mass and vespers. All was not well between priests and their converts, however, nor between caciques and their villagers. Pareja (1602), who served in the Florida mission field for almost 40 years, asked the Crown to order governors to threaten to punish Indians so that natives would do as their caciques told them. Once caciques became Christians, their subor- dinates no longer obeyed them. The general consensus on the friars' part was that it was the duty of the governor to punish Indians. Father Lopez (1605) stated that the priests should be seen as "loving fathers," not the governors. The continuous rivalry between secular and religious personnel and the inability to divide jurisdiction (a fault built into the Spanish system) was to plague Florida throughout the mission period. The one major change in political organization was the breakdown of tribal level organization characterized previously by intervillage alliances (Milanich 1978:67). After 1633, Milanich states, there were no references to major regional chiefs, only to caciques of individual villages. At the Timucuan missions visited by Rebolledo in 1657, how- ever, it appeared that visit or village caciques were subordinate to caciques of primary regional villages where missions were located (Pearson 1968:97). The position of the cacique was maintained and priests attempted to enhance this role since the chief could be an im- portant avenue through which to work conversion (Pearson 1968:67). Settlement and Demographic Changes Aside from the temporary shift of work details to St. Augustine and reduction, both important factors, there is little mention in the documents of this period which concern .demographic change. Pareja (in Geiger 1937:145) wrote that some Potano had left their own villages to settle in Christian communities and it is possible that centralization was forced or made highly desirable by promises of economic, and religious, benefits. The role of disease and epidemics has not been widely reported but this may be a bias resulting from selection of sources. Between 1614 and 1617 epidemics brought about many deaths and completely depopulated some villages (Geiger 1937:251). According to friars' estimates, which may be exaggerated, half the Indian population in Florida was killed (Bushnell 1978:19). Priests, Soldiers, Civilians, and Indians: 1650-1675 This is a rather arbitrarily assigned period but it begins at about the same time as major political and economic problems in Spain were occurring and it ends roughly at the time when Spanish Florida had to turn its attention to British and Indian ally encroachments. A further dimension was added to frontier Florida economy around 1655 when civilian or military cattle ranching had begun to be important. The cattle industry reportedly did not become extensive until around 1700 (Arnade 1965:6) but earlier accounts of cattle ranches, and the problems they were creating, do exist. Gifts and Trade The Royal Treasurer Jose' de Prado (Moreno 1654) advocated that gifts to Indians be eliminated because they constituted a heavy drain on St. Augustine funds. He suggested, instead, that Indians should be fed when in St. Augustine or when sick but that gifts only be given when a new governor was installed. There is a notable lack of accounts of gift- giving which may be a reflection of decreased amounts of gifts to give or simply documentary sample bias. More interesting, in any event, are the other exchanges Indians participated in. Indians had many items to offer which Spaniards wanted: sassafrass (which brought a good price in Spain), ambergris, deer and buffalo (?) skins, nut oil, bear grease, tobacco, canoes, storage containers,and, most of all, food. Indians wanted whatever the Spaniards had: weapons, construction and cul- tivating tools, nails, cloth, blankets, bells, beads, church ornaments, and rum (Bushnell 1978:13). The problem was to supply enough of what the Indians wanted. Governor Rebolledo had 60,000 pounds of pig iron beaten into tools to barter for ambergris with the coastal Indians south of St. Augustine. When the Indians offered him more of this precious substance, he melted cannons and arquebuses. For 600 ounces of ambergris (worth 15,000 pesos), Rebolledo gave Indians 500 pesos worth of iron in the form of hoes, an anchor, mortars, cannons, muskets and arquebuses which the governor claimed were worthless (Bushnell 1978:13, 43). Sol- diers also traded muskets to the Indians (Bushnell 1978:13). Just after the Timucuan revolt, Rebolledo made a visitation of Timucua and Apalache (in 1657) in order to report on present conditions and to determine the cause of the revolt. In Apalache, priests had required Indians to go to Apalachicola and "Chactos" territories, both hostile to Apalache, to trade for skins and other "esteemed items." Indians complained that no payment for this service was made except to the cacique (Pearson 1968:71, 84). What exactly the priests did with these goods was not explained'but Indians were suspicious and claimed that friars prevented-them from selling their goods to ships' crews to earn money. Priests then bought Indian goods (probably foodstuffs and skins) at low prices and turned a good profit by selling them to soldiers (Pearson 1968:73). Similar complaints were made at San Martin de Tomoli and San Joseph de Ocuya in'Apalache. Father Juan de Paredes (San Martin) took excess yields from a plot cultivated for him to provide for laborers on the church and other Indians and shipped most of the food out of the province. Of course, Spaniards expected that the missions would provide for the ranches and military but Indians resented not only losing their produce but also having to transport it without being paid. Father Sanchez (San Joseph) simply took part of the harvest ostensibly to buy ornaments and other things for the church, none of which were ever seen (Pearson 1968:96, 98). Soldiers and Indians appeared to enjoy good relations, much to the chagrin of the missionaries. Indians felt obliged to offer food and shelter to soldiers (or Indians) passing through their villages and all claimed they did this voluntarily, an act for which they were punished and humiliated by friars (Pearson 1968:72, 80, 92). It must be remem- bered that these complaints leveled at missionaries and the praise for the military were presented to the governor. One might suspect bias, protective on the part of the Indians, or sheer embellishment by Rebolledo himself for benefit of his position and laying the blame for the revolt on a group other than the military. Labor and Taxes Manuel, the cacique of the Yustega village of Asile'in 1651, ex- pressed unhappiness with the Spaniards in general: military officials tried to take their land and they were forced to work on plantations and cattle ranches without compensation (Milanich 1978:65). This grievance occurred over and over: either forced by the clergy to carry trading goods or private property (Pearson 1968, above; Moreno 1654), ordered to fix roads and build bridges (Pearson 1968:11), forced by sol- diers to carry goods to St. Augustine (Pearson 1968:157), or forced to work on the castillo in St. Augustine. In 1651, Governor Benito Ruiz stated that Indians in Apalache were fleeing into the woods because they were being required to carry goods and to labor for the haciendas. A few years later, Governor Diego de Rebolledo wrote that he considered the use of Indian labor to be a practical necessity even though the Crown forbade the use of Indian bearers (Matter 1972:256, 258). The use of Indian labor to work fields in St.-Augustine was also continued. Repartimiento Indians cleared land and planted the communal and private maize fields with digging sticks and hoes. In St. Augustine, everyone who was important had their "service Indians" (Bushnell 1978:184). In addition to working gardens and plots, Indians were used to "fill gaps" in the infantry. Governor Francisco de la Guerra y de la Vega (1673) wrote that 200 Indians from Apalache were brought to the capitol; 50-55 from Timucua stayed until the end of October and 45-50 from Guale were also conscripted. Caciques, although they organized the work crews, were exempted from all such services (Bushnell 1978:49). Indians at San Luis de Xinaica claimed that priests sometimes came to the village and requisitioned Indians without permission. This caused hardship since it took away people essential to the economic livelihood of the village. The cacique requested that Franciscans use Indian labor only with permission from and under supervision of the village cacique for fear that failure to do so would undermine the cacique's authority (Pearson 1968:87). Other stories of misfortune reached Rebolledo throughout Apalache. An Indian from San Juan de Aspalaga, who had been too ill to carry a vessel to the Timucuan village of Arapaja, had sent someone else and was whipped by the priest who had asked him to go (Pearson 1968:94). It was this kind of behavior, Rebolledo asserted, that had precipitated the uprising in Timucua (Pearson 1968:116-124, 141, 152). During the 1656. rebellion, however, Timucuans had killed both priests and soldiers and had burnt churches (Pearson 1968:143). Other sources lay the blame on Spanish rancheros whose cattle were destroying fields and who forced Indians to work on their properties (Arnade 1965:6). This author failed to come across any specific references to tax payments but it is possible that some of the "loads" carried to St. Augustine represented a tithe or tax of some kind. Bushnell (1978) does, however, discuss taxation and tithing in great detail and it is evident that taxes of various kinds were required from everyone. Mission Politics and Economics It has been impossible to discuss the other two sections without reference to political and economic conditions at the missions. Many of the problems arose in Apalache, rather than Timucua, since there were very few Indians left in that latter province. Epidemics between 1649 and 1659, years of famine, and the rebellion had left the Timucua scat- tered. In 1672 there were so few Indians in central Florida that Spaniards gave land away in Timucua to anyone who would open a cattle ranch (Bushnell 1978:20). The Apalache did not receive permanent missions until 1633, 27 years later than Timucua. In the late 1600s, it is apparent that many traditional practices and beliefs were still intact. Several caciques beseeched the governor and priests to permit them to continue playing their ballgame and performing their ceremonies. The degree to which certain traditional activities was allowed or punished appears to have been subject to the personal whim of the soldiers and/or priests in- volved (Father Paiva 1676). Tepaske (1964:194) claimed that most Franciscans overcame traditional native behavior patterns by providing exemplary models of Christianity and personal conduct. In fact, it seems that many.of the friars assigned to Apalache were particularly prone to the administration of physical abuse. Whippings and beatings meted out to elite and com- moner alike humiliated the former and caused them to lose the respect of their subjects (Pearson 1968:83, 93). So much was this a problem, and so important was it to maintain the cacique's status, that Rebolledo ordered that caciques and other elites who broke civil or religious regulations could be punished only by the governor (Pearson 1968:77). One can imagine how the religious felt concerning this usurpation of their jurisdiction. Indians attempted to trade with soldiers and ships' crews putting into port (in Apalache). Their right to do so was unquestioned by the governor although priests forbade the practice and tried to maintain the sale of goods and trade as their own perogative. Some Apalache were trading illicitly, however, with foreign ships after the soldiers were removed from that province in 1648 (Pearson 1968:130). Caciques and friars in both Apalache.and Timucua shipped wheat, rye, and barley to Havana to make a profit on it rather than have it confiscated by the governor for use in St. Augustine (Bushnell 1978:40). At the end of this period, Indian trade with the English,who offered rum and firearms in return for allegiance against the Spaniards, was not uncommon (Tepaske 56 1964:193). During the late 17th-carly 18th century, Spaniards were also involved in illicit trade and the Suwannee River became an important artery for shipping goods out of Florida (Boniface 1968:207). Priests apparently attempted to facilitate conversion and/or strengthen their own positions by appointing "ensigns" of their own choosing to act in native festivals. Constitution XIV of the 1684 diocesan synod (Statutes Relating to Florida n.d.:13) reiterated injunc- tions against Franciscan appointments issued in 1672 and 1678 cedulas. The statute also stated that priests were to disinvolve themselves with Indian confraternities and not bother them about debts during their fes- tivals. Villages had town governments in which the caciques were alcaldes mayors, leaders of the community and festivities (Bushnell 1978:156). Demographic Changes As mentioned above, a series of epidemics (typhus or yellow fever, small pox, and measles) between 1649-1659 had caused significant reduc- tion of population in Timucua (Bushnell 1978:20). The Timucua rebellion had also resulted in death and scattering of populations. In 1675 an estimated 81% of the 10,766 Indians under Spanish rule in Florida were in Apalache (Bushnell 1978:20). Early in this period, the Council of the Indies in Madrid (1654) also noted a decrease in the number of priests in Florida. St. Augustine had experienced an influx of Indians brought to the capitol to work on the castillo and orders were sent out to supply more priests for that city in order to serve the Indians (Cedulario 1673). Governor Pablo de Hita Salazar (1674) reported that Indians brought to the capitol to work on the fort were dying or were needed in their own villages,therefore, he asked the queen, could they import slaves from Cuba to augment and stabalize the work force? Only one piece of evidence regarding village relocation was noted by the author although, presumably, other instances occurred. Rebolledo granted permission for the Timucuan village of Santa Maria to relocate half a league away from their current site because the village was an old one, fields had lost their fertility, harvests were poor, and the forests had been cleared so thoroughly that it was difficult to get fire- wood (Pearson 1968:80). 1675-1704 This final period actually represents a continuation of the preceding one: there was increasing strife, dissension, and dissatisfaction; more Indians were leaving missions, and secular and religious hands were tightly about each other's political throats. The peaceful scenario depicted by Bishop Calderon in 1676 contrasts sharply with most other views and the conviction grows that either certain people chose to closely edit final reports to the Crown and various councils or that Indians (and priests) could be extremely shrewd actors. Part of Calderon's report notes the following: in January Indians burn the undergrowth from their fields in preparation for planting. Wheat is planted in October and harvested in June. In April they begin to sow corn. All work in common to plant the "lands of cacique and of charity" (i.e. alms plots for the priest and "needy widows"). Everything, plant and animal, is given to the cacique to be divided; he keeps the hides and gives the best part of the hunt to the priest "to whom the Indians are greatly subjugated." Indians do not covet riches nor gold or silver and do not use these for money. Rather, they barter. The most wanted and used articles are knives, scissors, axes, spades, small hatchets, large bronze bells, blankets, trinkets, and all woven cloth. Before entering the church, each Indian gives the priest a bundle of firewood or a log (Calderon 1676; also in Wenhold 1936:13). Calderon perceived that all worked in common for the good of the village but mostly for the good of.the priest who had his gardens planted, received the best meats, and had his firewood delivered. Failure to covet riches is probably a reflection of their scarcity in Florida at this time and of the fact that, as good Christians, they were not sup- posed to covet wealth, however wealth was expressed. Bushnell (1978:15) reported that soldiers seldom even saw money and that Indians never used it. This may have been true during the later mission period but during the early part goods had been sold for cash money and cash rewards had been offered. It is extremely unlikely that Indians did not desire wealth although their manner of reckoning it probably differed from the Bishop's. Gifts and Trade Rarely were gifts given except to non-Christian neighboring Indians in attempts to form alliances (Quiroga y Losada 1688). Indians had enough problems trying to retain their property and goods and with the shortage of food and necessities which were not supplied by Floridians (Tepaske 1964:195). These conditions prompted the following orders from Governor Zfii.ga regarding Indian activities in Apalache: (1) Indians had the right to raise swine and fowl, which were not to be taken from them, and to attend the market in St. Augustine to sell bacon, lard, swine, hides, and skins which they raised or acquired; (2) trade with the Apalachicola (Creek) would be allowed only for "customary goods," not British ones (Boyd 1951:31, 34). As in Spain, trade with other countries was theoretically suppressed but carried out none the less. Apalachicola could provide British goods whereas Spaniards could not even supply necessities. Ziiiiga, however, had his own rules concerning trade with the Apalachicola. Horses could be given in exchange only for guns which the English provided. The English, on the other hand, wanted pack horses from the Apalachicola in exchange for the guns. Since one group had to first have what the other group could provide in order to begin the exchange, a stalemate arose creating a great deal of hostility on the part of the Apalachicola. They formed a peace treaty with the Apalache and invited four Indians to their village to cement relations. Three of those four were murdered and then the mission of Santa Fe in Timucua was raided and burned (Zufiiga, 1702, in Boyd 1951:36-37). Labor No specific mention of taxes is made in the documents reviewed but the pattern of forced, uncompensated labor continued (Council of the Indies 1676; Boyd 1951:25, 27, 28, 29; Cabrera 1686; Pearson 1968:194) and more often resulted in Indians leaving the missions to join British allies or to go elsewhere. In 1676, Father Alonso del Moral (1676) asked the king to aid Indians forced to work on the castillo in St. Augustine. He reported that 300 natives from Apalache, Timucua, and Guale were yearly brought to the capitol to work for the Spaniards. The diocesan synod drafted the following statutes regarding Indian labor in 1684: Many Spaniards, negroes, and mulattoes residing in St. Augustine and other missions detain married Indian men in their houses, who have their wives in other places or who have gone to St. Augustine to work or dig but are detained later to serve them this should not be done because married persons should cohabit. The wretched Indians, for being so, are none the less Christians [nnd as such must be allowed to hear mass !ilid not work on days of obligation. 1 [This was addres- sed to] persons having Indians on their estates, even as hired laborers (Statutes Relating to Florida n.d.: 5, 6-8). The major concerns of the synod were aptly expressed: married people should live together and Christians must attend mass and observe regulations. The tone is somewhat less than sympathetic. Mission Politics and Economics In 1682, Bishop Juan de Palacios of Cuba asked the Crown to place the missions in the hands of Jesuits or Dominicans because the Franciscans "must be begged to fill parish and castillo [positions] in St. Augustine. Also they always want some benefits as well" (Juan de Palacios 1682). Governor Quiroga y Losada (1690) described some benefits enjoyed by mis- sion friars: "priests lack for nothing because Indians sow their cornfields, wheatlands, tobacco tracts they raise their chickens and fatten their swine. [Indians] don't pay ovenciones [tithes?] in money, but make up for this in deer, bear, cinola, otter and other types of hides." Quiroga y Losada concurred with Calderon that missionaries did seem to reap the greater part of material benefits and then continued to make demands. Father Martorell in Apalache required his villages to plant one-half or a whole arroba yield (of maize?) for each mission priest. Later he insisted on four, six, or eight arrobas and the Indians under his jurisdiction fled the village. In response, Governor Cabrera (1687) ordered that Indians could give whatever they wanted to the priest but they should not see this as an obligation. Secular and religious authorities continued to clash over disputed jurisdiction. Priests were subject "under pain of being chastized" to outlaw ballgames (Statutes Relating to Florida n.d.:4) and keep strict control over native festivities (Calderon 1676). Problems arose because a lieutenant told Indians at San Joesph de Ocuya they could dance all night as was their custom. The friar stated that Indians knew that by giving soldiers tacalos de caecina cassinaa?), janepas, chickens, and watermelons, as they did with said lieutenant, they could "be let to live" (Cabrera 1682). In an attempt to pave over fractional disputes, Governor Zuniga insisted Indians owed allegiance and obligations to the Franciscans. All converted Indians must have crucifixes and images of saints on the walls of their huts. Indians must obey the commands of friars and attend to their needs. No Indian could marry unless first pledged to support his perspective bride. Indians could plant only those lands designated by the friars (Zuniga, 1702, in Tepaske 1964:194). In return, Zuniga promised to provide for widows and orphans, to pay all labor done by Indians in St. Augustine, and to give all Indians full hearing before punishing them for their crimes (Tepaske 1964:194). As might be expected, the setting down of rules did little to affect actual changes. The first good evidence of political organizational upheaval occurs in documents of the 1670s. In Guale, Apalache, and Timucua individuals were claiming rights to chieftainships which were disputed by other vil- lagers (see Pearson 1968:206-216, 219, 220, 240) and military visitations were made to the three provinces to make sure that Indians were agreed on their caciques' right to lead, to reinstate those with legitimate claims, and to see that rndhians obeyed their cactqques. In TLmucua, the visitador Sergeant Major Domingo de Leturiondo created the office of cacique for a man who would take his own and other families to a place a good distance away in order to settle a town (Pearson 1968:273, 274). An additional burden and responsibility was added to mission set- tlements after 1675 when Yuchi slavers launched raids into Apalache and northern Timucua and Indians were given arquebuses and ammunition to go in pursuit (Pearson 1968:189). Slave raids on villages continued and were taken up by Yamassees, British allies, in the 1680s through the final annhialation of the missions in 1704. In 1686 soldiers, officers and officials, and even caciques were issued weapons as private property (Bushnell 1978:186). Zufiga ordered that Indians should be provided with all the supplies necessary for war operations (Boyd 1951: 32) but these seem to have been lacking in quantity since Spaniards had to trade with their enemies to obtain firearms. Demography Population movements and depopulation became major problems during the last quarter of the 17th century. Several events which caused the Indians to "flee into the woods" or join British forces have already been mentioned. Other groups were moving into mission districts known or unknown to the Spaniards. One settlement of 248 Tocobaga was dis- covered living on the Basis River in Apalache during the 1677 visitation of Domingo de Leturiondo. It was decided that they could remain (Pearson 1968:256-258). All Florida provinces suffered manpower shortages and Spaniards passed strict laws against caciques allowing single or married men to "wander around creating problems" hy imposing a fine of 12 doeskins or the equivalent (Pearson 1968:246). At San Juan de inacnra (1677-78) on the Suwannee River, Indlains asked for a canoe to use as a ferry since they were supposed to operate one (Boniface 1968:177-178) and depended on it for their livelihood. All able-bodied men had left because the work was too hard and there was never enough food. Only 20 men remained in the entire village (Pearson 1968:276-277) and they had not had a resident priest for a long time. Caciques were enjoined not to allow these wanderers to settle.in their villages although this rule was lifted for San Antonio de Bacuqua in Apalache which was in sore need of extra men (Pearson 1968:259). Economic Interactions During the Mission Period In Chapter One it was stated that introduced cultural elements are reinterpreted within the conceptual and value systems of the recipient culture and that two parties do not approach transactions with the same understandings and expectations. It was proposed that if two cultures did not differ greatly in their cultural complexity and power that it would be difficult for the conquering party to evoke behavioral changes and both cultures would tend to maintain their respective conceptual sys- tems. Structurally, Spanish and Timucuan political and religious systems were similar: both observed mutual reinforcing of political and religious institutions (in fact, political and religious roles were inseparable); political organization was hierarchical; wealth and status were determined through descent; both leaders invoked power and authority to control their subordinates; elite goods were accessible to a few; and tributes, tithes, and/or taxes were exacted by politico-religious institutions. The primary difference, aside from scale, was politico-economic. Florida chiefdoms were redistributive in two senses: the elite mobilized goods and services for the benefit of the elite but basic goods, especially food, were "shared out." The Spanish monarchy, on the other hand, consumed massive amounts of elite goods but did not itself par- ticipate in insuring subsistence support for the populace. Spain had a national market economy primarily directed towards protecting and sus- taining the textile industry. Agricultural production for sustenance was not one of its concerns. Traditional Catholic peasants paid tithes, alms, and fines to the Church in produce, cash, or labor in return for church services, sacrements at birth, death and marriage, and emergency subsistence support and refuge in times of famine and war (Dalton 1971a: 21). It was, therefore, the Church's duty to provide services similar to those provided by a cacique and his officials. The major difference between peasant and tribal village economics in the ordinary production of subsistence goods is in the form of land tenure. Non-market land usage is acquired through social relations not through purchase or rental. Socio-political superiors (e.g. caciques) are "stewards of land allocation" who require return payments of material goods, labor, services, and clientage (Dalton 1971b:222-224). Spanish attempts to take or buy land were unsuccessful because, as the cacique of Asile explained in 1651, caciques could not give or sell land since it was owned jointly by sons, nephews, other lesser chiefs, and principal men of the tribe. They could, however, lend it; that is, allocate rights of usage (Milanich 1978:66). Spaniards tried to alter this situation by breaking up intervillage alliances, placing pro-Spanish individuals in positions of influence (Deagan 1974:12), giving control of land allocation to priests, and assigning "hunting preserves" to each village (Pearson 1968:253). For the most part, Spaniards endeavored to maintain the native, structural status quo although failure to grasp social embeddedness of certain practices made this difficult. Prestige acquisition, and there- fore the ability to reinforce status, was a major loss suffered by Indians, particularly the elite. The Guale Revolt (1597) was precipitated when priests imposed monogamy on young'caciques without understanding that having more than-one wife was an indication of wealth and status. Prohibition of gambling, ballgames, and intertribal warfare removed (when they were successful attempts) important avenues to acheiving pres- tige. The fact that Indians bribed soldiers to allow them to have their dances and perform their ceremonies suggests that native Floridians did not, as a whole, become absolute converts. Some priests permitted dances but only under strict supervision and not for all night periods "as was their [Indian] custom." Other efforts employed to maintain political and economic position of the caciques included channelling labor con- scription through the chief and holding him responsible for the behavior of his subordinates. Spaniards also allowed the cacique to receive tribute payments (in hides which were economically important to the Spaniards as exports). Caciques were favored with gifts and probably received-goods which other Indians did not (e.g. firearms). Spaniards upheld the political position of the cacique by making him head of vil- lage, native political affairs and by seeing that caciques were obeyed. That maintenance of the caciques' position was important to caciques as well as to Spaniards is evident from the documents. The most obvious change in Indian economics was their participation in a market/international system and cash economy. Exactly how wide- spread Indian use of cash became is uncertain. It is probable that very few Indians ever had cash and its presence may have been restricted to the earlier period. Money, however, need not be of coin or paper. Any regularly employed medium of exchange is equivalent with what one today thinks of as "money." Common mediums of exchange in Florida ap- pear to have been hides, ambergris (on the east coast), and corn. Amber- gris was not important prehistorically and the desire to acquire this substance was strictly owing to European demands. Likewise, corn was im- portant prehistorically as a ritual and symbolic good but it was not used, for instance, in paying tributes as it was later used for paying tithes and taxes to the Spaniards. Requiringpayments in corn was a means of insuring that the garrison in St. Augustine was fed and it would have created a strong motivation to increase yields (by planting larger fields) ifpunishment was meted out to those who could not pay their taxes. So far, documentary evidence to this effect has not been discovered unless one considers the account of Father Martordll in Apalache. The Indians, however, simply fled from the mission in that case. In Apalache, Eastern Timucua, and probably in Western Timucua, Indians changed from inner-directed production for the village to outer- directed production for the market and garrison. In many cases this production was forced upon them but in some instances it appears to have been by choice since Zidiga encouraged Indians to bring their produce to the market in St. Augustine. Often these goods were confiscated by priests or soldiers, however, so it is not clear what return Indians saw on market goods. Indians, however, were still required to produce for the village and the priest. Cash, markets, kings, cities, and universal religion can destroy reciprocity as delineated by Sahlins (Dalton 1971b:237) but the ideals of reciprocal behavior may remain. In some respects one might consider that Spain worked against its own ends by implementing the policy of indirect rule, allowing caciques to "rule communities as in former times" (Geiger 1937:10). Most certainly, the religious and military factions worked against any common goal of the Spanish colonial empire. Major impacts on Indian life were created and augmented by the coexis- tence of market and superficially redistributive economies coupled with traditional expectations of reciprocal behavior. In the beginning, transactions were more or less generalized but as the practice of "gift giving" declined and increased demands for goods and services without compensation were made on Indians, interac- tions became increasingly unbalanced. Spaniards not only failed to sustain their alliances but also came to rely more heavily on force as a means of imposing their will without offering any returns to Indians. This paper argues against the proposition that religious salvation was enough; to paraphrase, it is also necessary to keep body and soul together. Spanish-introduced food items included wheat, figs, oranges and other citrus, peaches, chickens, pigs, cattle, and (at San Juan del Puerto, at least) sheep. Cattle raising appears to have been largely restricted to ranches whereas pigs and chickens were raised for the priest and, apparently, owned by some Indians at missions. In order to care for livestock and meet new demands put upon them by Spaniards, Indians were required to become sedentary and, probably, spend more time on production than they had prehistorically. Sedentism posed a real threat to continued settlement in any particular area. Except in Apa lache, soil fertility is natural l ly poor In most reg ons of Florida. Continued usage of old fields depleted fertility at an unknown rate. Relocation of missions and villages, however,.did become necessary. The degree to which domesticates figured in Indian diets is un- known. Likewise, it is unknown what access Indians had to European agricultural tools and if new techniques were universally employed. Documentary evidence suggests that large numbers of hoes were distributed to Indians but relative to the number of Indians receiving them, the quantity may not have been substantial. In any event, European hoes were not greatly different from native ones. The basic movements and usage would have been similar. Oxen were present in some areas, par- ticularly Apalache (Daniels 1975), but they may have been restricted to Spanish-owned and operated ranches and haciendas. Documentary evidence also contradicts itself. As late as 1675 Governor de Hita Salazar was still writing about developing agriculture in Apalache and noting that Indians were plowing by hand because oxen and plows had not been in- troduced there (Pearson 1968:186). The fact that Spaniards periodically wanted to import slaves and Mesoamericans to farm and teach Florida Indians how to farm implies that native Floridians never reached the level of agricultural development that Spaniards sought. Calderon's description of agricultural activities in 1676 indicates that actual techniques had changed very little. Communal fields were still worked for the priest and for production of surplus to provide for those in need. Goods preferred as trade items by both Spaniards) and Indians were primarily subsistence-related. Judging from retention of slash-and-burn horticulture and the complaints of poor harvests and famine, it is doubtful that food production increased relative to the augmented number of non-productive consumers. Indians were required to plant and hunt not only for themselves but also for priests, soldiers, Indian laborers working on construction projects, and for trade out- side Florida. Depopulation resulting from epidemics, rebellion, and population drains during sowing and harvesting periods precluded the ability of Indians to meet demands. In St. Augustine as well as in the missions and villages, people complained of insufficient food. Spaniards were in a better position than Indians since they taxed, tithed, and confiscated food from Indians, failing to return any.. (or returning only little) of the yield or profit. If public granaries still existed, under control of priests and/or caciques, they would have been severely pressured. Indians continued to hunt for food and also to obtain hides and skins which were exported items and tribute payment goods and gifts to other Indian tribes. Prehistorically, or at the time of European con- tact, deer skins were collected once a year by caciques. The increased demand for skins and hides on a continuous basis during the historic period may have exerted pressure on deer populations. Additionally, if Indians were indeed restricted in their activities to village hunt- ing preserves, the source of deer, not to mention other animals, would have been rapidly depleted. The fact that priests required Indians to obtain hides from the Apalachicola may be a reflection of decreased animal (or human hunter) populations within village or tribal hunting areas. The introduction of cattle would have provided food resources for soldiers and St. Augustinians, and possibly mission populations, in addition to another source of hides. Cattle probably roamed free range since it is doubtful fences would have been erected over the country- side and Indians had complained of cattle damaging their crops. Cattle population density is unknown but it is conceivable that intermixing with deer populations could have affected not only their food resources but also might have increased the incidence of deer mortality due to increased prevalence of parasitism. Modern researchers have found that deer populations which share range with cattle can be severely affected by parasite population increase, particularly the Lone Star Tick. Infes- tation affects primarily the young and fawn mortality increases sig- nificantly when the two species share the same territory (Hair 1968; Bolte et al 1970). Unfortunately, the historical incidence of tick in- festations in Florida have not been examined by the author. According to Calderon, the village cacique received all of the food which he then redistributed to the villagers, giving the best parts to the priest. During the early period of mission activities, gifts which included flour came to the cacique and priest; these may have been ap- portioned to villagers. Spaniards saw it as their duty to provide for sick Indians, laborers, "orphans and widows" but were unable to do so because of the shortage of locally produced foods and the failure of the royal subsidies. Illicit trade and smuggling out of Florida only served to aggravate the situation. Numerous complaints from Indians concerning nonrestitution of debts or lack of reimbursement for goods and services indicate that they expected to be compensated. Traditional Indian and Spanish practices provided support of the community in crisis situations via the religious figurehead. Since public stores, if such existed, were used to purchase furnishings for churches, were shipped to St. Augustine or Havana, or were used for other purposes, there was no adequate surplus available to Indians. Of course, this situation may have arisen prehlistor call but it is likely that Indians would have been just as dissatisfied with their leaders at that time as they were during the historic period. Priests usurped many of the responsibilities formerly apertaining to caciques and native priests: land allotment, control of surplus food, overseeing communal labor, provisioning of non-producers, and en- dorsing marriages. Bonds between the community and the priest were en- forced with injunctions against "wandering," settling in villages other than one's own, keeping married persons together, and legalizing mar- riages performed only by one's assigned village priest. The most im- portant role.of the cacique became that of middle-man between Indians of his village and the Spaniards. He or she represented Indian com- plaints to visitadores and "wrote" letters (probably composed or writ- ten by priests and signed by Indian caciques), channelled through the priests, to the governor or king. The cacique's position was both politically and economically necessary to the community and the Spaniards but was not necessarily one which was inherited. Status became based on Spanish support and force, not authority. The attempts of Spaniards to see that villagers were agreed upon the right of their cacique to rule, however, may indicate that his position was one which was validated by inherited right. Status and authority, however, were probably still upheld by acquisition of prestige goods but the nature of these goods had shifted to those which symbolized Spanish backing. Priestly authority rested only on divine right; they did not belong to the same moral community (see Chapter Two) and, therefore, depended on physical force and military support to maintain their positions. When the latter was not forthcoming, which it rarely was unless wide- scale revolts threatened the system as a whole, they essentially had no authority. Indians simply left the missions. Lack of military support constantly plagued mission friars in the fulfillment of their religious and civil obligations but serious reper- cussions went unfelt until the economic basis of their power began to flounder. Increasing imbalance of consumption and reciprocity, not to mention outright seizure of Indian property, were important factors in the collapse of the mission system. Caciques would have had an im- portant stake in upholding the mission system because they were politically and economically tied into the Spanish organization. Chiefs, as well as priests, complained over the decreasing supplies of goods and necessities issuing from St. Augustine and, ultimately, the royal sub- sidy. Over time, with loss of wealth and political power, ritual status and authority gradually diminished (see Nash 1966:94). The political and economic "license" simply expired; there was no longer a strong interest in both parties to continuethe system, nor were they able to do so. Hypotheses Of necessity, hypotheses and their implications which are testable at archeological sites must be concerned with physical remains. Material goods, however, are an integral part of any culture; their manufacture and distribution reflect not only social behavior but tech- nology, resource usage, and environmental limitations as well. These variables work together to influence material assemblages associated with cultural systems. Archeological contexts are the result of further processes which have been described in detail by Schiffer (1972). The roles which certain goods played in Spanish-Indian interaction have been presented in the preceding sections of this chapter as they are indicated in the historical documents. There are numerous aspects of the evaluation of acculturation which cannot be archeologically inves- tigated. Even if certain patterns of artifact distribution are en- countered, one cannot be said to have-proved anything (without repetitive testing at other sites), only that hypotheses have not been disproved. By examining the material assemblage at Spanish mission sites, particularly at Baptizing Spring where spatial aspects can be differentiated and compared between Spanish and Indian living areas, it will be possible to see what goods were introduced and how they were distributed. Importantly, questions concerning what goods Indians actually used and had access to can be examined. Analysis of the ar- tifacts themselves may provide insights into changed manufacturing tech- niques and resource utilization. The primary hypotheses to be tested, however, dealt with distribution of particular and grouped artifacts on the basis of the fact that distribution and consumption, as primary economic activities, might be most indicative of social interaction as interpreted from historical documents. A minimum of two exchange spheres was proposed for prehistoric Timucuan economy: a subsistence sphere and a prestige/tribute sphere. The subsistence sphere would include such things as food and food processing, procurement, and storage artifacts. The prestige sphere, characterized by restricted flow to certain individuals, consisted of authoritarian items (headdresses, garments, litters, high-status housing, non-local metals and/or ornaments) and "power" items hides, pearls. Unfortunately, many of these goods will not be preserved in archeological sites. Many of the introduced European goods listed in the documents were subsistence-oriented: domestic animals and plants, axes, hoes, knives, fish hooks, sickles, etc. As mentioned earlier, goods which served as weapons may also be associated with prestige. Additionally, scarce items may serve to indicate prestige and/or favoritism in dis- pensation. Non-subsistence items consisted of clothing, blankets, beads, scissors, bronze bells, and religious paraphernalia. Not specifically mentioned in historic accounts but recovered from archeological sites are olive jar and majolica ceramics, glassware, clay pipes, hardware, thimbles, copper, silver, and gold beads and pendants, brass finger rings, lead beads and musketballs, glass buttons, mirrors, crosses and crucifixes (Smith and Gottlob 1978:13-15). The first readily identifiable indicator of status differentiation may be that of dwelling/building location within the village. If Gar- cilaso de la Vega was correct in describing location of elite dwellings and important buildings around a central plaza and on a slight rise (a pattern which has been identified in the prehistoric, stratified societies of the Southeast) and this pattern was maintained during the mission period as one similar to Spanish town arrangements, then the fol- lowing hypotheses could be put forth. 1. Spanish buildings, as identified through architectural features, would have been located in central areas, possibly on a rise, bordering on a plaza. 2. High-status Indians would have been living nearest the Spanish area. 3. Decreasing status would be positively cor- related with increasing distance from the Spanish buildings and the plaza. 4. Status may be positively correlated with dwelling size and elaborateness; ornamen- tation of walls and use of European hard- ware. Artifacts' significance as prestige indicators may be shown through correlation with aboriginal prestige goods if former high-status in- dividuals maintained their rank and it was inherited by their descendants. Such associations may not be found, however, given the fact that many prehistoric prestige goods will not be preserved.. Restricted distribution and differential access to goods will be assumed to correlate with pres- tige and control. Scarce items, or those which were traded in or directed toward priestly consumption, would be considered prestige goods within the Indian sphere although not necessarily within the Spaniard's prestige sphere. 5. The following trade goods, being similar in form and function to native items, would be classed within the Indian prestige sphere: clothing (especially that with elaborate designs, buttons, etc.), beads, bells, and jewelry. 6. The following goods, although technically subsistence sphere goods, would also be in- cluded within the native prestige sphere because of their coloring, quality, and novelty: storage jars, majolica, and glass- ware. a. Prestige items had restricted distribution and/or.were limited in quantity. Test 1. Distribution of prestige trade goods within archeological contexts will be non-random, concentrated in high-status areas. Test 2. Prestige goods will be fewer in number than subsistence goods and native-manufactured goods in the Indian living areas. 7. European trade goods associated with prestige will have supplanted aboriginal prestige items. 70. If Indian patterns of reckoning prestige and its accouterments were retained, then native prestige goods or European equivalents will be found in high-status living areas within the Indian sector of the village. 8. Indian goods retained within the prestige sphere will be those which were also valued by Europeans such as.hides, precious or semi-precious metals, pearls, and high- status housing. Test 1. Aboriginal and historic prestige items will be found within the same household units. Test 2. European prestige items may be more numerous than prehistoric ones. In order to have maintained or obtained rank within the new Catholic-based hierarchy, Indians would have to have been good Christian converts. If, as is common, religious medals and other symbolic parapher- nalia were awarded for learning and observing catechism: 9. Religious items may be found more often in conjunction with non-sacred prestige items within high-status dwellings. a. These items, if limited in quantity, will tend to be concentrated in high-status areas within the Indian village. With regard to directional flow of non-food goods from Indians to priests and Spanish government to priests: 10. If more Indian goods were given to priests than European goods were to Indians, the ratio of European to Indian goods would be higher for Spaniards than for Indians, and 11. Cumulative total of goods per person would be greater for priests, declining with decreasing status. a. European goods distributed among Indians may have increased significance as prestige items. Otto (1975:161, 219), working with material from a Georgia Sea Island plantation, proposed that artifact diversity would be correlated with different status groups such as.slaves, overseers, and planters. In particular, he examined the variety of ceramic types and forms and faunal assemblages in three midden areas of these different groups. Kohler (1978:27-29) re-examined Otto's data and calculated an index of diversity for each of the plantation middens. He then hypothesized and tested the idea that in prehistoric sites ceramic type diversity would be greater in high-status middens than in.lower status middens. The op- posite was found to be true at the plantation site. The reason for dif- ferent diversity measures of artifact assemblages was defined as differen- tial access to goods. On the basis of these data and the assumption of differential access to goods, one might expect the diversity of ceramic types to be higher in the Spanish living area than in Indian living areas. Priests, with greater access to Spanish ceramics, might acquire "sets" whereas Indians would have to either obtain cast-offs from priests -- representing smaller proportions of a greater number of sets -- or buy their own ceramics during periodic trips to the market in St. Augustine. Another possibility which yields the same results is that Indians only obtained sherds, rather than whole vessels, and that these were used as ornaments (Seaberg 1955:147), gaming discs, or were simply collected for their color and novelty. Actual numbers of sherds of a single type would be greater in the Spanish area if ceramics owned by priests were broken there. In either case, one might make the following hypotheses: 12. Indians, with an eye for variety in their col- lection of ceramics and/or sherds, will have higher diversity of majolica types than will priests who would have owned whole vessels (yielding more sherds of a single type) and/ or preferred matching pieces over a variety of types. 13. If Indians were receiving majolica sherds there will be a low frequency of sherds rep- resenting any single vessel and sherds from a single vessel may be scattered over a wide area. a. If majolica was a high-status indicator among the Indian population, there will be a greater number of these sherds in high-status Indian areas. Kohler (1978:31-32, 198-199) predicted and found positive cor- relation between higher ceramic diversity and elite status areas at a Weeden Island ceremonial site (McKeithen site) in Columbia County, Florida. His hypothesis was based on the assumption that elite in- dividuals had greater access to trade and high-status goods within a chiefdom. During the mission period it might also be expected that high-status Indians would have greater diversity of native-manufactured goods within the Indian living area. In addition, if priests preferred certain designs or forms of native-manufactured ceramics or if certain individuals were producing vessels for their consumption, one might predict that aboriginal ceramics in the mission buildings would exhibit lower diversity than in the rest of the village. 14. Aboriginal ceramic type diversity will be greater in the Indian sector than in the Spanish sector. Each of the hypotheses related to production and distribution of subsistence goods has its null counterpart which will not be included in the text but will be implied. 15. Spanish subsistence sphere goods were acces- sible to all Indians regardless of status. 16. Introduced European food items such as cows, pigs, chickens, peaches, oranges, etc., would have been restricted among Indians. a. The above goods might have been available only to priests who had greater access to them through shipments from St. Augustine or by demanding them as tithes/alms. b. Cattle may not have been used as food resources if they were not raised at mis- sions or if their consumption was primarily intended for soldiers and St. Augustine where the market and slaughter house were. 17. Priests and high-status Indians would have received the best part (meatiest, most ten- der) of hunted game plus proportionately more of the domesticates than would lower status individuals. 18. With their monopoly over production and alms payments, priests' diets would have included more European foods, been less diverse, and of better nutritional value than diets of Indians. 19. If livestock raised by Indians went primarily to priests and/or soldiers, chickens, pigs, and cattle remains will be poorly represented in or absent from Indian dwelling areas. The next chapter will present a review of previous archeological research carried out at Florida mission period sites, most of which concerns missions in northwest Florida (Apalache). It will also in- clude research carried out in Suwannee County which is pertinent to this study and descriptive data regarding methodology and the history of excavations at the Baptizing Spring site. CHAPTER FOUR ARCHEOLOGICAL CONTEXTS OF SPANISH-INDIAN LIFE AT FLORIDA MISSIONS Archeological data from other mission sites in Florida will be examined in depth relative to findings at the Baptizing Spring site in Chapter Six. This chapter presents a brief review of published works relevant to mission archeology, summaries of previous hypotheses and conclusions based on those data. The 1977 survey and excavation data from Baptizing Spring are also presented. Mission Archeology (1948-1977) The earliest archeologically constructive interest in Florida mis- sions was exhibited by Hale G. Smith. He defined and gave material sub- stance to two historical archeological periods then called St. Augustine (1565-1750) and Leon-Jefferson (1650-1725) (Smith 1948:313-319). These periods had artifactual, temporal, and geographical parameters: the St. Augustine period included the founding of that city and the ensuing years until the extirpation of most Indians residing near the capitol. This period applied only to the eastern portion of north Florida from the St. Johns River eastward to the Atlantic coast. Ceramic types, on which most period definitions are initially based, included the St. Johns chalky wares and San Marcos ceramics plus Spanish ceramics. The Leon-Jefferson period covered the time of mission activity in the Apalache province (actually beginning ca. 1633) and, in fact, derived its definition from excavation of the Scott Miller site near Tallahassee in Jefferson County. Again, the period was defined on the basis of 80 material culture: Spanish ceramics and trade goods and the aboriginal ceramic types Mission Red Filmed, Miller Plain, Aucilla Incised, Lamar- like Bold Incised, Leon Check Stamped, Jefferson Ware Plain and Comp- licated Stamped types, gritty plain, and Alachua Cob Marked. The geographical parameters of these two periods left a great void between the Aucilla and St. Johns Rivers. Between 1955 and 1976, this void has begun to be filled but even considering that fourteen mission sites have been excavated in northern Florida, there remains a con- siderable lack of information. A general problem has been incomplete investigation within mission villages (concentration on Spanish living areas and cemeteries) or a common inability to ascertain exactly what part of a village, of unknown size, was being excavated. Apalache Scott Miller, the first excavated mission site in Florida, is located approximately 37.0 km southeast of Tallahassee (Figure 3). It is situated in an area marked with numerous limestone sinks, roughly 14.5 km west of the Aucilla River and 4.8 km north of the Wacissa River. The site iteslf is at a high elevation (for Florida), 76 to 91 m above mean sea level (AMSL), on a plateau in the Tallahassee Red Hills physiographic region. About 3 km south of the site, the land drops off sharply into the low, swampy, sandy Gulf Coastal Plain (Smith 1951:109- 110). The presence of burnt red clay wall and floor rubble in a freshly plowed field made distinction of the two mission building remains unmis- takable. It was, therefore, these two areas and an intervening borrow pit that received the brunt of the investigation. On the basis of location with respect to natural features and other known mission sites, Scott Miller was tentatively identified as San .4-~ ol//ahassee , augus/ine GULF OF MEXICO 0 45 90 UTINA TERRITORY Figure 3. Location of Selected Excavated Mission Period Sites and Approximate Location of Utina Tribe. KM '. o.I Francisco de Oconee (Smith 1951:112). Smith noted that the entire 20 acre (8.1 hectnre) field showed surface evidence of occupation but trenching failed to disclose other building remains or evidence of a palisade. The remainder of Smith's report concentrates on architectural features of the two Spanish buildings, artifact assemblages from the three major excavation blocks, and a description of the Leon-Jefferson period in terms of material culture. The fort and mission of San Luis, 3.2 km west of Tallahassee, were tested to locate remains of the fort (Griffin 1951:139, 143). The material'assemblage was similar to that at Scott Miller although propor- tions of ceramic types differed (Griffin 1951:155). As at Scott Miller, the primary goals were to label the site with a Spanish name so that dis- tances to other sites could be plotted and to describe the material com- plex. Interpretation of acculturation situation was cursory although both Smith and Griffin viewed this as of primary importance. In 1966 the Florida Division of Archives, History, and Records Management (FDAHRM) received approval to establish a mission study pro- gram. Field research began in 1968 and continued for about four years. During that'period, five missions were discovered in the Apalache area (San Lorenzo de Ivitachuco, San Joseph de Ocuya, San Pedro de Patali, San Antonio de Bacuqua?, and San Damian de Escambi) and two in the Wes- tern Timucuan area (San Miguel de Asile and San Pedro y San Pablo de Potohiriba)(Jones 1970a:1,3). San Damian (ca. 1633-1704) was partially excavated in 1969. Portions of a burned, wooden building and a cemetery containing approximately 143 burials were located (Jones 1970a:3; 1970b: 1). Within the building area, a large variety of brass and iron tools and a broken bell were recovered (Jones 1970a:3). Contact Us | Permissions | Preferences | Technical Aspects | Statistics | Internal | Privacy Policy © 2004 - 2010 University of Florida George A. Smathers Libraries.All rights reserved. Acceptable Use, Copyright, and Disclaimer Statement Last updated October 10, 2010 - - mvs
http://ufdc.ufl.edu/UF00025915/00001
CC-MAIN-2016-07
en
refinedweb
#include <iostream> #include <conio.h> using namespace std; //structure declaration; struct shares { string CompanyName; int Number_of_shares; double Unit_price_of_shares; }; //function prototype; void enterCompanyName (shares shares[]); void printshares (shares shars[]); const int sizeOfArray = 50; int main() { shares shares[sizeOfArray]; int amount = i //You will need to ask the user how many they want to //input the shares; for (int i = 0; i < 0; i++) { cin >> shares[i].Number_of_shares; // number of shares or unit price } //Display totalshares cout << "The shares entered are: \n" << printshares; _getch(); return 0; } //Functions //Prints out the array of company void printshares(shares shares[]) { for(int i = 0; i < i; i++) { cout << shares[i].Unit_price_of_shares; } cout << endl; } this is my code and l am getting an error code c2065 and c2143, can someone tell me what is wrong with the building This post has been edited by Salem_c: 05 October 2012 - 09:05 PM Reason for edit:: added [code][/code] tags - learn to use them yourself
http://www.dreamincode.net/forums/topic/294477-l-am-code-error/page__pid__1716906__st__0
CC-MAIN-2016-07
en
refinedweb
#include <coherence/util/Hashtable.hpp> Inherits AbstractMap. List of all members. Hashtable is an open-addressing based hash map implementation, relying on an array of pre-allocated entry structures. This approach significantly reduces the cost of insertion into the map as there is no Entry allocation required except as part of rehashing. This optimization makes it an good candidate for short lived maps. Though thread-safe Hashtable is optimized for single-threaded access, and assumes that in most cases escape-analysis will elide the synchronization. If the Map will be accessed concurrently by many threads it is likely that SafeHashMap would be a better choice. If the Map will largely live on a single thread, or be short lived then Hashtable is a good choice. Note: Hashtable's entryset iterator returns Map::Entry objects which are only valid until the iterator is advanced. If the Entry needs to be retained longer then it should either be cloned or an shallow copy should be created. Construct a thread-safe hash map using the specified settings.
http://docs.oracle.com/cd/E24290_01/coh.371/e22845/classcoherence_1_1util_1_1_hashtable.html
CC-MAIN-2016-07
en
refinedweb
My book is a First Edition printed in October 2002. This is nitpicky but... The second sentence mentions the FILTER tag twice, when it seems that it should probably only be mentioned once. AUTHOR: This is correct. FILTER should just be included in the list of "advanced tags". However, this is not a serious technical mistake, it's a minor language mistake. The code: You have <% $cd_count %> cd<% $cd_count != 1 ? 's': '' %> is missing the period (.) after the final closing %> tag. The first line of the <%perl> block should read: my @words = $sentence =~ /(S+)/g; The code in the book is missing the '$' sigil on the variable. The second to last line of the example in paragraph 4: esethat ordsway areyay inyay Igpay Atinlay.</pig> should instead be: esethay ordsway areyay inyay Igpay Atinlay.</pig> to be an accurate translation. The word "these" is "esethay" in Pig Latin. AUTHOR: Indeed, my pig latin was mistaken ;) "most of them contain plan Perl" should read "most of them contain plain Perl". It is currently <% $temperature %> degrees. should be: It is currently <% $temp %> degrees. The name "view_user.mas" somehow got turned in one instance into "view_user.max", I think somewhere in the editing process. It should be "view_user.mas" throughout this section. "You should also remember that variables defined via an <%args> block are not visible in a <%shared> block, meaning that the only access to arguments inside a shared block is via the %ARGS hash or one of the request object methods such as request_args." It's not true that you can access args via %ARGS in a shared block. You can only access args via $m->request_args. <& $m->call_next &> should be: % $m->call_next; As written the example would attempt to call a component as specified by the return value of $m->call_next, which probably isn't intended. This paragraph should be intended to match those above it, as it applies to the previous list item. Sometimes we may want to path in a query should read: Sometimes we may want to pass in a query The site admin <input> tag is missing its closing '>'. The closing </form> tag incorrectly appears as <form> There are two errors in this code block: In the Cache::FileCache->new line, there is a missing { before namespace. Add 'use Cache::FileCache' to the <%once> section. The code example show should be: <%filter> s/href="([^"])+"/'href="' . add_session_id($1) . '"'/eg; s/action="([^"])+"/'action="' . add_session_id($1) . '"'/eg; </%filter> These substitution filters also recognize URLs in single quotes. I think they're better than the fix proposed in the official errata. s/(href=)(['"])([^2]+?)2/$1.$2.add_session_id($3).$2/eg; #' s/(action=)(['"])([^2]+?)2/$1.$2.add_session_id($3).$2/eg; # "This example lets you give each developer their own hostename:" ^ should be: "This example lets you give each developer their own hostname:" The reference to HTML::Mason::Request::WithApacheSession is outdated, as the module's name has changed to MasonX::Request::WithApacheSession While you _could_ pass a request_class parameter to the Interp constructor, you probably _should_ be passing it to the ApacheHandler constructor. Because of Mason's "contained object" system, there's no need to create an Interp directly. The ApacheHandler constructor will create one for you and pass it any relevant parameters. Creating an Interp object manually will cause problems if you don't also give it the correct resolver_class parameter. ApacheHandler takes care of this for you. There are two parameters listed with dashes in their name, "Mason-CodeCacheSize" and "Mason-IgnoreWarningsExpr". These are both incorrect, neither should have a dash. It should be mentioned here that current versions of Bricolage (as of 1.4.3) do not yet work with Mason 1.1x, because of API changes in Mason between 1.05 and 1.10. The book covers the Mason 1.1x API, so this might be a little confusing. Hopefully a near-future verson of Bricolage will also support Mason 1.1x. © 2016, O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://www.oreilly.com/catalog/errata.csp?isbn=9780596002251
CC-MAIN-2016-07
en
refinedweb
Conversion Constructors and Subtle Dangers explicit keyword when defining constructors with one only (non-defaulted) parameter. Even so, the subtleties of what can go wrong wit...explicit keyword when defining constructors with one only (non-defaulted) parameter. Even so, the subtleties of what can go wrong wit... Pretty much every old C++ war-horse knows to use the explicit keyword when defining constructors with one only (non-defaulted) parameter. Even so, the subtleties of what can go wrong with the best laid plans of library designers can still surprise. Recently, a user of the Pantheios diagnostic logging library reported a subtle vulnerability involving conversion constructors in a particular set of circumstances. Background For reasons of robustness, Pantheios does not accept arguments of fundamental types - int, char, double, void*' and so on. All arguments must be of a string type, or one which the Pantheios Application Layer knows how to interpret as a string. Thus, a statement such as the following will be rejected by the compiler: [Note: all the log statements here would usually be expressed on one line. Limitations of this medium require I split them.] #include <pantheios/pantheios.hpp> pantheios::log_NOTICE( "secret of life, universe, and everything: " , <strong>42</strong>); To incorporate a fundamental type instance into a log statement a user must first convert it to a string. One (bad) way would be to convert it to a string yourself as follows: #include <pantheios/pantheios.hpp> <strong>char num[21];</strong> <strong>snprintf(num, 21, "%d", 42);</strong> pantheios::log_NOTICE( "secret of life, universe, and everything: " , <strong>num</strong>); There are several reasons why this is bad: it is verbose; it is not practically portable, because it uses snprintf(), which is "deprecated" by Microsoft's later compilers in favour of so-called safe equivalents (in this case the famous _snprintf_s()); it is potentially inefficient, because the conversion from int->string is made even when statements at log level NOTICE are disabled; it requires you to use magic numbers or some kind of dimensionof() macro. A better way is to use one of the stock inserted classes/functions provided by Pantheios, in this case the pantheios::integer inserter class, which may be used to insert all integer types (and with different width, radix, etc.), as in: <strong>#include <pantheios/inserters/integer.hpp></strong> #include <pantheios/pantheios.hpp> pantheios::log_NOTICE( "secret of life, universe, and everything: " , <strong>pantheios::integer</strong>(42)); Incidentally, if you (understandably) think that's getting a tad verbose, you can use some of the provided namespace/class/function aliases to shorten the log statements, as in: <strong>#include <pantheios/inserters/i.hpp></strong> <strong>#include <pantheios/pan.hpp></strong> <strong>pan</strong>::log_NOTICE( "secret of life, universe, and everything: " , <strong>pan::i</strong>(42)); Problem Now you know how insertion of an integer should look, let's consider how things should go if you try to insert it directly. With Visual C++ 9, I get a set of compilation errors along the following lines: h:\freelibs\pantheios\main\1.0\include\pantheios\internal\generated\log_sev_functions.hpp(9998) : error C2665: 'stlsoft::winstl_project::c_str_len_a' : none of the 17 overloads could convert all the argument types h:\stlsoft\releases\1.9\stlsoft\include\winstl\shims\access\string\time.hpp(672): could be 'stlsoft::winstl_project::ws_size_t stlsoft::winstl_project::c_str_len_a(const SYSTEMTIME &)' h:\stlsoft\releases\1.9\stlsoft\include\winstl\shims\access\string\time.hpp(720): or 'stlsoft::winstl_project::ws_size_t stlsoft::winstl_project::c_str_len_a(const FILETIME &)' h:\stlsoft\releases\1.9\stlsoft\include\winstl\shims\access\string\time.hpp(753): or 'stlsoft::winstl_project::ws_size_t stlsoft::winstl_project::c_str_len_a(const UDATE &)' h:\stlsoft\releases\1.9\stlsoft\include\winstl\shims\access\string\hwnd.hpp(561): or 'stlsoft::winstl_project::ws_size_t stlsoft::winstl_project::c_str_len_a(HWND)' h:\stlsoft\releases\1.9\stlsoft\include\comstl\shims\access\string\variant.hpp(556): or 'stlsoft::comstl_project::cs_size_t stlsoft::comstl_project::c_str_len_a(const VARIANT &)' h:\stlsoft\releases\1.9\stlsoft\include\comstl\shims\access\string\guid.hpp(298): or 'stlsoft::comstl_project::cs_size_t stlsoft::comstl_project::c_str_len_a(const GUID &)' h:\stlsoft\releases\1.9\stlsoft\include\stlsoft\shims\access\string\std\time.hpp(254): or 'stlsoft::ss_size_t stlsoft::c_str_len_a(const tm &)' h:\stlsoft\releases\1.9\stlsoft\include\stlsoft\shims\access\string\std\time.hpp(143): or 'stlsoft::ss_size_t stlsoft::c_str_len_a(const tm *)' h:\stlsoft\releases\1.9\stlsoft\include\stlsoft\shims\access\string\std\basic_string.hpp(392): or 'stlsoft::ss_size_t stlsoft::c_str_len_a(const std::string &)' h:\stlsoft\releases\1.9\stlsoft\include\stlsoft\shims\access\string\std\exception.hpp(299): or 'stlsoft::ss_size_t stlsoft::c_str_len_a(const std::exception &)' h:\stlsoft\releases\1.9\stlsoft\include\stlsoft\shims\access\string\fwd.h(100): or 'stlsoft::ss_size_t stlsoft::c_str_len_a(const stlsoft::ss_char_a_t *)' h:\freelibs\pantheios\main\1.0\include\pantheios\pantheios.h(1459): or 'size_t pantheios::shims::c_str_len_a(const pantheios::pan_slice_t &)' h:\freelibs\pantheios\main\1.0\include\pantheios\pantheios.h(1549): or 'size_t pantheios::shims::c_str_len_a(const pantheios::pan_slice_t *)' h:\freelibs\pantheios\main\1.0\include\pantheios\pantheios.h(1592): or 'size_t pantheios::shims::c_str_len_a(pantheios::pan_severity_t)' while trying to match the argument list '(const int)' h:\publishing\blogs\ddj\code\conversion_constructors\example1\example1.cpp(8) : see reference to function template instantiation 'int pantheios::log_NOTICE<const char[43],int>(T0 (&),const T1 &)' being compiled with [ T0=const char [43], T1=int ] h:\freelibs\pantheios\main\1.0\include\pantheios\internal\generated\log_sev_functions.hpp(9998) : error C2665: 'stlsoft::winstl_project::c_str_data_a' : none of the 17 overloads could convert all the argument types h:\stlsoft\releases\1.9\stlsoft\include\winstl\shims\access\string\time.hpp(448): could be 'stlsoft::basic_shim_string<C> stlsoft::winstl_project::c_str_data_a(const SYSTEMTIME &)' with [ C=stlsoft::ss_char_a_t ] h:\stlsoft\releases\1.9\stlsoft\include\winstl\shims\access\string\time.hpp(482): or 'stlsoft::basic_shim_string<C> stlsoft::winstl_project::c_str_data_a(const FILETIME &)' with [ C=stlsoft::ss_char_a_t ] h:\stlsoft\releases\1.9\stlsoft\include\winstl\shims\access\string\time.hpp(514): or 'stlsoft::basic_shim_string<C> stlsoft::winstl_project::c_str_data_a(const UDATE &)' with [ C=stlsoft::ss_char_a_t ] h:\stlsoft\releases\1.9\stlsoft\include\winstl\shims\access\string\hwnd.hpp(529): or 'stlsoft::winstl_project::c_str_ptr_HWND_proxy<C> stlsoft::winstl_project::c_str_data_a(HWND)' with [ C=stlsoft::winstl_project::ws_char_a_t ] h:\stlsoft\releases\1.9\stlsoft\include\comstl\shims\access\string\variant.hpp(489): or 'stlsoft::comstl_project::c_str_VARIANT_proxy_a stlsoft::comstl_project::c_str_data_a(const VARIANT &)' h:\stlsoft\releases\1.9\stlsoft\include\comstl\shims\access\string\guid.hpp(256): or 'stlsoft::comstl_project::c_str_ptr_GUID_proxy<C> stlsoft::comstl_project::c_str_data_a(const GUID &)' with [ C=stlsoft::comstl_project::cs_char_a_t ] h:\stlsoft\releases\1.9\stlsoft\include\stlsoft\shims\access\string\std\time.hpp(241): or 'stlsoft::basic_shim_string<C> stlsoft::c_str_data_a(const tm &)' with [ C=stlsoft::ss_char_a_t ] h:\stlsoft\releases\1.9\stlsoft\include\stlsoft\shims\access\string\std\time.hpp(100): or 'stlsoft::basic_shim_string<C> stlsoft::c_str_data_a(const tm *)' with [ C=stlsoft::ss_char_a_t ] h:\stlsoft\releases\1.9\stlsoft\include\stlsoft\shims\access\string\std\basic_string.hpp(226): or 'const stlsoft::ss_char_a_t *stlsoft::c_str_data_a(const std::string &)' h:\stlsoft\releases\1.9\stlsoft\include\stlsoft\shims\access\string\std\exception.hpp(223): or 'const stlsoft::ss_char_a_t *stlsoft::c_str_data_a(const std::exception &)' h:\stlsoft\releases\1.9\stlsoft\include\stlsoft\shims\access\string\fwd.h(93): or 'const stlsoft::ss_char_a_t *stlsoft::c_str_data_a(const stlsoft::ss_char_a_t *)' h:\freelibs\pantheios\main\1.0\include\pantheios\pantheios.h(1437): or 'const char *pantheios::shims::c_str_data_a(const pantheios::pan_slice_t &)' h:\freelibs\pantheios\main\1.0\include\pantheios\pantheios.h(1527): or 'const char *pantheios::shims::c_str_data_a(const pantheios::pan_slice_t *)' h:\freelibs\pantheios\main\1.0\include\pantheios\pantheios.h(1574): or 'const char *pantheios::shims::c_str_data_a(pantheios::pan_severity_t)' while trying to match the argument list '(const int)' Although this is scary at first, it actually holds the essential truth of the problem, something often not the case with C++ compile errors (particularly with moderate-heavy use of templates). The problem is precisely as described (all 17 times!): there are no overloads of c_str_len_a() and c_str_data_a() for "the argument list ' (const int)'". These two function overload suites are string access shims, which are how Pantheios meets its design parameters of expressiveness, flexibility, and performance, something I'll get into in more detail in an upcoming article series on diagnostics I will be writing for Dr Dobb's this year. (The articles will also discuss why the library must reject fundamental type arguments, via the absence of string access shims defined for numeric types; for now you'll just have to take my word.) Many string access shim overloads are provided by the STLSoft libraries, on which Pantheios depends, as well as other open-source libraries (and several of my closed-source commercial developments for clients, allowing their types to be succinctly logged). So far, so good. This behaviour is all by-design, and is actually of benefit to the programmer, being as how it brings potential runtime problems into firm compile-time problems. In all the years I and others have been using Pantheios, this mechanism had never been subverted. Or, at least that was the case until late last year, when a user reported that, after passing an integer directly in a log statement, it had not been displayed correctly in the log. Alarm bells still ringing, we investigated and, lo and behold, it transpired that the integer was indeed being passed to a pair of string access shims, via the conversion constructor of the ATL type CComBSTR, for which string access shims are defined (although only in wide-string builds, hence c_str_len_w() and c_str_data_w()). This only occurred when building when using ATL on Windows and building a wide-string version of an application, in which case the pre-processor symbol UNICODE is defined. What actually happens is that the integer value is passed by the compiler to the (non- explicit) single-parameter CComBSTR(int nSize) constructor, as part of the compiler attempting to find match for the argument. That constructor is used to pre-allocate a buffer of the given size for later use. So what happens in the case of our log statement is that the statement written out is: "secret of life, universe, and everything: " <<- we are left to ponder ... What should have been "42" is actually an empty string, since the secretive CComBSTR instance is empty of content (although it has a buffer of 42-character ready to receive some content). As you can imagine, had the integer had a very high value we may also have had an unwanted memory allocation failure to go along with the malformed log statement. Naturally enough, when we'd got to this point, I wondered with some heat why on earth the ATL designers had not seen fit to declare that constructor explicit. But what to do about it? I cannot proscribe the use of ATL to Pantheios' users. I cannot remove the requisite wide-form string access shim overloads - c_str_len_w(CComBSTR const&) and c_str_data_w(CComBSTR const&) - from STLSoft, since they are established and used in other things. I can recommend that users compile a multibyte-string version of their code, as well as the wide-string one they want, to act as a guard, but that is often not possible. One thing I could do would be to remove the implicit inclusion of the requisite STLSoft libraries from Pantheios when compiling for wide-string in the presence of ATL. But anyone using Pantheios under those conditions who wanted to be able to insert CComBSTR instances into a log statement - a pretty common requirement - would be disadvantaged. I confess I thought long and hard about this, and came up with several partially-forme, baroque, and long-winded solutions, until the light finally shone on me. I'd even written about such things in the first chapter of my first book, Imperfect C++, back in 2004: the answer is constraints. So, all that's been required to allow Pantheios once more to claim 100% type-safety in all circumstances is the insertion of constraints along the lines of that shown in the following 3-parameter application layer log_NOTICE() statement function template: template<typename T0, typename T1> int log_NOTICE( pan_sev_t severity , T0 const& v0 , T1 const& v1 ) { <strong>STLSOFT_STATIC_ASSERT(0 == stlsoft::<a href="">is_fundamental_type</a><T0>::value);</strong> <strong>STLSOFT_STATIC_ASSERT(0 == stlsoft::is_fundamental_type<T1>::value);</strong> . . . The constraint is pretty canonical for these things: a static assertion is used to enforce a compile-time characteristic elicited via a meta-programming type. Specifically, is_fundamental_type is used to verify that the types T0 and T1 are not fundamental types. If either of them is, as in the original case, the compiler will fail on that line. So far, all the compilers that used to support Pantheios are happy with the additional compile-time load - remember, these are compile-time constraints, and make absolutely no different to runtime speed/behaviour - except for GCC 3.x and Digital Mars C/C++, for which the constraints are diluted/elided. Conclusion 1. When you're designing a library involving C++ classes, be sure that you mark explicit every one-parameter (or one-non-default-parameter) constructor, unless you explicitly want to allow implicit conversion construction. Yes, I am aware that that's a tongue-twister. And aware too of the irony. But a language as long-lived and successful as C++ has a lot of backwards compatibility to maintain. Such wrong-way-round situations extend to more than just explicit/implicit, and maybe I'll have a carp about them on another day. 2. When you're designing a library involving arbitrary heterogeneous types, remember the utility of constraints (particularly compile-time ones) for controlling how far the envelope of accepted types may be stretched.
http://www.drdobbs.com/cpp/conversion-constructors-and-subtle-dange/229300256
CC-MAIN-2016-07
en
refinedweb
Emulate Tab The Device Emulator GUI How the Device Emulator Fits into the Intel XDK Choose Project Configure Emulator Environment Test in Emulator Debug and Edit Next Step: Test on Actual Device(s) Summary, Key Terms, and Resources This page describes setting up and using the Intel® XDK device emulator. For an overview of the various Intel XDK test and debug capabilities, see the Test and Debug Overview. The Device Emulator GUI NOTE: Apps may run correctly in the emulator but encounter problems when run on an actual device. The processor on your host system is likely faster than the processor on an actual device, so performance-related problems are typically not seen in the emulator. Also, the up-to-date web run-time used by the device emulator may implement features of HTML5 more correctly than the web run-time on the actual device, especially if that device has an old OS version. Think of the devices inside the device emulator as a collection of "ideal" devices with nearly unlimited memory, processor, and HTML5 rendering features. Use the device emulator to quickly identify and fix many defects before you test your app on an actual mobile device. See Why Running in the Device Emulator Differs from Running on Actual Device(s). After you choose your project in the Intel XDK PROJECTS tab, launch the device emulator by either: - Clicking the EMULATE tab to display the Device Emulator as a docked window. - In the DEVELOP tab Live Development Tasks pane, clicking Run My App > Run in Emulator to display the Device Emulator as a floating window. A visual representation of the selected virtual device appears in the center of the EMULATE tab. To the right and left of the device being tested are two columns of palettes. Use the palettes to configure your test environment by choosing a device, settings, and so on (your screen may differ): If you undock the EMULATE tab, you can move its undocked floating window to view it and the DEVELOP tab side-by-side. To undock this tab, click button near EMULATE and drag the undocked EMULATE window (as shown below). To return (dock) the floating window to the EMULATE tab, click the button or close the floating window. How the Device Emulator Fits into the Intel XDK The, and fire event live development features in the DEVELOP tab or the TEST tab. You must first download and install the Intel App Preview tool on your mobile device. The Device Emulator runs your application (app) in a simulated environment. DEVELOP tab live development features or the TEST tab to test the app and remotely debug it on real physical device(s). For example, you can enable live preview of your device (DEVELOP tab), use a USB cable to connect your device to your development system to enable remote debugging (Android* and Apple iOS* devices) or profiling (Android devices), test your app using a local device(s) either connected to your development computer by using to the same WiFi network or with a USB cable. Also, you can use your mobile device and Intel App Preview to test apps you have previously pushed to the Intel XDK servers from anywhere your device has an internet connection.For more information about the various Intel XDK tabs, see the Intel XDK Tabs Overview. Choose a Project Before you test an app using the EMULATE tab, you need to choose an existing choose your project or create a new project. Use the PROJECTS tab to select the active project to be used by other tabs. Selecting the project will also select the index.html file within the chosen project for the Device Emulator. This tab also lets you remove the selected project from the project list. - Open the Intel XDK. - Click the down arrow to the right of the PROJECTS tab. - Click the New Project button. - Click the plus sign next to Samples and Demos. Several available demos are displayed in the General category. You might create sample or template project or open your own app project and use it while learning about the device emulator on this page. - For instructions to import an HTML5 code base or select a sample or template, see Create, Import, and Manage Intel XDK Projects Configure Emulator Environment As shown in The Device Emulator GUI, the Device Emulator provides a series of palettes, each of which controls some aspect of the virtual environment. The palettes are divided into two accordion-style columns, one on the left and one on the right of the virtual device.Before you test your app in the EMULATE tab, set up each configuration that will be used to test your app. To test your app using multiple configurations, set up each configuration, test and debug it, edit your app, and repeat. Also, make sure the appropriate <script ...lines are present. Customizing the Device Emulator Appearance - To undock the Emulator window, either click the button next to EMULATE, or in the DEVELOP tab, click Run My App > Run in Emulator. To return (dock) the floating window to the EMULATE tab, click the button or close the floating window. - To open (expand) or close a palette, click its name. Expand a palette to view and modify its options. - To hide or show a palette column, click the or button. For example, you might hide one column of palettes to increase the width available to display the virtual device under test. - In the toolbar, use the slider or the button to make the virtual device larger or smaller. - You can move palettes by dragging them to a different location. Choose Device, Connection Type, and Other Characteristics In addition to the palettes, check the EMULATE tab settings by clicking the button in the toolbar. Cross-platform Screen Resolution Selecting a device in the Device Emulator determines the pixel dimensions of the frame where the app under test is rendered. You can view display details in the Information palette. In addition to the screen size of the target device, consider the resolution or pixel density of the target device. Pixel density can be measured using pixels per inch (PPI). Test in Emulator After you have configured the Device Emulator environment, you can test the various features provided by your app's APIs, including on-board device sensors, app update, and firing events. Please note that the Device Emulator provides its own implementation of the system code to simulate features of a real device. The Device Emulator is not an instruction-level emulator; it does not emulate an actual device or its processor, memory, operating system, and HTML5 rendering capabilities. For details, see: Why Running in the Device Emulator Differs from Running on Actual Device(s). The Device Emulator supports the use of Apache Cordova* 3.x plug-in APIs. Use the Intel XDK Projects tab to specify which Cordova core and other third-party plug-ins your app will use. NOTE: The Device Emulator assumes the entry point for the current project is index.html in the project source directory. The Device Emulator does not support use of mp3 and mp4 audio files. If you recently modified your sources using the DEVELOP tab, click the button to update the virtual device shown in the EMULATE tab. The first time you do this, consider choosing the option to have the app's modified files automatically reload when you click EMULATE. At any time, you can view or change the auto- or manual-load settings by clicking the button on the left side of the toolbar. If your app uses the Apache Cordova APIs, the config.xml or intelxdk.config.xml file provides a helpful list of the on-board sensors your app uses; this list may be helpful during testing. Scrolling in the Device Emulator If you open enough palettes (or choose a large enough device in the Device palette) the Device Emulator UI may become larger than your Intel XDK window. In that case the entire Device Emulator window becomes scrollable. You can scroll using the mouse wheel or the arrow keys. There is also a vertical scrollbar on the right side. If the app emulated by the virtual device displays a UI that is larger than the Intel XDK window, then the window in the virtual device also becomes scrollable. To scroll within the device window: - Click the device window to give it focus. - To scroll vertically, either use the mouse wheel or the up arrow or down arrow keys. - To scroll horizontally, use the left arrow or right arrow keys. If your host computer has a touch screen, you can also use touch-based scrolling, both for the device window and the entire Device Emulator UI. Accelerometer Use this palette to simulate device rotation and gyroscopic movement by dragging the small device using the mouse pointer. As you drag it, the displayed accelerometer values change: - Rotation on the x-, y-, and z-axis. - Gyroscopic values alpha, beta, and gamma. Apps that use accelerometer APIs can show the changes to the accelerometer values visually, such as the Hello Cordova demo. To choose this project, open the Projects tab and click: Start a New Project > Samples and Demos > select HTML5+Cordova > Hello Cordova. You can view his sample's documentation, including related emulator and debugging information. Geolocation Use this palette to simulate your app's geographical location capabilities. This palette provides a geo location map, a slider to zoom in or out, and other capabilities. Test Live Update You can test you app's ability to send an Update notification to users that a new version of your app is available. You can choose to emulate the Live Update now, after a reboot occurs, or to notify the user or app. To do this: - Open the AppMobi Live Update Service palette. - Select how the test message will be delivered. Messages can be delivered automatically or by notifying the user of the app that a message needs to be viewed (see the appMobi service documentation). - Type a message to be entered into a change log. - Click Send. Debug and Edit To modify your source code at any time, click the DEVELOP tab and click the file name in the sidebar to select the file to be edited, such as index.html. Debugging Inside the Device Emulator The device emulator - like the rest of the Intel XDK - is an HTML5 application that runs inside node-WebKit. Node-WebKit uses a web runtime (WebKit) based on the web runtime used in the standard Chrome browser. Your app runs within an inner HTML frame whose size depends on the currently selected device. Node WebKit provides a built-in debugger, which is based on and resembles the standard Chrome Developer Tools (CDT) debugger. You can use the built-in debugger to debug your application. The EMULATE tab simulates device viewport sizes, touch events, user agent strings, and various device APIs (other than unimplemented APIs listed below) for a convenient debugging experience using standard Chrome Dev Tools. (Actually you are debugging the entire Intel XDK, but the built-in debugger hides the parts outside of the inner frame containing your app.) To start the debugger, click the button in the toolbar. The debugger appears in a floating window, as shown in the lower part of the screen below (your screen may differ): To close the debugger, click the red x in the upper-right corner. This debugger lets you view elements and resources, type commands into a console, and perform other functions. For example, display Elements and right click to select options for that source line from a context menu (as shown). For information about this debugger, see. Use the debugger to focus on the emulated app's elements and sources. If you see Console messages from files that are not part of your app's sources, these messages may be from the Intel XDK and can be ignored. The debugger lets you view your source code. If you modify the source code in the debugger, this will impact the future behavior of the EMULATE tab as you continue testing your app. However, modifying sources in the debugger does not modify your actual source code in the DEVELOP tab. Unimplemented APIs The EMULATE tab simulates a subset of the device APIs that are available to your application. The only way to debug these non-simulated APIs is by using on-device debugging with App Preview or a built application. For a list of unimplemented APIs in the Device Emulator (and other limitations), see this page: Why Running in the Device Emulator Differs from Running on Actual Device(s). Editing Your Source Files The debugger lets you view and modify the source code in the debugger, but changes you make in the debugger will not modify your actual source code. Either use your favorite external editor or the code editor built into the DEVELOP tab to modify your sources: - After you open or create a project, click the DEVELOP tab. If needed, click the CODE button to display the code editor view. - In the left sidebar under the project name is a list of source files (file tree). Click the source file to be modified. The selected file appears. - In the left sidebar, after you modify a file, a Working Files section appears above the project name. To add a file immediately to the Working Files section, double-click its name. - Modify the file(s) as needed. To save your changes for this file, either click the File > Save menu item or press Ctrl+S or Cmd+S. - For more information about the built-in editor, see Using the Built-in Code Editor. To return to the EMULATE tab, click EMULATE. To restart your app so its appearance and behavior in the EMULATE tab is consistent with your saved sources, click the button. If you use an external editor, remember to click the button after you save the file. The DEVELOP tab also has an APP DESIGNER view for the App Designer GUI layout editor. Click the DESIGN button to use this GUI view for projects created to use App Designer. For more information about the various Intel XDK tabs, see the Intel XDK Tabs Overview topic. Retest Your App After you have fixed any issues using the DEVELOP tab, retest the modified parts of your app using the applicable devices and settings in the EMULATE tab. Return briefly to the topic Test in Emulator. Next Step: Test on Actual Device(s) After you complete testing using the device emulator and are satisfied with your app's quality, the next step is to test the app using the actual device(s) it will support. Before you push your app to the server, log into your Intel XDK account.For each mobile device: - Download the Intel App Preview app from the appropriate app store(s) and install it on your Android, Apple iOS, or Microsoft Windows Phone 8* mobile devices, or your Windows 8 tablet or laptop. - To preview your running app during development, in the DEVELOP tab Code view, use the Live Development Tasks. This lets you run and view your app on a device connected to your development computer using WiFi or USB cable, or preview the app in a browser window. Alternatively, you can use the TEST tab after you sync your app with the testing server. - To access your app built previously, launch the Intel App Preview app and log into your Intel XDK account. In the DEVELOP or TEST tab, you can learn more by clicking the button in upper-right corner. For a video about using the TEST tab, view this video: Additional ways of testing include: - On Android and iOS devices, use the DEBUG tab to access the app running remotely on the target device connected with a USB cable. See the instructions on that tab. - Use a remote debugger (such as weinre) that accesses the app running on the target device. See the instructions on the TEST tab. - Use the simulator or emulator provided with each respective SDK. You use the SDK simulator or emulator on your development system, without access to a real device. Summary, Key Terms, and Resources Summary This page described how to use the Device Emulator, part of the Intel XDK, to test and debug an app. Key Terms - mobile app: An app that executes on a target device. It executes in the device's WebView and interacts with the user and on-board device sensors. - on-board sensor: Built-in sensors available on the real device, such as its accelerometer, geo location, and similar features. - real device: Actual physical hardware, such as a smart phone, tablet, or other mobile device. - virtual device: Software environment that simulates a real device. It is convenient for testing how an app will look and function on actual, physical hardware. - web app: An app that executes on a web server. To use this type of app, a mobile device uses a web browser and internet access. Resources - The main Intel XDK page at: - The Intel XDK introduction page at:Introduction to the Intel XDK - To quickly walk through the Intel XDK development workflow while using a demo sample: Getting Started Tutorial . - Device emulator limitations at: Why Running in the Device Emulator Differs from Running on Actual Device(s) - For a description of when to use the various Intel XDK and related debug and test options, see Debug and Test Overview.
https://software.intel.com/pt-br/xdk/docs/dev-emulator?language=ru
CC-MAIN-2016-07
en
refinedweb
Closed Bug 402558 Opened 15 years ago Closed 15 years ago urls from bookmarks folder in sidebar don't open in tabs on middle-click Categories (Firefox :: Bookmarks & History, defect, P2) Tracking () Firefox 3 beta5 People (Reporter: aryx, Assigned: mak) References Details (Keywords: regression) Attachments (1 file, 9 obsolete files) Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9a9pre) Gecko/2007110405 Minefield/3.0a9pre In the bookmarks sidebar, middle-clicking a bookmarks folder doesn't open the bookmarks in tabs (it doesn't open anything). In Firefox 2.0.0.*, it opens them. Flags: blocking-firefox3? Flags: blocking-firefox3? → blocking-firefox3+ OS: Windows XP → All Hardware: PC → All Target Milestone: --- → Firefox 3 M10 2007051804 works 2007051821 broke Probably broke from: "Turing on Bookmarks on Places." Target Milestone: Firefox 3 M10 → Firefox 3 Mx Target Milestone: Firefox 3 Mx → Firefox 3 M11 Priority: -- → P3 this is for firefox 2 parity serve middle-click on bookmarks folders + trailing spaces fix + a fix to openContainerNodeInTabs since if there are no urls to open, it opens an (untitled) tab Mano: should i fix the last problem in _openTabset with an "if (aURLS.length <= 0) return;" instead of in openContainerNodeInTabs (that uses _openTabset)? Reed: please, don't check-in until i put checkin-needed in keywords, i need to address review comments, thank you :) Assignee: nobody → mak77 Status: NEW → ASSIGNED Attachment #293492 - Flags: review?(mano) Comment on attachment 293492 [details] [diff] [review] serve middle-click on bookmark folders Why just folders? In Places, Open-In-Tabs is supported for all urls-containers (except day containers, known bug) Attachment #293492 - Flags: review?(mano) → review- i was looking at FF2 parity, FF2 does not open history containers also, open in tabs does not get all urls recursively, it opens only the first sublevel, so in history sidebar it would serve only hosts folders and date folders, but not grouped by date & hosts. changing to open all containers has problems with day containers (as you said, it tries to open a bunch of not visible visits), but also on host containers, and also gives me an assertion in CountVisibleRowsForItems... this is for all containers. notes: on middle-click the node container opens, than urls are loaded, this is because i need to get only childs with child.viewIndex >= 0 (or open in tabs will try to open all hidden duplicates, so for a folder with 1 visible item i end up with a request to open 64 tabs!) still don't know if the check on urlsToOpen.length should be done in _openTabset after opening, itemInserted for dynamic containers is calling _countVisibleRowsForItem for a non visible item, and that is causing an assertion, so i'm excluding dynamic containers from _countVisibleRowsForItem Attachment #293492 - Attachment is obsolete: true Attachment #293513 - Flags: review?(mano) I think we should consider re-disabling dynamic containers for 1.9, the implementation is not all that usable at this point. open only visible items will be fixed in Bug 409998 - removed fix for non visible items (Bug 409998) - moved empty url array check in _openTabset - middle click open urls ONLY if container is open (since context menu Open All in Tabs is disabled for closed containers) note: i don't get asserts anymore on dynamic containers, don't know what other patch could have fixed that though... Attachment #293513 - Attachment is obsolete: true Attachment #295230 - Flags: review?(mano) Attachment #293513 - Flags: review?(mano) Since when "Open All in Tabs is disabled for closed containers", that seems like a bug/regression. No reasoning there either. the problem is that to open only visible items we rely on viewIndex, so the container must be opened before calling open in tabs... a better solution could be change getURLsForContainerNode to return an array of unique urls, and remove check on viewIndex This should be done only the query/site container in question is closed, when it's opened, any visible url nodes should be "mapped" to tabs. Comment on attachment 295230 [details] [diff] [review] open container urls on middle-click i need to re-check a couple of things, clearing requests Attachment #295230 - Attachment is obsolete: true Attachment #295230 - Flags: review?(mano) i was tryiong to create a new util to see if a folder has url childs (without having to use getURLsForContainerNode(node).length == 0 that is slow), but for the right pane when i call node.containerOpen = true to use childCount i get a "cannot change properties for a Native Wrapped object". I think that is because folders in the right pane are not containers (see) mano: this is working for everything but not for dynamic containers in the left pane, mainly because this.getFolderContents(aNode.itemId, false, false).root; returns an empty container for left pane bookmark menu, toolbar and unfiled, because they are place:folder=x... so having no children the option open all in tabs is disabled. how can i get correct content? changing expandQueries option does not work (the interface says that for simple folder queries it has no effect since they should be expanded by default) Comment on attachment 296319 [details] [diff] [review] wip partly fixed in Bug 411803 Attachment #296319 - Attachment is obsolete: true - add a new util to check if a container has child urls (perf) - fix right click context to use that - fix sidebar to open in tabs on middle-click, CTRL+click, META+click still does not count correctly for Library left pane folders (need some hint, could be spun-off to a new bug) Target Milestone: Firefox 3 beta3 → --- Comment on attachment 296524 [details] [diff] [review] fixes modifiers too this patch needs a refresh Attachment #296524 - Attachment is obsolete: true unbitrot, used new functions in utils Attachment #305975 - Flags: review?(mano) This covers the regression that middle click stopped working on "Open All in Tabs"? in the context menu or in the menu added to popups? can you explain better? thank you. 1) Go to bookmark menu 2) Go to sub-menu in bookmark menu, with multiple bookmarks in it 3) Middle click on "Open all in tabs" 4) Nothing happens!! Uses latest trunk build as I write this. I searched for a bug on this, kind of looked like Bug 418611 and this was its duplicate. Keywords: regression Priority: P3 → P2 Whiteboard: [has patch][needs review mano] Same with control-click: Steps: - Use the bookmark organizer to make a folder under the bookmarks toolbar folder and populate it with a couple of individual bookmarks. - Control click on a folder in the bookmarks toolbar (under the address bar) Actual (FF3): - The folder opens as if an unmodified left click had been made Expected (and FF2 behaviour): - Each bookmark in the folder is opened as a tab. The above is also valid if "control-click" is replaced with "middle-click". Comment on attachment 305975 [details] [diff] [review] patch >Index: browser/components/places/content/sidebarUtils.js >=================================================================== >- var modifKey = aEvent.shiftKey || aEvent.ctrlKey || aEvent.altKey || >- aEvent.metaKey || (aEvent.button != 0); >- if (!modifKey && tbo.view.isContainer(row.value)) { >+ >+ var modifKey = aEvent.ctrlKey || Event.metaKey; >+ var openInTabs = aEvent.button == 1 || (aEvent.button == 0 && modifKey); this should rather be #ifdef'ed.... Cmd+click on mac and Ctrl+click on windows. Shift is supposed to open in tabs in a new window, I think. Check the toolbar. Attachment #305975 - Flags: review?(mano) → review- (In reply to comment #27) > Shift is supposed to open in tabs in a new window, I think. Check the toolbar. the toolbar does open tabs in new window if you Shift-click the "open all in tabs" item, while shift-click on the folder is like a normal click (opens the folder). the FX2 sidebar opens in new window when you shift-click on the folder, so i'm adding this check to get back the same behaviour as FX2 I think shift+middle-click for open-tabs-in-new-tabs makes sense in the toolbar and menu context as well. Mano, this is the actual behaviour with an updated patch: 1. left-click: toggle container open (CORRECT) 2. ctrl+left-click: open container contents in tabs (append) (CORRECT) 3. shift+left-click: open container contents in tabs in new window (CORRECT) 4. ctrl+shift+left-click: open container contents in tabs (replace) (CORRECT?) 5. middle-click: open container contents in tabs (append) (CORRECT) 6. ctrl+middle-click: open container contents in tabs (append) (CORRECT) 7. shift+middle-click: open container contents in tabs (replace) (CORRECT?) 8. ctrl+shift+middle-click: open container contents in tabs (replace) (CORRECT?) "CORRECT?" items are mostly due to the fact we are simply passing the work to openContainerNodeInTabs that ends up calling whereToOpenLink(aEvent, false, true); If we want to change that behaviour we must patch _openTabSet too to force the opening in a new window. Clicking both keys makes unclear the user will, in FX2 that does not open anything, should be the same here? (In reply to comment #29) > I think shift+middle-click for open-tabs-in-new-tabs makes sense in the toolbar and menu context as well. this will most probably have the same behaviour as previous points, with shift+middle-click, whereToOpenLink will _replace_ contents with new tabs. Target Milestone: --- → Firefox 3 Whiteboard: [has patch][needs review mano] → [needs new patch] Whiteboard: [needs new patch] → [needs def][swag: 0.5d] implements comment #30 Attachment #305975 - Attachment is obsolete: true i want to add gutter selection in onClick to this patch, should solve Bug 421210 Comment on attachment 307676 [details] [diff] [review] patch Mike, could you define what is the expected behaviour about comment #30? Attachment #307676 - Flags: ui-review?(mconnor) Comment on attachment 307676 [details] [diff] [review] patch I believe that the behaviour outlined in comment 30 is correct, as I understand it. I think we've got pretty solid logic in whereToOpenLink at this point, so we should really just trust it to do the right thing... Attachment #307676 - Flags: ui-review?(mconnor) → ui-review+ this fixes: - containers in sidebar open on middle-click or left-click + modifiers - selection in sidebar can be done in gutter (space before the favicon) - you can middle-click the "Open all in tabs" option in menupopups - open all in tabs context menu option disabled state is calculated faster - open all in tabs context menu option works correctly for folder shortcuts - onclick handler in BookmarksEventHandler code cleanup (rem. useless code) - middle-click or left-click with modifiers works on folders in toolbar and menus after ev. review + checkin will look around to close related bugs Attachment #307676 - Attachment is obsolete: true Attachment #308873 - Flags: review?(mano) Whiteboard: [needs def][swag: 0.5d] → [has patch][needs review Mano] Attachment #308873 - Attachment is obsolete: true Attachment #309413 - Flags: review?(mano) Attachment #308873 - Flags: review?(mano) Comment on attachment 309413 [details] [diff] [review] unbitrot for PlacesUIUtils >+ var modifKey = aEvent.metaKey || aEvent.shiftKey); Syntax error, in both places. I'll fix this on checkin. r=mano Attachment #309413 - Flags: review?(mano) → review+ I'm simplifying the check to: #ifdef XP_MACOSX var modifKey = aEvent.metaKey || aEvent.shiftKey; #else var modifKey = aEvent.ctrlKey || aEvent.shiftKey; #endif if (aEvent.button == 2 || (aEvent.button == 0 && !modifKey)) return; mozilla/browser/base/content/browser-places.js 1.120 mozilla/browser/components/places/content/bookmarksPanel.xul 1.12 mozilla/browser/components/places/content/controller.js 1.223 mozilla/browser/components/places/content/history-panel.xul 1.16 Status: ASSIGNED → RESOLVED Closed: 15 years ago Resolution: --- → FIXED Whiteboard: [has patch][needs review Mano] Target Milestone: Firefox 3 → Firefox 3 beta5 after this fix when i open folder with many tab with middle-click the Confirm open dialog is showing before the menus is closing. the dialog is behind the open menus onemen.one: Please file a bug on that and cc me and Marco. verified with: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9b5pre) Gecko/2008031704 Minefield/3.0b5pre
https://bugzilla.mozilla.org/show_bug.cgi?id=402558
CC-MAIN-2022-40
en
refinedweb
Robert Bosch Interview Experience 2019(On-campus) I attended the placement drive of RBEI on Aug 2019 in our college G.L. Bajaj Institute of Technology & Management, Gr, Noida. Almost 500 students appeared for this drive. The profile offered for CSE students was Associate Software Engineer. Round 1: Online test The test consisted of Apt+Logical Reasoning+Output based questions. Level of Apt and Reasoning was easy. You can solve it easily with some practice. Total time is around 60 mins. There is also a negative marking of 0.25 marks per question. There was also 2 coding questions and the time given was 30 mins. I did one question completely and was not able to attempt the other due to time limit. The result came in the evening through mail and I was shortlisted for the interview round scheduled for the next day. 120 students were shortlisted for the next round. Round 2: Technical Interview There were 18 panelists for interview including both for CS and EC. This round was the most difficult in all the 3 rounds. Mine interview lasted for approx 2 hours.He asked numerous questions from my projects, coding questions, DBMS, OS questions, microprocessor questions, C & C++ questions. ->Questions on Project: Idea behind project, technologies used with thorough explanation. ->Coding ques: Searching algos, Implement stack (push, pop funs) in C, Exception handling in C++ (as I mentioned C & C++ in my resume), Multithreading in C++, Call by value & reference in detail. ->DBMS: Normalization with types, ACID properties. ->Operating System: Semaphores, Mutex, Monitors, Thrashing, Deadlock with causes and preventive measures. ->C & C++ questions: Advantages of C over C++, Explain namespace in c++, Virtual in C++, Access specifiers and some more conceptual questions. I almost answered all questions except one or two. The panelist was very humble and listened to all my answers patiently with a smile. Around 40 students cleared this round. Round 3: Hr interview After a wait for 30 mins, results came and I was shortlisted for the next round. My advice is do not take this round lightly as lot of my friends were eliminated from this round even after qualifying the tech interview. The interviewer was polite and asked me to relax and asked few questions such as Tell me about urself, tell about ur family, Why do u want to join Bosch, how much serious are you about Bosch and some other questions. The interview lasted for 10 mins. The final result came on the next day through mail and I was selected. Advice: Prepare the basics of programming languages, coding questions and subjects. Believe in yourself and most importantly be confident and you will get selected. All the best!!!
https://www.geeksforgeeks.org/robert-bosch-interview-experience-on-campus/
CC-MAIN-2022-40
en
refinedweb
How many times have you filled out forms requesting personal information? It's probably too many times to count. When online and signed in, you can save a lot of time thanks to your browser’s autofill feature. In other cases, you often have to provide the same data manually, again and again. The first Document AI identity processors are now generally available and can help you solve this problem. In this post, you’ll see how to… - Process identity documents with Document AI - Create your own identity form autofiller Use cases Here are a few situations that you've probably encountered: - Financial accounts: Companies need to validate the identity of individuals. When creating a customer account, you need to present a government-issued ID for manual validation. - Transportation networks: To handle subscriptions, operators often manage fleets of custom identity-like cards. These cards are used for in-person validation, and they require an ID photo. - Identity gates: When crossing a border (or even when flying domestically), you need to pass an identity check. The main gates have streamlined processes and are generally well equipped to scale with the traffic. On the contrary, smaller gates along borders can have manual processes – sometimes on the way in and the way out – which can lead to long lines and delays. - Hotels: When traveling abroad and checking in, you often need to show your passport for a scan. Sometimes, you also need to fill out a longer paper form and write down the same data. - Customer benefits: For benefit certificates or loyalty cards, you generally have to provide personal info, which can include a portrait photo. In these examples, the requested info – including the portrait photo – is already on your identity document. Moreover, an official authority has already validated it. Checking or retrieving the data directly from this source of truth would not only make processes faster and more effective, but also remove a lot of friction for end users. Identity processors Processor types Each Document AI identity processor is a machine learning model trained to extract information from a standard ID document such as: - Driver license - National ID - Passport Note: an ID can have information on both sides, so identity processors support up to two pages per document. Availability Generally available as of June 2022, you can use two US identity processors in production: Currently available in Preview: - The Identity Doc Fraud Detector, to check whether an ID document has been tampered with - Three French identity processors Notes: - More identity processors are in the pipe. - To request access to processors in Preview, please fill out the Access Request Form. Processor creation You can create a processor: - Manually from Cloud Console (web admin UI) - Programmatically with the API Processors are location-based. This helps guarantee where processing will occur for each processor. Here are the current multi-region locations: Once you've created a processor, you reference it with its ID ( PROCESSOR_ID hereafter). Note: To manage processors programmatically, see the codelab Managing Document AI processors with Python. Document processing You can process documents in two ways: - Synchronously with an online request, to analyze a single document and directly use the results - Asynchronously with a batch request, to launch a batch processing operation on multiple or larger documents Online requests Example of a REST online request: - The method is named process. - The input document here is a PNG image (base64 encoded). - This request is processed in the European Union. - The response is returned synchronously. POST /v1/projects/PROJECT_ID/locations/eu/processors/PROCESSOR_ID :process { "rawDocument": { "content": "iVBORw0KGg…", "mimeType": "image/png" }, "skipHumanReview": true } Batch requests Example of a REST batch request: - The method is named batchProcess. - The batchProcessmethod launches the batch processing of multiple documents. - This request is processed in the United States. - The response is returned asynchronously; output files will be stored under my-storage-bucket/output/. POST /v1/projects/PROJECT_ID/locations/us/processors/PROCESSOR_ID :batchProcess { "inputDocuments": { "gcsDocuments": { "documents": [ { "gcsUri": "gs://my-storage-bucket/input/id-doc-1.pdf", "mimeType": "application/pdf" }, { "gcsUri": "gs://my-storage-bucket/input/id-doc-2.tiff", "mimeType": "image/tiff" }, { "gcsUri": "gs://my-storage-bucket/input/id-doc-3.png", "mimeType": "image/png" }, { "gcsUri": "gs://my-storage-bucket/input/id-doc-4.gif", "mimeType": "image/gif" } ] } }, "documentOutputConfig": { "gcsOutputConfig": { "gcsUri": "gs://my-storage-bucket/output/" } }, "skipHumanReview": true } Interfaces Document AI is available through the usual Google Cloud interfaces: - The RPC API (low-latency gRPC) - The REST API (JSON requests and responses) - Client libraries (gRPC wrappers, currently available for Python, Node.js, and Java) - Cloud Console (web admin UI) Note: With the client libraries, you can develop in your preferred programming language. You'll see an example later in this post. Identity fields A typical REST response looks like the following: - The textand pagesfields include the OCR data detected by the underlying ML models. This part is common to all Document AI processors. - The entitieslist contains the fields specifically detected by the identity processor. { "text": "…", "pages": […], "entities": [ { "textAnchor": {…}, "type": "Family Name", "mentionText": "PICHAI", "confidence": 0.999945, "pageAnchor": {…}, "id": "4" }, { "textAnchor": {…}, "type": "Given Names", "mentionText": "Sundar", "confidence": 0.9999612, "pageAnchor": {…}, "id": "5" } ,… ] } Here are the detectable identity fields: Please note that Address and MRZ Code are optional fields. For example, a US passport contains an MRZ but no address. Fraud detection Available in preview, the Identity Doc Fraud Detector helps detect tampering attempts. Typically, when an identity document does not "pass" the fraud detector, your automated process can block the attempt or trigger a human validation. Here is an example of signals returned: Sample demo You can process a document live with just a few lines of code. Here is a Python example: import google.cloud.documentai_v1 as docai def process_document( file: typing.BinaryIO, mime_type: str, project_id: str, location: str, processor_id: str, ) -> docai.Document: api_endpoint = {"api_endpoint": f"{location}-documentai.googleapis.com"} client = docai.DocumentProcessorServiceClient(client_options=api_endpoint) raw_document = docai.RawDocument(content=file.read(), mime_type=mime_type) name = client.processor_path(project_id, location, processor_id) request = docai.ProcessRequest( raw_document=raw_document, name=name, skip_human_review=True, ) response = client.process_document(request) return docai.Document(response.document) This function uses the Python client library: - The input is a file(any format supported by the processor). clientis an API wrapper (configured for processing to take place in the desired location). process_documentcalls the API processmethod, which returns results in seconds. - The output is a structured Document. You can collect the detected fields by parsing the document entities: def id_data_from_document(document: docai.Document) -> dict: id_data = defaultdict(dict) for entity in document.entities: key = entity.type_ page_index = 0 value = entity.mention_text confidence, normalized = None, None if entity.page_anchor: page_index = entity.page_anchor.page_refs[0].page if not value: # Send the detected portrait image instead image = crop_entity(document, entity) value = data_url_from_image(image) if entity.confidence != 0.0: confidence = int(entity.confidence * 100 + 0.5) if entity.normalized_value: normalized = entity.normalized_value.text id_data[key][page_index] = dict( value=value, confidence=confidence, normalized=normalized, ) return id_data Note: This function builds a mapping ready to be sent to a frontend. A similar function can be used for other specialized processors. Finalize your app: - Define your user experience and architecture - Implement your backend and its API - Implement your frontend with a mix of HTML + CSS + JS - Add a couple of features: file uploads, document samples, or webcam captures - That's it; you've built an identity form autofiller Here is a sample web app in action: Here is the processing of a French national ID, dropping images from the client: Note: For documents with multiple pages, you can use a PDF or TIFF container. In this example, the two uploaded PNG images are merged by the backend and processed as a TIFF file. And this is the processing of a US driver license, captured with a laptop 720p webcam: Notes: - Did you notice that the webcam capture is skewed and the detected portrait image straight? That's because Document AI automatically deskews the input at the page level. Documents can even be upside down. - Some fields (such as the dates) are returned with their normalized values. This can make storing and processing these values a lot easier – and less error-prone – for developers. The source code for this demo is available in our Document AI sample repository. More - Try Document AI in your browser - Document AI documentation - Document AI how-to guides - Sending a processing request - Full processor and detail list - Release notes - Codelab – Specialized processors with Document AI - Code – Document AI samples Stay tuned; the family of Document AI processors keeps growing and growing. Top comments (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/googlecloud/automate-identity-document-processing-with-document-ai-3h2p
CC-MAIN-2022-40
en
refinedweb
tag:blogger.com,1999:blog-107486142022-09-07T02:33:19.223-04:00FrazzledDadBleary-eyed ruminations of a work at home Father.Jim Holmes Complexity<div>So, so much of good system design is abstracting out complexity. So, so much of good testing is understanding where the complexity is and poking that with a flamethrower until you decypher as many interesting things about that complexity as you possibly can.</div><div><br /></div><div>Yes, that's a lot of badly mixed metaphors. Deal with it.</div><div><br /></div><div.</div><div><br /></div><div>Simple problem, familiar domain. Hours, rate, determine how much a worker gets paid before deductions. </div><div><br /></div><div>Right now we've finished standard time and have six total tests (three single XUnit [Fact] tests and one data-driven [Theory] with the same three test scenarios). </div><div><br /></div><div>The "system" code right now is this bit of glorious, beautiful stuff below. Please, be kind and remember the context is to show testers a bit about TDD and how code works. No, I wouldn't use int for actual payroll, m'kay?</div><div><br /></div><div> <span style="font-family: courier;"> public class PayrollCalculator {</span></div><div><span style="font-family: courier;"> public int ComputeHourlyWages(int hours, int rate)</span></div><div><span style="font-family: courier;"> {</span></div><div><span style="font-family: courier;"> return hours * rate;</span></div><div><span style="font-family: courier;"> }</span></div><div><span style="font-family: courier;"> }</span></div><div> </div><div!</div><div><br /></div><div>The intent of my tester's question was if we should make the system work like the snippet below--some hand-wavy psuedo code is inline.</div><div><br /></div><div><span style="font-family: courier;"> public class PayrollCalculator {</span></div><div><span style="font-family: courier;"><br /></span></div><div><span style="font-family: courier;"> public int ComputeOvertimeWages(int hours, int rate) {</span></div><div><span style="font-family: courier;"> //calculate ot wages</span></div><div><span style="font-family: courier;"> return otWages;</span></div><div><span style="font-family: courier;"> }</span></div><div><span style="font-family: courier;"><br /></span></div><div><span style="font-family: courier;"> public int ComputeStandardTimeWages (int hours, int rate) {</span></div><div><span style="font-family: courier;"> <span style="white-space: pre;"> </span> //calculate standard wages</span></div><div><span style="font-family: courier;"> return standardWages;</span></div><div><span style="font-family: courier;"> }</span></div><div><span style="font-family: courier;"> }</span></div><div> </div><div> </div><div>Splitting calls to compute separate parts of one overall action may seem to make sense initially, but it's far more risky and complex.</div><div><br /></div><div>What are we trying to do? We're trying to figure out what a worker gets paid for the week. That's the single outcome.</div><div><br /></div><div>Think about some of the complexities we might run in to if this was broken into two separate calls. Think of some of the risks that might be involved.</div><div><br /></div><div><ul style="text-align: left;"><li>Does the order of the calls matter? Do I need to figure standard hours first, then call the ComputeOvertimeWages method? What happens if I call overtime before standard?</li><li>Do I call overtime for only the hours above 40?</li><li>If the worker put in over 40 hours, do I call standard wages with just 40, or will the method drop extra hours and just figure using 40?</li><li>Does the code invoking these calls have to keep track of standard and overtime hours?</li><li>What happens if in the future we change the number of standard hours in the pay period?</li></ul></div><div><br /></div><div>As a system designer you're far better off abstracting all this away from the person calling your methods.</div><div><br /></div><div>One simple method, and you hide that complexity. Just give the person calling your API what they want: the actual wages for the worker.</div><div><br /></div><div>This same concept applies if you're dealing with complex workflows and state. Let's say you have a five step workflow for creating a new job bid, and you need several critical pieces of information at each step.</div><div><br /></div><div.</div><div><br /></div><div>One interaction, one nice result. Easier to write for your consumers, far easier to test, too.</div><div><br /></div><div>Someone years ago spoke of making it so your users could fall into the pit of success. Do more of that, and less of pushing them into the pit of despair. (I'm throwing out a Princess Bride reference, not one to Harry Harlow...)</div> Talk: You Got ThisLast week I was fortunate to have been the keynoter at <a href="" target="_blank">DevSpace Technical Conference</a> in Huntsville, Alabama. DevSpace's chairman Chris Gardner reached out to me some months ago and asked if I'd be willing to talk about how I've gotten through life since <a href="" target="_blank">the awful events of January 10th, 2017</a>.<br /> <br /> Below is a video of that talk. It's intense, emotional, and very likely a complete surprise to attendees who didn't already know of the tragedy that struck my family last year.<br /> <br /> As with <a href="" target="_blank">my KalamazooX conference talk last March</a>,.<br /> <br />. <br /> <br /> This talk is fairly different from the KalX talk above. Lots of overlap, but it's a different focus, because I was trying to point out to the audience that each and every one of us has the ability to weather horrible storms.<br /> <br /> You Got This.<br /> <br /> <iframe allow="autoplay; encrypted-media" allowfullscreen="" frameborder="0" height="543" src="" width="800"></iframe> New Technical and Leadership Blog for Me!I decided to move my technical and leadership postings over to a new blog on my <a href="" target="_blank">Guidepost Systems site</a>.<br /> <br /> I'm doing this in the hopes of continuing to shore up and flesh out my professional branding around Guidepost Systems.<br /> <br /> I will occasionally cross-link content here as a reminder. I'll continue to post notices on my <a href="" target="_blank">Twitter timeline</a> when things go live over at <a href="" target="_blank">my blog</a> there.<br /> <br /> Please go follow along at that new location. I look forward to comments and discussions on postings there!<br /> <br /> (I've already got a series going there on creating a technical debt payment Test CredoA few years ago I scribbled down some thoughts to myself as I was struggling with my brain and a frustrating project.<br /> <br /> I pinned these notes on a cubicle wall without thinking much as a reminder to myself. Never thought much else of it, simply because this was me reminding myself of things I needed reminding of.<br /> <br /> Friday a good pal who was on that same project hit me with a shot out of nowhere when he reminded me of this. I guess it had an impact on him as well.<br /> <br /> Frankly I’d forgotten about these. His comment was a good reason to go hunt this down.<br /> <table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody> <tr><td style="text-align: center;"><img alt="Jim's Testing Credo" height="225" src="" style="margin-left: auto; margin-right: auto;" title="Jim's Testing Credo" width="400" /></td></tr> <tr><td class="tr-caption" style="text-align: center;">Jim's Testing Credo</td></tr> </tbody></table> A Geek Leader Podcast<p>Somehow I forgot to post here that John Rouda was kind enough to invite me on his A Geek Leader podcast some time back.</p> <p>We talk about leadership, learning, adversity, and of course The Event from Jan 10th, 2017.</p> <p>John’s a wonderful, gracious host and we had a great conversation. You can find details <a href="">at John’s site< Rationalizing Bad Coding Practices<p>Rant. (Surprise.)</p> <p?</p> <p>Believe it or not, there are times I’m OK with this.</p> <p>I’m OK with the practices above <strong>if</strong>:</p> <ul> <li>Your business stakeholders and users are happy with the system in production</li> <li>Your rework rate for defects and missed requirements is near zero</li> <li>You have fewer than six to ten defects over several months</li> <li>You have near zero defects in production</li> <li>Your codebase is simple to maintain and add features to</li> <li>Static analysis of your codebase backs up the previous point with solid metrics meeting recognized industry standards for coupling, complexity, etc.</li> <li>Everyone on the team can work any part of the codebase</li> <li>New team members can pair up with an experienced member and be productive in days, not weeks</li> </ul> <p>If you meet the above criteria, then it’s OK to pass up on disciplined, PROVEN approaches to software delivery–because you are meeting the end game: high-value, maintainable software that’s solving problems.</p> <p>The thing is, very, VERY few people, teams, or organizations can answer all those questions affirmatively if they’re being remotely honest.</p> <p.</p> <p>The rest of the 99.865% of the software industry has decades of data proving how skipping careful work leads to failed projects and lousy care of our users.</p> <p.</p> <p>Do not rationalize your concious decisions to do poor work with “I’m more effective when I just…” No. No, you are not. You think you may be, but not unless you can answer the questions above “Yes!” with confidence and honesty.</p> <p>Stop rationalizing. Stop making excuses.</p> <p>Own. Your. Shit.</p> <p>And clean it WebDriver Components<h2> Understanding WebDriver Components</h2> <b>[UPDATE]</b> This post is based on a submission I made to the official WebDriver documentation in early Spring of 2018. It's meant to help folks understand how pieces and parts fit together for WebDriver. <b>[/]</b><br /> <br /> Building a test suite using WebDriver will require you to understand and effectively use a number of different components. As with everything in software, different people use different terms for the same idea. Below is a breakdown of how terms are used in this description. <br /> <h3> Terminology</h3> <ul> <li><b>API:</b> Application Programming Interface. This is the set of "commands" you use to manipulate WebDriver. </li> <li><b>Library:</b> A code module which contains the APIs and the code necessary to implement them. Libraries are specific to each language binding, eg .jar files for Java, .dll files for .NET, etc. </li> <li><b>Driver:</b> Responsible for controlling the actual browser. Most drivers are created by the browser vendors themselves. Drivers are generally executable modules that run on the system with the browser itself, not on the system executing the test suite. (Although those may be the same system.) <i>NOTE: Some people refer to the drivers as proxies.</i> </li> <li><b>Framework:</b>. </li> </ul> <h3> The Parts and Pieces</h3> At its minimum, WebDriver talks to a browser through a driver. Communication is two way: WebDriver passes commands to the browser through the driver, and receives information back via the same route. <br /> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-</a></div> <br />.<br /> <br /> This simple example above is <i>direct</i> communication. Communication to the browser may also be <i>remote</i> communication through Selenium Server or RemoteWebDriver. RemoteWebDriver runs on the same system as the driver and the browser. <br /> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-</a></div> <br /> Remote communication can also take place using Selenium Server or Selenium Grid, both of which in turn talk to the driver on the host system <br /> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-</a></div> <br /> <h3> Where Frameworks Fit In</h3> Web. <br /> This is where various frameworks come in to play. At a minimum you'll need a test framework that matches the language bindings, eg NUnit for .NET, JUnit for Java, RSpec for Ruby, etc.<br /> <br /> The test framework is responsible for running and executing your WebDriver and related steps in your tests. As such, you can think of it looking akin to the following image. <br /> <div class="separator" style="clear: both; text-align: center;"> <a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-</a></div> <br /> The test framework is also what provides you asserts, comparisons, checks, or whatever that framework's vernacular for the actual test you're performing, eg<br /> <br /> <blockquote class="tr_bq"> AssertAreEqual(orderTotalAmount, "$42");</blockquote> <br /> Natural language frameworks/tools such as Cucumber may exist as part of that Test Framework box in the figure above, or they may wrap the Test Framework entirely in their own implementation.<br /> <br /> Natural language frameworks enable the team to write tests in plain English that help ensure clarity of <i>why</i> you are building something and <i>what</i> it is supposed to do, versus the very granular <i>how</i> of a good unit test.<br /> <br /> If you're not familiar with specifications, Gherkin, Cucumber, BDD, ATDD, or whatever other soup-of-the-day acronym/phrase the world has come up with, then I encourage you to go find a copy of <a href="" target="_blank">Specifications By Example</a>. It's a wonderful place to start. You should follow that up with <a href="" target="_blank">50 Quick Ideas to Improve Your User Stories</a>, and <a href="" target="_blank">50 Quick Ideas to Improve Your Tests</a>, both by Gojko Adzjic.<br /> <h3> Following Up</h3> Don't stop here. Go learn more about how WebDriver works. Read the <a href="" target="_blank">WebDriver documentation</a>. Sign up for Dave Haeffner's awesome <a href="" target="_blank">Elemental Selenium</a> newsletter and read his past articles.<br /> <br /> Join the <a href="" target="_blank">Slack Channel</a> and ask questions. (But please, do yourself and the Selenium community a favor and first do a little research so you're asking questions in a fashion that can help others best respond!) ThatConference.<br /> <br /> Lots there on moving testing conversations to the left. Lots there about testing as an activity.<br /> <br /> Thank you if you attended the session. I had some really good questions, folks were patient with my bad jokes, and there were some really good conversations after the talk.<br /> <br />%.<br /> <br /> Thank you.<br /> <script async="" class="speakerdeck-embed" data-</a><br /> <br />.<br /> <br /> <a href="" target="_blank"></a> <script async</a>) and is full of conversations and exercises meant to help attendees figure out <em>if</em> they want to become leaders, and what they need to learn about themselves in order to be successful as they grow. It’s also full of my bad jokes, but what else would you expect?<br /> <br /> Slides for the workshop are on SpeakerDeck at <a href="" target="_blank"></a> <script asyncthe intro article with links to others here</a>.] <br /> Titanfall 2 is a really fun game, even though the multiplayer aspect is not a type of game I do well at or even search out to play in other games. [<i>Ed.: Dude, you have 28 <b>days</b> of total gameplay and you just got Gen 50. WTF? Seriously?</i>]<br /> A few closing thoughts for this series:<br /> <ul> <li><b>Find Game Modes That Work For You.< <i>for you</i>.</li> <li><b>Figure Out Your Goals. Or If You Even Care.</b> <i>have</i> to have goals. That’s just fine too.</li> <li><b>Get a Mic. Chat With Your Team.</b> <b>exactly</b> why you ended with five kills and few points. Jerkface.</li> </ul> The vast majority of folks with mics tend to be good teammates. A very few even know how to communicate well to <i>help</i> the team, especially when you’re playing Frontier Defense.<br /> <ul> <li><b>Learn Effective Communication.</b> .”</li> </ul> Of course, there’s my always helpful running “useful” commentary: “Well, shit. That didn’t work so well.” Or “Damnit, Funky Chicken killed my ass again because I was stupid and ran in front of him.”<br /> Don’t be me. Be better than me…<br /> <ul> <li><b>Have Fun.</b>.</li> </ul> <h2 id="inclosing"> In Closing</h2> I.<br /> Look me up some time if you’re interested. My GamerTag is FrazzledDad and I’m online 9pm-ish in the Pacific timezone.<br /> In the meantime, go have some fun.: Movement and Shooting<p>[NOTE: One in a series of posts on my Titanfall 2 experience. Find <a href="" target="_blank">the intro article with links to others here</a>.] </p> <p.)</p> <h2 id="speedisyourfriend">Speed Is Your Friend</h2> <p>ProTip from Captain Obvious: The faster you’re moving, the harder it is to get shot. Duh.</p> <p>Spend time in the Gauntlet learning to move quickly, and learning how to string together moves that add to your speed: wall runs, leaps, grapple, slides, all the neat things that really make moving as a pilot so fun.</p> <p>Learning the maps well will help you out greatly with your movement, simply by knowing “Oh, yeah, I can bounce along this route right here.”</p> <p.</p> <p>Speaking of the grapple…</p> <h2 id="ilovemygrapple">I Love My Grapple</h2> <p>The various pilot tactical mods are all neat, but I have used the Grapple exclusively for many months. The Grapple lets me get to higher spots for better firing positions.</p> <p.</p> <p.</p> !</p> <p>Like everything else, the Grapple take some practice to get proficient with it. It’s freaking awesome once you’re good.</p> <p>As I’ve repeatedly said in this series, this is specific to my style of play. I’m happy for you if there are other tacticals you prefer. Honest.</p> <h2 id="changingdirectionviaslides">Changing Direction via Slides</h2> <p.</p> <h2 id="sightlocationwhilemoving">Sight Location While Moving</h2> <p>Pay attention to where you’re keeping your hip-fire sights while moving. For the longest time I’d run around with my ADS reticule down below the horizon. No clue why, it’s just how I rolled. </p> <p.</p> <h2 id="movingsidewaysorkeepyoursightonthreats">Moving Sideways, Or Keep Your Sight on Threats</h2> <p.</p> <h2 id="getfasteratgettingyoursightontarget">Get Faster at Getting Your Sight on Target</h2> <p>Getting your sight on target faster means you’ve got better odds at killing the enemy before they kill you. Hello, thanks Captain Obvious.</p> <p>One part of this is the Gun Ready mod which gets you into ADS quicker. The other part is getting better at getting your sights <strong><em>on</em></strong> the target. That comes through practice, either deliberate practice or in the game.</p> <p.</p> .</p> <p>I spent a lot of time doing things like that simple movement from various directions at various target ranges (near, mid, far).</p> <p>I’m not great, but it paid off.</p> <h2 id="learntoshootfromthehip">Learn to Shoot From the Hip</h2> <p>Firing from the hip saves you time transitioning to sights. It also leaves you a wider view versus the constrained one you get in ADS. Hip fire is especially good against opponent minions who don’t move and dodge very effectively.</p> <p>Don’t focus on improving just your ADS firing; spend time on hip fire too.<: Tactics<p>[NOTE: One in a series of posts on my Titanfall 2 experience. Find <a href="" target="_blank">the intro article with links to others here</a>.] </p> <p>Oi. Where to start?</p> <p.” </p> <p>Thanks a Ton. Next in that series: “Five Ways to Make Friends, Starting With Not Picking Your Butt in Public.”</p> <p>Here are a collection of odds and ends I’ve picked up. It’s stuff that lots of accomplished FPS players will be saying “Well, duh!” to, but hopefully some readers (all three of you) may find useful.</p> <p>This is general tactics—there’s a whole separate post on movement and shooting. Yes, there’s some overlap. Deal with it.</p> <h2 id="learnthemaps">Learn The Maps</h2> <p>Know the map. I can’t emphasize how important this is. It took me far longer to figure out just how critical this is for <strong>any</strong> FPS game. Knowing the map inside and out gives you many critical advantages. </p> <p.</p> <p>Some things to look for as you’re learning the maps:</p> <ul> <li><p>Find good shooting spots </p></li> <li><p>Find good shooting spots that help hide you</p></li> <li><p>Find good shooting spots that help hide you with good cover that protects you from fire from at least one or more angles. (Think of hiding with a wall to your side or mechanical structures on roofs behind you.)</p></li> <li><p>Find good fire lanes—areas that offer good cover for you and lots of visibility to see opponents. Think of the main street under the monorail on Eden; the main corridors on Rise; much of the open spaces on Homestead</p></li> </ul> <p.”</p> <h2 id="cover">Cover</h2> <p>It took me way, <strong><em>WAY</em></strong> too long to get better at using cover. </p> <p>If you’re moving, do so along paths that block you from fire from one or more directions. Wall running is great for several reasons. First, you’re moving fast. Secondly, you’re harder to hit. Third, nobody can shoot you anywhere from the other side of the wall.</p> <p <em>not</em> covered from.</p> <p>Keep an eye on your minimap. Keep cover in mind when you see threat indicators on the map. Keep something between you and those threat directions until you’re ready to have a look or attack out in that direction.</p> <h2 id="avoidfirelanes">Avoid Fire Lanes</h2> <p.</p> <h2 id="reloadconstantly">Reload Constantly</h2> <p>As Master Sergeant Brianna Fallon eloquently put it to her squad in <em>Chains of Command</em>, “If one of you sons of bitches gets killed for lack of shooting back because you ran out of ammo, I will personally violate your carcass.”</p> <p>You do not want to die because your mag had one round in it when you come face to face with an opponent who has you in their sights.</p> <p>Regardless if I’m in a Titan or on foot as a Pilot, I reload <em>constantly</em>. I’ll take advantage of displacing movements to reload, ducking behind cover, etc. I don’t wait for my mag to empty and auto-reload. Instead I want to make sure I’m heading to the next engagement with a full mag.</p> <p.</p> <p>Reload. Reload all. The. Freaking. Time.</p> <h2 id="firedisplacerepeat">Fire, Displace, Repeat</h2> <p>You know what I love? I love opponents who fall in love with a clever spot and hang out there firing away, giving me a chance to work around to get shots at them.</p> <p. </p> <p.”</p> <p>That’s a tactical choice, and it’s not necessarily a bad one. Just make those choices with some smarts instead of “HOLY COW IMA TOTALLY HAVING A GREAT TIME HERE OH CRAP I DIED.”</p> <h2 id="watchyourflanks.nobodyelsewill">Watch Your Flanks. Nobody Else Will</h2> <p>Very, very few teams work well together. Frankly, few teams even work modestly well together. Nearly everyone runs off in search of their own glory while ignoring that paying attention to what’s going on around them might help them and the rest of the team.</p> <p.</p> <p>Keep a weather eye on your flanks. Because it’s rare that others will.</p> <h2 id="avoidrodeoingtitansfromthefront">Avoid Rodeoing Titans From the Front</h2> <p.</p> <p>If you <strong>do</strong> from the front, use whatever ordinance you have to try and distract the Titan. This is partially why I like Firestars—you can blind a Titan and either run or rodeo with much better chance of success. Be careful, though, because you can kill <em>yourself</em> with your own Firestar or electric smoke ordinance. Ask me how I know…</p> <p>If you’re grappling from the front of a Titan, do <strong>not</strong> fly towards the titan in a straight line. Use your controller to fly up high, then loop down and mount the Titan. This will keep you out of melee range. Same thing works flying to one side or another. Point being, don’t fly in straight to the Titan.</p> <h2 id="oddsandends">Odds and Ends</h2> <p><strong>Choose Your Colors Carefully:</strong>.</p> <p><strong>Avoid Drop Ship Door Fire Lanes:</strong>.</p> <p><strong>Don’t Waste Ammo on Pilots in the Drop Ship:</strong> You can’t kill pilots in the drop ship, just the ship itself. <: Weapons, Boosts, Kits, and Ordinance<p>[NOTE: One in a series of posts on my Titanfall 2 experience. Find <a href="" target="_blank">the intro article with links to others here</a>.] </p> <p>Over the year I’ve played I’ve settled into a comfy groove with my equipment. Here’s a few thoughts on my fave and not-so-fave items.</p> <h2 id="weapons">Weapons</h2> <p).</p> <p <strong><em>for my style of play</em></strong>.</p> <p><strong>Hemlock:</strong> My go-to weapon. It’s great in hip fire, and I can snipe at very long range. The burst mode gives me several rounds accurately on target, and I can knock out pilots with two bursts. Normally. When I’m playing well.</p> <p><strong>Flatline:</strong> Second favorite weapon. Great impact, solid accuracy both from hip fire and ADS, even at longer range. Moderate recoil is controllable for me when I’m shooting past mid-range.</p> <p><strong>Devotion:</strong> My go-to weapon for Frontier Defense due to its huge ammo capacity, especially when modded up with extra ammo. Beautiful at close range with hip aiming, solid at mid-range ADS. Squirrely at long range, but hey, it’s an LMG.</p> <p><strong>Kraber:</strong>.</p> <blockquote> <p>Note: This gun is <strong><em>killer awesome</em></strong>.</p> </blockquote> <p><strong>G2:</strong>.</p> <blockquote> <p.</p> </blockquote> <p><strong>Cold War:</strong> I love, LOVE, <strong>LOVE</strong> this weapon for Bounty Hunt. See my notes about it in the Game Modes post.</p> <p><strong>CAR:</strong> I like this gun just fine. Good accuracy, nice damage. I don’t play it much because there’s other weapons I prefer.</p> <p><strong>R–97:</strong> I don’t play well at closer ranges, so I tend not to use this much. But it sounds wicked cool when it fires. Same reason I love the Vector in Call of Duty Squads. So I do run it every once in awhile for fun. Because why not? </p> <p><strong>Shotguns:</strong> No. Just. No. Like I’ve said, me no likey close range combat. I got all my shotguns to Gen 2 just to prove I could, then stopped playing them.</p> <p><strong>DMR:</strong>.</p> <p><strong>Alternator:</strong> Beloved weapon of the crazed crackhead monkey wall-running space flying stim-boosted kids who kill me all the time. I just don’t play it well. I hit Gen 2 with it and put it to rest in the same grave with the DMR.</p> <p><strong>Others:< <strong><em>for my style of play.</em></strong></p> <h3 id="anti-titanweapons">Anti-Titan Weapons</h3> <p!</p> <p>The other AT weapons aren’t bad, and I will run the Archer occasionally. The Thunderbolt is just what I prefer. I think I’m up to Gen60 with mine.</p> <h3 id="athoughtortwoonsights">A Thought or Two on Sights</h3> <p>Find a sight that works well for your style of play. I love the HCOG and have been using it exclusively for months. It takes away field of vision when you’re ADS, but it works well at all ranges for me.</p> .</p> <p>I also used this when leveling up my DMR because I hate that gun and had trouble with it. Now I don’t worry about it any more. See comments about unmarked grave in section above…</p> <h2 id="ordinance">Ordinance</h2> <p>I’ve run the Firestar solely for months. I love its area of effect, I love that I can damage and blind Titans with it, and I love its range.</p> <p>The other ordinance options are all solid, and I’ve played them a fair amount. Gravity Star is just plain wicked fun, but it doesn’t do a damned thing against Titans, and it barely knocks dust off Reapers.</p> <p>So I stick with the Firestar.</p> <h2 id="pilotkits">Pilot Kits</h2> <p>I’ve come to the place where I use Phase Embark and Titan Hunter exclusively. Phase Embark’s speed of getting into my Titan can be crucial if I’m hurt, or if my Titan’s engaged. Titan Hunter helps me get my Titan faster. Yay, Titans!</p> <p>Ordinance Expert is nice because it shows you the arc of where your ordinance will hit. This is a <strong>great</strong> training aid as you’re learning. I moved off it once I got moderately comfy understanding the arc my ordinance would travel.</p> <p>Fast Regen is also good, especially when you’re like me and tend to spend time in Leeroy Jenkins mode running into battles wiser folks might not.</p> <p>All the other kits, for my style, are boring or unhelpful.</p> <h2 id="boosts">Boosts</h2> <p>I’ve run the Pilot Sentry as my main boost for a long, long time. It helps me lock down hardpoints, control fire lanes, kill off bounty Remnant forces, and generally annoy the hell out of opposing pilots.</p> <p>The Titan Sentry is good as well, but doesn’t seem to do as well for me.</p> <p>All the other boosts are fine, although I am damned proud to say I have not once used the Smart Pistol. Not. Once. I lived on the Smart Pistol in TF1, but I’m happy with how well I’ve progressed in my gun skills in TF2.<: The Titans<br /> [NOTE: One in a series of posts on my Titanfall 2 experience. Find <a href="" target="_blank">the intro article with links to others here</a>.] <br /> <br /> Here’s some thoughts on things relating to Titans.<br /> <br /> <h2 id="abitonsometitankits"> A Bit on Some Titan Kits</h2> <br /> <b>Warpfall Transmitter:</b> I use this exclusively. Sure, Dome Shield is nice, but I have crushed a crapload of Titans, pilots, and enemy units via the fast fall feature.<br /> <br /> <b>Assault Chip:</b>.<br /> <br /> <b>Stealth Auto-Eject:</b>.)<br /> <h2 id="thetitans"> The Titans</h2> <br /> <b>Monarch:</b>.<br /> <br /> If I’m playing Attrition or similar I’ll use Overcore, Energy Thief, Arc Rounds, Fast Rearm, and Chassis. For Frontier Defense I use Nuke Eject, Energy Thief, Energy Transfer, Maelstrom, and Accelerator.<br /> <br /> <b>Legion:</b> Big, slow lard ass with a gatling gun. I love it. Great gun which works <i>really</i>…<br /> <br /> <b>Northstar:</b>.<br /> <br /> <b>Scorch:</b> Flame on! This is my go-to Titan when the opposing team has someone dashing around being a jackass in a Ronin. The Scorch’s flame shield <b><i>wrecks</i></b> Ronins in a hurry. Also does Reapers in quite nicely. If you’re playing Frontier Defense on Drydock, make <b>sure</b>.<br /> <br /> <b>Ion:</b> Not my favorite Titan, as I have lots of trouble trying to balance energy use. Effectively for me this means I’m rarely able to use the Laser Shot.<br /> <br /> <br /> <figure><br /><img alt="Frickin Lasers" src="" title="Frickin quot;Lasersquot;" /><br /><figcaption>Frickin “Lasers”</figcaption></figure><br /> Standard load out: Turbo Engine, Zero-Point Tripwire. For Frontier Defense Nuke Eject, Refraction Lens.<br /> <br /> <b>Tone:</b>.<br /> <br /> <b>Ronin:</b> I hate this Titan <b><i>with a passion</i></b>..<br /> <br />.<br /> <br /> God, I hate the Ronin. I hate it so much that I’d be lost in indecision if given the choice between kicking Paul Krugman or the designers of the Ronin in the goolies.<br /> <br /> Unfortunately, the Titanfall folks haven’t nerfed the Ronin by this point means they’re likely not going to.<br /> <br /> I don’t play Ronin any more. When I did general load out was Turbo Engine and Thunderstorm.<br /> <br /> <br /> : Frontier Defense[<b>NOTE:</b> One in a series of posts on my Titanfall 2 experience. Find <a href="" target="_blank">the intro article with links to others here</a>.] <br /> <br /> Frontier Defense (FD) is one of my favorite modes, if not outright my most favorite. I like it because it reinforces good teamwork, something the other modes absolutely do <b>not<.<br /> <h2 id="points-for-leveling-up"> Points for Leveling Up</h2> Make sure you understand the post on Maximizing Points. All that applies to Frontier Defense Mode.<br />!<br /> <h2 id="aegis-upgrades"> AEGIS Upgrades</h2> FD gives your titans a new bunch of level-up abilities. Each titan gets a unique set of mods that run from chassis and shield boosts to additional glorious OMG lethal blow stuff up way more better things.<br />.<br /> AEGIS upgrades are earned by a separate XP track. It’s similar to your pilot’s XP track and you can supplement with an extra XP by purchasing Titan skins from the store—that point will also share across your entire team.<br /> <h2 id="unique-titan-mix"> Unique Titan Mix</h2> Having four different titans on the team garners an extra AEGIS XP for the entire team. I’ll try to fit in whatever titan makes sense for the team, although I try to start with one of my favorites (Scorch, Legion, Monarch, Northstar).<br /> <h2 id="how-i-roll-for-frontier-defense"> How I Roll for Frontier Defense</h2> <b>Pilot Weapons:</b>.<br /> I rotate through sidearms, so there’s no one favorite.<br /> My preferred anti-titan weapon is the Thunderbolt since it’s an area of effect weapon that I can shoot in the general direction of a number of enemies.<br /> <b>NOTE:</b> Anti-Titan weapons in FD mode have <i>unlimited</i> ammo, another reason I love the Thunderbolt for this mode.<br /> <b>Titans:</b> I generally play Northstar, Scorch, or Legion because they’re great at dishing out damage—especially after you get them well up in the AEGIS levels.<br /> <i>valuable</i>.<br /> For Titan Kits I normally use Nuke Eject regardless of Titan type. Because if I’m gonna go, I’m gonna take a bunch of those asshat enemies with me.<br /> <i>Legion:</i> Hidden compartment. Because 2x power shots are great.<br /> <i>Scorch:</i> Wildfire Launcher. Makes total sense when you’re getting multiple thermite shots.<br /> <i>Northstar:</i> Enhanced Payload. More damage from cluster missiles? Take my money. I occasionally use Piercing Round, but frankly I’m not sure of its effectiveness.<br /> <i>Monarch:</i> Energy Thief. Even if getting a battery wasn’t such a win I’d likely keep this just because the execution is freaking awesome.<br /> <i>Ion:</i> Refraction Lens. This totally wrecks Reapers. Yes, lots of damage on other things, but I’ve noticed it the most with how fast I’m able to kill Reapers. And I hate those rat bastards.<br /> <h2 id="using-the-armory"> Using The Armory</h2> When I first started playing I spent every last cent on Nuke Rodeo bombs. I’ve blown up a <b>lot</b>.<br /> Generally I only buy turrets when playing Homestead, Rise, or Exoplanet. The large number of Plasma Drones <i>require</i> several turrets for the team. The other maps just don’t seem to make sense for turrets, or at least I haven’t found great spots to place them.<br /> <h2 id="a-few-thoughts-on-a-few-maps"> A Few Thoughts on a Few Maps</h2> There are other posts elsewhere on The Internets that break down things about the various maps. Below are a few specific things I’ve found on particular maps.<br /> <b>Angel City:</b>.<br />.<br /> This is one of the maps I rarely buy turrets for. I just haven’t found any good spot where I can get more than a few kills. A turret is nearly the same cost as two arc traps, so for me it’s just not good money spent.<br /> <b>Rise:</b> The first wave, regardless of difficulty, starts with a lone titan at the back of the map. Grapple and wall run down the corridors to go steal a battery.<br /> Arc mines are great at the main junction, the far back spawn point, and the low corridor to the right. That corridor is a serious choke point and is the <i>prime</i> spot to hang out in later waves.<br />.<br /> That same zone is also great for Scorch’s ability to stack thermite, flame wall, incendiary traps, and flame core.<br /> <b>Homestead:</b> If possible, grab a Scorch. The metric crapton of plasma drones flow on either side of the large round tower in the middle of the map. Camp out on either side and use the flame shield to destroy swaths of those nasty little bastards.<br /> I like placing one turret at the trees on the left of the small rise just in front of the harvester. I’ll regularly get 60 turret kills from this one alone. Do <i><b>NOT</b></i>.<br /> This map is one where I’ll definitely buy a few Nuke Bombs later in the game because enemy titans will cluster in midfield on the far side of the central tower.<br /> <b>Forward Base Kodai:</b> I love how the game designers included smoke. Seriously. What an awesome tactical mess to have to work around. It’s a modest thing that makes play way more interesting.<br />.<br />.<br /> Northstar with its traps is a great titan here, as you can really slow down the rush of Titans in later waves. Plus the cluster missiles do a great job with all the stalkers.<br /> <b>Blackwater Canal:</b> Load up on arc traps for the canyon at the front of the map. Scatter a few to the route left of the harvester too. I don’t bother with arc traps up top because it’s easy to defend and hold the line there.<br />.<br /> <h2 id="notes-on-scores"> Notes on Scores</h2> Getting.<br /> <img alt="9K in Frontier Defense" src="" title="9K in Frontier Defense" /><br /> Keep your eye on the prize if your main focus is leveling up. Getting MVP is cool, but it doesn’t directly level you up faster. Even if you get MVP all five rounds…<br /> <img alt="MPV All Five Waves" src="" title="MPV All Five Waves" />: Game Modes[<b>NOTE:</b> One in a series of posts on my Titanfall 2 experience. Find <a href="" target="_blank">the intro article with links to others here</a>.]<br /> <br /> Titanfall 2 has a bunch of great different game modes. Some focus on Titan combat, some on pilot combat, some are mixed.<br />.<br />.<br /> <h2 id="learn-how-to-win-each-game-mode"> Learn How To Win Each Game Mode</h2> In most cases, for the mixtape I run, simply killing lots of pilots won’t win you every game. Several game modes require you to do other things to help your team win.<br />.<br /> <img alt="Scored Lots of Points, Lost Because Team Was Killing Pilots Instead of Getting Points" height="360" src="" title="Scored Lots of Points, Lost Because Team Was Killing Pilots Instead of Getting Points" width="640" /><br /> Focus on winning. That means understanding the requirements and scoring for the game. At least make an effort to help your team win.<br /> <h2 id="pilot-kits-and-ordinance"> Pilot Kits and Ordinance</h2> I like the Firestar because it’s persistent and an area of effect weapon. It also blinds Titans and does good damage on them. I’ve gotten several “from the dead” kills from flame damage on a Titan that did me in.<br /> Titan Hunter kit works for me because it helps me get a titan faster.<br /> I use Phase Embark because it lets me get into the shelter of my Titan as quickly as possible. He who runs away lives to run away another day.<br /> Hover? Useless for me. Why do I want to float over the battlefield where people can shoot my ass out of the sky?<br /> Stealth Kit seems to work great for others, but I still get killed regularly by electric smoke when I’m rodeoing with it, so I gave up on it.<br />.<br /> <h2 id="thoughts-on-specific-game-modes"> Thoughts on Specific Game Modes</h2> Here are a few things that work for me in various game modes.<br /> <h3 id="attrition"> Attrition</h3> Goal: Kill as many opponents as you can. Titans are ten points, pilots five. Remember that minion kills can get you serious points, especially if you take out Reapers. As of this writing they’re three points each (they used to be <strong>five!</strong>), which means they’re good for overall points. They also can kick your ass all over the place if you’re not careful while you’re trying to blow them to smithereens.<br /> Pilot Weapons: I like mid-range weapons like the Hemlock, G2, and Flatline. Again, this suits my style of play. A couple maps like Blackwater and Homestead are great for sniping with the Kraber, too. (I <em><strong>hate</strong></em> the DMR and don’t play well with it.)<br />.<br /> <h3 id="bounty-hunt"> Bounty Hunt</h3> Goal: Kill Remnant forces for their bounty. Cash bounty in at banks in between rounds. Kill opposing pilots to piss them off and steal half their bounty collection.<br /> Tactics: Focus on Remnant forces, kill pilots when they’re around. Remember your points to victory come from bounty, <em>not</em>…<br />.<br /> Boost: Pilot sentry is awesome. I regularly get it 30 seconds into the first round. Drop it in a good spot, then hide from other pilots while blasting Remnants.<br /> Titans: Legion with the extra ammo kit and Overcore kit, because you want Smartcore ASAP. It’s a thing of beauty for laying waste to the Remnants and pilots who are swanning about.<br /> <h3 id="amped-hardpoint"> Amped Hardpoint</h3> Goal: Hold the hardpoints and amp them. Prevent the opposing force from doing the same. You do not get points for killing the opponents!<br />.<br /> Keep an eye on the hardpoint status indicators and overall points. Try to keep your own hardpoints amped and the enemy’s unmapped.<br /> Pilot Weapons: My standard is Hemlock (my favorite gun), Flatline, occasionally G2. Sometimes I’ll play the R97 because it sounds cool and it’s good for the close range defense.<br />).<br /> <h3 id="last-titan-standing"> Last Titan Standing</h3> Goal: Destroy all enemy titans to win the round. No respawning. Pilots outside their Titans are nuisances, but you don’t get points for killing them. Focus on the titans!<br />.<br /> If your Titan is blown up, <strong>PAY ATTENTION!</strong> Your job is <em><strong>not</strong></em> to hide and live. Your job is to grab batteries for your team’s remaining titans, damage the enemy titans, and prevent dismounted enemy pilots from harming your titans. Damaging the enemy titans is <em>critical!</em>.<br /> Pilot Weapons: Focus on Titan damage. Great anti-Titan weapon (Thunderbolt is my preferred one) and a good grenade launcher like the EPG or Cold War.<br />.<br /> A Note On Titan Kits: Take some care with your Titan Kit options for this mode. It makes <em><strong>zero sense</strong></em> to use Assault Chip or Stealth Auto Eject kits for this mode. Zero. Sense. Nuke eject is close behind for poor value, in my experience. Assault and Stealth Eject bring nothing to the table. Use Overcore or Dash. Counter Ready with its 2x smoke <em>may</em> be beneficial if it matches your play style.<br /> <h3 id="titan-brawl"> Titan Brawl</h3> Likely my most favorite game mode. It’s just insane fun. I once got 14 kills with zero deaths running a Monarch. Screenshot below because, well, I don’t brag often but this deserves a bit of braggery.<br /> <img alt="14 Kills, Zero Deaths" height="360" src="" title="14 Kills, Zero Deaths" width="640" /><br /> Goals: Kill as many Titans as you can before the match ends. Constant respawns, no dismounts, no ejections.<br /> Tactics: Stick with your homies. Watch out for flanking enemies. ABS: Always Be Shooting. Rack up damage on the opponents, even if you’re not going to kill one. Somebody else will.<br /> Pilot Weapons: N/A for this mode.<br />…<br /> A Note On Titan Kits: Take some care with your Titan Kit options for this mode. It makes <em><strong>zero sense</strong></em> to use Assault Chip, Nuke Eject, or Stealth Auto Eject kits for this mode. Zero. Sense. You can’t actually use any of those kits in this mode. So Just. Don’t. Use Overcore or Dash. Counter Ready with its 2x smoke <em>may</em> be beneficial if it matches your play style.: Maximizing Points[<b>NOTE:</b> One in a series of posts on my Titanfall 2 experience. Find <a href="" target="_blank">the intro article with links to others here</a>.]<br /> Leveling up requires Experience Points (XP). You get XP for your performance in a match. Points come from winning a match, meeting your performance minimums, completing a match, leveling up titans or weapons or your faction, happy hour, and elite weapon/titan bonuses.<br />.<br /> <img alt="Doubled the score of the rest of my team, got the same XP" height="360" src="" title="Doubled the score of the rest of my team, got the same XP" width="640" /><br /> So here’s the thing: focus on meeting your minimums. Focus on helping your team win, or making the evac shuttle if you should lose. Focus on knowing what weapons and titans are near leveling up.<br /> <h2 id="match-minimums"> Match Minimums</h2> Each.<br /> You’ll find the minimums on the menu accessed from the Start/Menu button. Make sure you know what your minimums when you start each match! Check regularly as the match progresses to make sure you’re going to meet them.<br /> <h2 id="leveling-up-titans-and-weapons"> Leveling Up Titans and Weapons</h2> Let:<br />!<br /> Also, don’t forget your sidearms and anti-titan weapons!<br /> <h2 id="did-you-lose-make-the-evac-ship"> Did You Lose? MAKE THE EVAC SHIP!</h2> Win or lose, you get zero points for living through the epilogue. Zero.<br />.<br /> If you care about leveling up then you need to embrace your inner Dutch Schaefer and “Get to da choppa!”<br /> If there are titans near the ship, your best bet is to try distracting them on the way to getting into the ship. Use Firestars (my personal favorite) to disrupt their vision. Fire your anti-titan weapon as fast as you can.<br />.<br /> If you die on the way to the ship, so what? If you make the evac ship and it's blown up, so what? You lose absolutely nothing.<br /> Shoot for the opportunity to make an extra point. GET TO DA CHOPPA!<br /> <h2 id="elite-squad-leader-points"> Elite Squad Leader Points</h2> You’ll get an extra XP if you bought one of the elite weapons from the store. You share that point with your teammates, which is kind of neat—the team gets a max of one Elite Squad Leader point per match.<br /> <h2 id="happy-hour-points"> Happy Hour Points</h2> Your network has a set happy hour. You’ll get an extra five points playing during this time. That’s awesome! If possible, try and save a Double XP ticket to use when you’re playing Happy Hour games. Ten points versus five points. Epic Win.<br />.<br /> <h2 id="double-xp"> Double XP</h2> Double XP are awesome. They, like, double the points you get!<br />.<br />.)<br />.<br /> <h2 id="grinding-it-out"> Grinding It Out</h2> Regener <i><b>possible</b></i> points:<br /> <b>Base Points</b><br /> <ul> <li>Match Completion = 1</li> <li>Good Performance = 1</li> <li>Match Victory / Successful Evac = 1<br /> <b>Potential Points</b></li> <li>(possible) Elite Squad Leader = 1</li> <li>(possible) Level Up Weapons = 1 per level</li> <li>(possible) Level Up Titans = 1 per level</li> <li>(possible) Level Up Faction = 1<br /> <b>Other</b></li> <li>Happy Hour (once per day) = 5</li> </ul> Ergo, on a basic match were you met minimums, and won or escaped you’re looking at three points. Throw in one or two XP for weapons level ups. I’ll semi-arbitrarily use an average of six points per match based on faction levels up and matches where you do an awesome job and level up a couple weapons/titans.<br />.<br /> No matter what way you cut it, it’s a grind.<br /> <h2 id="what-game-modes-give-the-best-xp-for-leveling-up"> What Game Modes Give The Best XP For Leveling Up</h2> Remembering.<br />?<br /> <table border="1"> <thead> <tr> <th>Mode</th> <th>Avg Points Per Match</th> <th>Avg Match Time</th> <th>Points Per Minute</th> <th>Points With 2x</th> <th>2x PPM</th> <th>2x + Happy Hour</th> <th>PPM</th> </tr> </thead> <tbody> <tr> <td>Attrition</td> <td>6</td> <td>12</td> <td>0.5</td> <td>12</td> <td>1</td> <td>22</td> <td>1.8</td> </tr> <tr> <td>Frontier Defense</td> <td>13</td> <td>35</td> <td>0.37</td> <td>26</td> <td>0.74</td> <td>36</td> <td>1.02</td> </tr> </tbody> </table> <h2 id="keep-your-eye-on-the-prize-xp"> Keep Your Eye On The Prize: XP</h2> Go into your matches with a plan for how you’re going to try and earn XP. Think about what weapons and titans you might swap between. Don’t lose sight that the game is FUN, but still think about how you can play to maximize what you earn.: IntroI’ve been playing Titanfall 2 around a year now. It’s been my go-to brainless activity when I need distraction from the rotten places life has been this last year. At the time of this writing I’m at Generation 48 working my way up to Gen 50. (Titanfall “Regeneration” is the same concept as Call of Duty’s “Prestige Up.”) I’ve played over 3,000 games, been top three around 2100 of those, and MVP 950-ish times. I’ve just passed 15,000 kills (other players) and am near having earned 30,000 credits “net worth.”<br /> <img alt="Overview stats for Titanfall 2" height="225" src="" title="My Overview Stats" width="400" /><br />.<br /> <h2 id="me-and-titanfall"> Me and Titanfall</h2> I have a long love/hate relationship with Titanfall 1 and 2. I’m not a great player, especially when having to play solely against other humans. Therefore, I avoid modes like Pilot vs. Pilot, Capture The Flag, etc.<br /> Why am I not great? Let me list the ways…<br /> <ul> <li>I’m slow on the controller when trying to get a quick lock-on against opponents, which means I die a lot.</li> <li>I have poor aim, especially when someone’s aiming at me, which means I die a lot.</li> <li>I am awful when someone gets in melee range, which means I die a lot.</li> <li>I don’t shoot well while wall-running or mid-air, which means I miss kill opportunities.</li> </ul> My play style is to work from mid-range, both as a pilot and a titan. I’m not great at close up (see points above). As a pilot I’ll spend a fair amount of time on top of various obstacles. Most of the folks playing aren’t thinking in 3D, so it’s a good tactic for me. On the downside, I also tend to go Leeroy Jenkins and run into battles likely best avoided. This is part of why my Kill-Death-Ratio (KDR) against other players is around 0.7 after 3,000+ games. (Note: I’ve been above 1.5 for the last few months, occasionally as high as a ten-game average of 2.5; however, it takes a LONG time to raise that particular statistic up—and frankly I just don’t care about KDR. I know others care a <i><b>lot.</b></i> I don’t.)<br /> <h2 id="some-things-i-dislike"> Some Things I Dislike</h2> <b>Quality and Clarity.</b>.<br /> <b>Graphics Don’t Match Algorithms.</b>.<br /> <b>Regeneration Grind.</b> I badly miss the unique regeneration challenges from TF1. Those were well-thought out and fun. And a pain in the ass at times. With TF2 you’re in for nothing more than a long grind. More on leveling up and points later.<br /> <h2 id="some-things-i-love"> Some Things I Love</h2> <b>Movement.<.)<br /> <b>The Campaign.</b> One of the best campaigns in any game I’ve played. Ever. Loved the story, loved the chapters. I know others aren’t so enamored. That’s OK.<br /> <b>Regular Updates.</b> <i><b>love</b></i> seeing a company that takes this approach of constantly adding new value.<br /> <b>It’s. Just. Fun.</b> Even when I’m getting my face beat in by some of the pros I’m still having a fairly good time. Yes, I get frustrated, yes, I cuss. A lot. It’s still fun.<br /> <b>You Always Get A Titan.</b>.<br /> <h2 id="this-series"> This Series</h2> I’m not sure how long this series will last. At a minimum I’m going to cover the following topics, either as separate posts or parts of others.<br /> <ul> <li><a href="" target="_blank">Maximizing Points</a></li> <li><a href="" target="_blank">Game Modes</a></li> <li><a href="" target="_blank">Frontier Defense</a> </li> <li><a href="" target="_blank">Thoughts on Titans</a></li> <li><a href="" target="_blank">Thoughts on Weapons, Ordinance, Boosts, and Kits</a></li> <li><a href="" target="_blank">Tactics</a></li> <li><a href="" target="_blank">Movement and Shooting</a></li> <li><a href="" target="_blank">Some Closing Thoughts</a></li> </ul> This series is pretty late to the game. Titanfall 2 has been out for quite some time. Regardless, I’ve enjoyed outlining and drafting some of the content, so it’s as much for me as it is for you. Hopefully someone finds it useful. :)<br /> <ul> </ul> LeanPub PodcastThe folks at <a href="">LeanPub</a>, the online publishing service, were kind enough to have me on their Frontmatter podcast. Len Epp chatted me up for roughly an hour on my background, my book <a href="">The Leadership Journey</a>, and how I came to write it.<br /> You can find <a href="">the podcast here</a>, with a complete transcript if you’d rather read. Len’s a great interviewer, and I really enjoyed being on the, Ever Joke About Your Teams' Career Safety<p>Long)!”</p> <p>Without missing a beat the supervisor instantly replied “Of course she won’t. I don’t even joke about something like that landing on her APR. Ever.”</p> <p>The way he said it made it even more impactful: he didn’t get intense, he didn’t yell, he didn’t joke. He just said it emphatically and in a matter-of-fact tone.</p> <p.</p> <p>Those on your teams, those who report to you, those who have any form of accountability to you should know, without a doubt, that their performance reports will be based only on merit and fact, never spite or rumor.</p> <p>You don’t do performance reports? Fine. Don’t fixate on the mechanics. This is more about the meta concept: safety in one’s career progression.</p> <p>The other day on Facebook someone posted an article that ran something like “Seven Signs You’re About to Be Fired.” The poster tagged someone on their team and made a joking comment like “Yo, pay attention!”</p> <p>I got the joke, but it also made me recall the terrific lesson I learned all those years ago.</p> <p>Some things you just shouldn’t ever joke about. And your teams should know, I Didn't Automate That Test<p>No, you don’t need to automate tests for every behavior you build in your system. Sometimes you <strong>shouldn’t</strong> automate tests because you’re taking on an unreasonable amount of technical debt and overhead.</p> <p:</p> <pre><code>requestEnd: function (e) { var node = document.getElementById('flags'); while (node.firstChild) { node.removeChild(node.firstChild); } var type = e.type; $('#flags').append('<div responseType=\'' + type + '\'/>'); }, </code></pre> <p>It’s behavior. Moreover, other automated tests rely on this, so this failing would break other tests! Why did I decide to not write an automated test for it?</p> <p>Well this is literally the only custom JavaScript I have on my site at the moment. Think of the work I’d have to do simply to get a test in place for this:</p> <ul> <li>Figure out which JS testing toolset to use</li> <li>Learn that toolset</li> <li>Integrate that toolset into my current build chain</li> </ul> <p>That’s quite a bit of work and complexity. Step back and think about a few topics:</p> <p><strong>What’s the risk of breaking this behavior?</strong></p> <ul> <li>I rarely edit that page, so the likelyhood of breaking that behavior is low</li> <li>When I do edit the page, I even more rarely touch that particular part of the page’s code. Likelyhood of breakage is even lower.</li> </ul> <p><strong>What damage happens if I break that behavior?</strong></p> <ul> <li>Other tests relying on that element will fail</li> <li>Those failures could lead me astray because they’re failing for an unexpected reason – eg an Update test isn’t failing because the Update test is busted, it’s failing because the flag element isn’t appearing</li> <li>I’ll spend extra time troubleshooting the failure</li> </ul> <p><strong>How do I make sure </strong></p> <p>It’s a pretty easy discussion at this point: does it make sense to take on the overhead for writing test automation for this particular task? No. Hell no.</p> <p>It <em>may</em> make sense as I start to flesh out more behavior soon. But not now. A simple few manual tests and it’s good.</p> <p>Roll Leadership Journey is Complete and Live<p>After 2.5 years of hard work, blood, sweat, and a <em>lot of procrastination</em> I’m happy to announce my book <em><a href="">The Leadership Journey</a></em> is complete and read for purchase! <br>.</p> <p>Instead, it’s practical stories, tips, and exercises meant to get you looking in a mirror and figuring out where you want to go—and then proving some ideas on how you can head off that direction.</p> <p>This stuff is from my heart. It started based off <a href="">my Leadership 101 series</a>, but then grew out in its own direction.</p> <p>I owe lots of folks thanks, particularly readers who purchased the book two years ago expecting a quick finish. HAH! I hope they’re pleased with the final outcome.</p> <p <strong>useful.</strong> My hard work to convey content that’s straight from my experiences, and more importantly from my heart.</p> <p>The book is on sale at LeanPub, which is great for you. Don’t like the book? You can get ALL YOUR MONEY BACK up to 45 days after purchase. </p> <p>I’m pretty sure you’ll find it useful, though!< Journey Final Draft Complete!<p>Thank you so very, very much for those of you who’ve patiently been waiting for the completion of my book <em><a href="">The Leadership Journey</a></em>!</p> <p.</p> <p>I’m playing around with variations of the cover based on the great photo my brother created. <br> <img src="" alt="Book Cover" title="Slide2.png"> <img src="" alt="enter image description here" title="Slide4.png"></p> <p>I hope to have word on the Foreword author in a week or two, and hopefully the foreword completed within the next three weeks.</p> <p>For those of you who don’t know, the book’s available <strong>right now</strong> at <a href="">its page on LeanPub</a>. You can purchase it now, and you’ll get the updates when the Foreword and cover are in the can.</p> <p>Again, thank you to all for your patience. It’s been a labor of love, sweat, and yes, some significant procrastination.</p> <p>I hope you’ll find it worth the wait!<
http://feeds.feedburner.com/Frazzleddad
CC-MAIN-2022-40
en
refinedweb
public class Example : MonoBehaviour { private Transform myTransform; private Vector3 myPosition; private void Start() { SetInitialReferences(); } private void Update() { Debug.Log(myTransform.position.ToString()); //why does this update constantly? Debug.Log(myPosition.ToString()); //this doesent update as I expected not like the other one } void SetInitialReferences() { myTransform = transform; //why is it faster to use myTransform later on than transform? myPosition = transform.position; } myTransform = transform means that the your variable will point to the actual Transform Object because it is a reference type (as all classes). So it doesn't really copy it just points to the same Object in memory. As on if it is faster to cashe the reference rather picking it through the property transform if there is any difference it is trivial. On the other hand myPosition = transform.position which is of Vector3 type copies the value so afterwards there are 2 different Objects in memory, thus any changes on the transform.position won't be reflected on the myPosition var as it is Value Type. Search on google about reference types and value types there are alot of sites (including $$anonymous$$SDN official site) to find out more about them. Cheers. Sry for not seeing your answer earlier. Was the first time I used the forums and did not see that you commented as a reply ins$$anonymous$$d of an answer. Anyway thanks both of you! Answer by MUG806 · Feb 08, 2018 at 02:09 PM If I'm understanding the question, this comes down to the difference between objects and structs. When you store "transform.position" in myPosition, you actually store a copy of the vector coordinates in the variable, because Vector3 is a struct. When you assign the object "transform" to myTransform, you are actually storing a reference to a transform object, which is just the way C# does things with objects. This way when you access myTransform.position, you are getting the original transform object, and then finding its current position, whereas with myPosition, you are just accessing a copy of the position as it was when you assigned the variable. Can be tricky to get your head around at first, but C# just handles assigning objects differently to structs and basic data types like int and float. Thx for the answer! Is clear now :) One last thing... in the method "SetInitialReferences" I set a reference of the transfrom to myTransform. I have heard that it is fast that way if want to access the transform over the reference ins$$anonymous$$d of the actual transfrom. Do you know why that is? And if yes, why is it that way? $$anonymous$$ight be mistaken but I don't think that's the case. You may be getting mixed up with GetComponent() which you can use to access components attached to the game object. That method IS slow so the advice is to create a reference to the components you will be needing every frame just the once in initialisation. I think you are mistaken. You can get a performance improvement from cacheing it. See eg It's no longer as slow as a GetComponent call, but still quicker to cache it yourself. Not so much as to make a huge difference, but I$$anonymous$$O it does little harm to get into the habit, at least for Transforms that are being accessed every frame. Alright, TIL. I'd submit that as an answer cuz that is definition worth knowing. 97 People are following this question. Why using = isn´t fast enough for changing Vector3 values from the transform? 2 Answers Get list of updating transforms 0 Answers Double reference to waypoints in FSM 0 Answers Using deviceOrientation to calibrate Camera 0 Answers List of Transforms updating all items when adding variable. 1 Answer EnterpriseSocial Q&A
https://answers.unity.com/questions/1465872/2-simple-questions-the-questions-are-comments-in-t.html
CC-MAIN-2022-40
en
refinedweb