From: Douglas Gregor (gregod_at_[hidden])
Date: 2002-02-18 16:36:16
On Monday 18 February 2002 03:48 pm, you wrote:
> Consider a "tag expression" to be the expression that is used to rule out a
> given candidate type during matching.
> In my proposal, a tag expression was simply a tagname, where the equality
> test was implicitly used, but tagnames could be made to participate in
> complex expressions too, so (3) is not a reason to prefer traits instead of
> true tags.
Okay, we're converging then. I wasn't sure if you had considered the notion
of "tag expressions" as well.
> In order for this to be really useful, 'tag expressions' should be allowed
> to be used for specializations too.
> Following your syntax, this would look something like:
>
> template<class T> struct Some {} ;
>
> template<class T> struct Some< is_output_iterator_tag<T>::value > {} ;
>
> But this doesn't fit properly into current C++. Something else would be
> needed here.
I have more questions than answers for this. In your original message, you
had the following (I added the #1 and #2):
template<class input_iterator_tag T> struct Some {} ; // #1
template<> struct Some<output_iterator_tag> {} ; // #2
I'm assuming that #1 is used for any type T that has the tag
'input_iterator_tag', and #2 is used for any type T that has the tag
'output_iterator_tag'? If so, then why isn't #2 written as:
template<class output_iterator_tag T> struct Some {}; // #2?
I ask because #2 looks like a full specialization, but it is actually a class
template. Even if tags are in a different namespace from types (as they would
have to be), I believe such a construct would cause confusion.
For reference, I would write the equivalent to the above as:
template<typename T> struct Some; // undefined
template<typename T if (is_input_iterator_tag<T>::value)>
struct Some { }; // #1
template<typename T if (is_output_iterator_tag<T>::value)>
struct Some { }; // #2
Multiple primary class templates? Yes, but I think the idea of a 'primary
template' is misguided anyway once one considers restrictions on the binding
of arguments to parameters. Think function overloading: there is no notion of
a 'primary function' but there are parameter/argument binding restrictions.
In any case, it appears (to me) that the two approaches are roughly
equivalent in all but syntax. On the syntax front, I believe that minimal is
better: introducing additional keywords and namespaces into the language will
complicate it considerably, and is bound to run into a great deal of
resistance.
Doug
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
Source: https://lists.boost.org/Archives/boost/2002/02/25360.php
SQL Joins
Persistent touts itself as a database-agnostic interface. How, then, are you supposed to do things which are inherently backend-specific? This most often comes up in Yesod when you want to join two tables together. There are some pure-Haskell solutions that are completely backend-agnostic, but there are also more efficient methods at our disposal. In this chapter, we’ll introduce a common problem you might want to solve, and then build up more sophisticated solutions.
Database queries in Widgets
I’ll address this one right off the bat, since it catches many users by surprise. You might think that you can solve this problem in the Hamlet template itself, e.g.:
<ul>
    $forall Entity blogid blog <- blogs
        $with author <- runDB $ get404 $ blogAuthor blog
            <li>
                <a href=@{BlogR blogid}>#{blogTitle blog} by #{authorName author}
However, this isn’t allowed, because Hamlet will not allow you to run database actions inside of it. One of the goals of Shakespearean templates is to help you keep your pure and impure code separated, with the idea being that all impure code needs to stay in Haskell.
But we can actually tweak the above code to work in Yesod. The idea is to separate out the code for each blog entry into a Widget function, and then perform the database action in the Haskell portion of the function:
getHomeR :: Handler Html
getHomeR = do
    blogs <- runDB $ selectList [] []

    defaultLayout $ do
        setTitle "Blog posts"
        [whamlet|
            <ul>
                $forall blogEntity <- blogs
                    ^{showBlogLink blogEntity}
        |]

showBlogLink :: Entity Blog -> Widget
showBlogLink (Entity blogid blog) = do
    author <- handlerToWidget $ runDB $ get404 $ blogAuthor blog
    [whamlet|
        <li>
            <a href=@{BlogR blogid}>#{blogTitle blog} by #{authorName author}
    |]
We need to use handlerToWidget to turn our Handler action into a Widget action, but otherwise the code is straightforward. And furthermore, we now get exactly the output we wanted:
Authors appear as names
Joins
If we have the exact result we’re looking for, why isn’t this chapter over? The problem is that this technique is highly inefficient. We’re performing one database query to load up all of the blog posts, then a separate query for each blog post to get the author names. This is far less efficient than simply using a SQL join. The question is: how do we do a join in Persistent? We’ll start off by writing some raw SQL:
getHomeR :: Handler Html
getHomeR = do
    blogs <- runDB $ rawSql
        "SELECT ??, ?? \
        \FROM blog INNER JOIN author \
        \ON blog.author=author.id"
        []
    defaultLayout $ do
        setTitle "Blog posts"
        [whamlet|
            <ul>
                $forall (Entity blogid blog, Entity _ author) <- blogs
                    <li>
                        <a href=@{BlogR blogid}>#{blogTitle blog} by #{authorName author}
        |]
We pass the rawSql function two parameters: a SQL query, and a list of additional parameters to replace placeholders in the query. That list is empty, since we’re not using any placeholders. However, note that we’re using ?? in our SELECT statement. This is a form of type inspection: rawSql will detect the type of entities being demanded, and automatically fill in the fields that are necessary to make the query.
rawSql is certainly powerful, but it’s also unsafe. There’s no syntax
checking on your SQL query string, so you can get runtime errors. Also, it’s
easy to end up querying for the wrong type and end up with very confusing
runtime error messages.
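The "one query per row" inefficiency described above is not specific to Yesod or Haskell. As a rough, language-neutral illustration (plain Python with sqlite3 and a made-up schema, not Persistent), compare issuing a query per row against a single join; both produce the same listing, but the first does N+1 round trips:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE blog (id INTEGER PRIMARY KEY, author INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO blog VALUES (1, 1, 'First'), (2, 2, 'Second'), (3, 1, 'Third');
""")

# N+1 approach: one query for the posts, then one extra query per post.
posts = conn.execute("SELECT id, author, title FROM blog ORDER BY id").fetchall()
listing_n_plus_1 = []
for _, author_id, title in posts:
    (name,) = conn.execute(
        "SELECT name FROM author WHERE id = ?", (author_id,)
    ).fetchone()
    listing_n_plus_1.append((title, name))

# Join approach: a single round trip does the same work in the database.
listing_join = conn.execute(
    "SELECT blog.title, author.name FROM blog "
    "INNER JOIN author ON blog.author = author.id ORDER BY blog.id"
).fetchall()

print(listing_n_plus_1)
print(listing_join)
```

The results are identical; only the number of database round trips differs, which is exactly what the join buys you.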
Esqueleto
Persistent has a companion library, Esqueleto, which provides an expressive, type-safe DSL for writing SQL queries. It takes advantage of the Persistent types to ensure it generates valid SQL queries and produces the results requested by the program. In order to use Esqueleto, we’re going to add some imports:
import qualified Database.Esqueleto as E
import           Database.Esqueleto ((^.))
And then write our query using Esqueleto:
getHomeR :: Handler Html
getHomeR = do
    blogs <- runDB
           $ E.select
           $ E.from $ \(blog `E.InnerJoin` author) -> do
                E.on $ blog ^. BlogAuthor E.==. author ^. AuthorId
                return
                    ( blog   ^. BlogId
                    , blog   ^. BlogTitle
                    , author ^. AuthorName
                    )
    defaultLayout $ do
        setTitle "Blog posts"
        [whamlet|
            <ul>
                $forall (E.Value blogid, E.Value title, E.Value name) <- blogs
                    <li>
                        <a href=@{BlogR blogid}>#{title} by #{name}
        |]
Notice how similar the query looks to the SQL we wrote previously. One thing of particular interest is the ^. operator, which is a projection. blog ^. BlogAuthor, for example, means "take the author column of the blog table." And thanks to the type safety of Esqueleto, you could never accidentally project AuthorName from blog: the type system will stop you!
In addition to safety, there’s also a performance advantage to Esqueleto. Notice the returned tuple; it explicitly lists the three columns that we need to generate our listing. This can provide a huge performance boost: unlike all other examples we’ve had, this one does not require transferring the (potentially quite large) content column of the blog post to generate the listing.
Esqueleto is really the gold standard in writing SQL queries in Persistent. The rule of thumb should be: if you’re doing something that fits naturally into Persistent’s query syntax, use Persistent, as it’s database agnostic and a bit easier to use. But if you’re doing something that would be more efficient with a SQL-specific feature, you should strongly consider Esqueleto.
Streaming
There’s still a problem with our Esqueleto approach. If there are thousands of blog posts, then the workflow will be:
Read thousands of blog posts into memory on the server.
Render out the entire HTML page.
Send the HTML page to the client.
This has two downsides: it uses a lot of memory, and it gives high latency for the user. If this is a bad approach, why does Yesod gear you towards it out of the box, instead of tending towards a streaming approach? Two reasons:
Correctness: imagine if there were an error reading the 243rd record from the database. By doing a non-streaming response, Yesod can catch the exception and send a meaningful 500 error response. If we were already streaming, the streaming body would simply stop in the middle of a misleading 200 OK response.
Ease of use: it’s usually easier to work with non-streaming bodies.
The standard recommendation I’d give someone who wants to generate listings that may be large is to use pagination. This allows you to do less work on the server, write simple code, get the correctness guarantees Yesod provides out of the box, and reduce user latency. However, there are times when you’ll really want to do a streaming response, so let’s cover that here.
Switching Esqueleto to a streaming response is easy: replace select with selectSource. The Esqueleto query itself remains unchanged. Then we’ll use the respondSourceDB function to generate a streaming database response, and manually construct our HTML to wrap up the listing.
Notice the usage of sendChunkText, which sends some raw Text values over the network. We then take each of our blog tuples and use conduit’s map function to create a streaming value. We use hamlet to get templating, and then pass in our render function to convert the type-safe URLs into their textual versions. Finally, toFlushBuilder converts our Html value into a Flush Builder value, as needed by Yesod’s streaming framework.
Unfortunately, we’re no longer able to take advantage of Hamlet to do our
overall page layout, since we need to explicit generate start and end tags
separately. This introduces another point for possible bugs, if we accidentally
create unbalanced tags. We also lose the ability to use
defaultLayout, for
exactly the same reason.
Streaming HTML responses are a powerful tool, and are sometimes necessary. But generally speaking, I’d recommend sticking to safer options.
Conclusion
This chapter covered a number of ways of doing a SQL join:
Avoid the join entirely, and manually grab the associated data in Haskell. This is also known as an application level join.
Write the SQL explicitly with rawSql. While somewhat convenient, this loses a lot of Persistent’s type safety.
Use Esqueleto’s DSL functionality to create a type-safe SQL query.
And if you need it, you can even generate a streaming response from Esqueleto.
For completeness, here’s the entire body of the final, streaming example:
{-# LANGUAGE EmptyDataDecls             #-}
{-# LANGUAGE FlexibleContexts           #-}
{-# LANGUAGE GADTs                      #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE MultiParamTypeClasses      #-}
{-# LANGUAGE OverloadedStrings          #-}
{-# LANGUAGE QuasiQuotes                #-}
{-# LANGUAGE TemplateHaskell            #-}
{-# LANGUAGE TypeFamilies               #-}
{-# LANGUAGE ViewPatterns               #-}
import           Control.Monad.Logger
import           Data.Text               (Text)
import qualified Database.Esqueleto      as E
import           Database.Esqueleto      ((^.))
import           Database.Persist.Sqlite
import           Yesod
import qualified Data.Conduit.List       as CL
import           Data.Conduit            (($=))

share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|
Author
    name Text
Blog
    author AuthorId
    title Text
    content Html
|]

data App = App
    { persistConfig :: SqliteConf
    , connPool      :: ConnectionPool
    }
instance Yesod App
instance YesodPersist App where
    type YesodPersistBackend App = SqlBackend
    runDB = defaultRunDB persistConfig connPool
instance YesodPersistRunner App where
    getDBRunner = defaultGetDBRunner connPool

mkYesod "App" [parseRoutes|
/ HomeR GET
/blog/#BlogId BlogR GET
|]

getBlogR :: BlogId -> Handler Html
getBlogR _ = error "Implementation left as exercise to reader"

main :: IO ()
main = do
    -- Use an in-memory database with 1 connection. Terrible for production,
    -- but useful for testing.
    let conf = SqliteConf ":memory:" 1
    pool <- createPoolConfig conf
    flip runSqlPersistMPool pool $ do
        runMigration migrateAll

        -- Fill in some testing data
        alice <- insert $ Author "Alice"
        bob <- insert $ Author "Bob"

        insert_ $ Blog alice "Alice's first post" "Hello World!"
        insert_ $ Blog bob "Bob's first post" "Hello World!!!"
        insert_ $ Blog alice "Alice's second post" "Goodbye World!"

    warp 3000 App
        { persistConfig = conf
        , connPool = pool
        }
Source: https://www.yesodweb.com/book/sql-joins
initialize the "ArrayList"
ArrayList<List> selectedItems;

protected void onCreate(Bundle savedInstanceState)
{
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_home_page);
    selectedItems = new ArrayList<List>();
}
Please try using the following sample code; change the XPath/data accordingly:
List<WebElement> resultList = _driver.findElements(By.xpath(
    "//img[contains(@src,'tab-close.png')]/preceding-sibling::span[@class='tab-name ng-binding']"));
for (WebElement resultItem : resultList) {
    String tabname = resultItem.getText();
    if (resultItem.getText().equalsIgnoreCase("Dashboard")) {
        GlobalFunctions.clickOnButtonCustomised(false, _driver,
            By.xpath("//span[@class='tab-name ng-binding' and contains(text(),'" + tabname
                + "')]/following-sibling::img[contains(@src,'tab-close.png')]"),
            "");
    }
}
You are in fact very close to making it work. The problem was that you did
not define the donationAmount observable property in the main view model,
which resulted in KnockoutJS failing when binding this unavailable property
to the input tag.
Here is the part of the code that had to be changed so that it works now:
function DonationsViewModel() {
var self = this;
self.donationAmount = ko.observable();
// ....
}
and here's some little change to your HTML:
<div>
<select data-bind="options: availableDonations, optionsCaption:
'Please select...', value: selectedDonation, optionsText:
'label'"></select>
<input data-bind="value: donationAmount" />
<button type="button">Add Donation</button>
</div>
The simple fix is to wrap your append in a try catch statement.
for item in stock_data:
try:
closing_prices.append(stock_data[count])
except IndexError:
break
print (closing_prices)
count = count + 6
The reason you are getting the error is that once you are near the end of the list, adding 6 to the index puts you past the list's maximum index.
Another possible solution is to use a while loop.
closing_prices = []
count = 10
while count < len(stock_data):
closing_prices.append(stock_data[count])
count += 6
print closing_prices
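As a side note, when the goal is simply "every sixth element starting at index 10", Python's extended slice syntax expresses the same while loop in a single expression. A sketch with placeholder data (substitute your actual stock_data):

```python
stock_data = list(range(100))  # placeholder data; use your real list here

# Same as: count = 10; while count < len(stock_data): take stock_data[count]; count += 6
closing_prices = stock_data[10::6]
print(closing_prices)
```

Slicing never raises IndexError, so the try/except and the length check both become unnecessary.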
You are manipulating the same list over and over again, not a copy.
Create a new list in the loop instead:
for elemento in lista_base:
listado = [elemento, elemento + 1, elemento + 2]
tabla.append(listado)
or create a copy of the list:
for elemento in lista_base:
listado[0] = elemento
listado[1] = elemento+1
listado[2] = elemento+2
tabla.append(listado[:])
where [:] returns a full slice of all the elements. You could also use
list(listado) or import the copy module and use copy.copy(listado) to
create a copy of the existing list.
Appending a list to another list only adds a reference, and your code thus
created many references to the same list that you kept changing in the
loop.
You could have seen what was happening had you printed tabla on every loop.
Pri
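The reference behaviour described in this answer is easy to see in a minimal, self-contained sketch (hypothetical two-element data):

```python
# Buggy pattern: one list is mutated and appended repeatedly, so the outer
# list ends up holding two references to the same object.
listado = [0, 0, 0]
tabla = []
for elemento in (1, 4):
    listado[0], listado[1], listado[2] = elemento, elemento + 1, elemento + 2
    tabla.append(listado)          # appends a reference, not a snapshot

# Fixed pattern: append a copy, so each stored row is independent.
tabla_fixed = []
for elemento in (1, 4):
    listado[0], listado[1], listado[2] = elemento, elemento + 1, elemento + 2
    tabla_fixed.append(listado[:]) # [:] snapshots the current contents

print(tabla)        # both rows reflect the final mutation
print(tabla_fixed)  # rows kept their values
```

Printing tabla inside the loop, as suggested above, would show every previously appended "row" changing on each iteration.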
Completely untested, made fresh while drinking morning coffee. Good luck!
var list = siteWeb.Lists.TryGetList(listName);
if (list == null)
{
return;
}
var items = list.GetItems(query);
var countOfItems = items.Count;
if (countOfItems == 0)
{
return;
}
var listItemFieldObjects = items.Cast<SPListItem>().Select(item =>
item[1]);
foreach (var item in listItemFieldObjects)
{
var listViewItem = new ListViewItem("Object's value");
listView.BeginUpdate();
listViewItem.SubItems.Add(item.ToString());
listView.Items.Add(listViewItem);
listView.EndUpdate();
}
You're not adding several different items to the list, you're adding the
same exact item several times and modifying it each time.
Classes in C# are reference types; that means each variable doesn't hold
the data for the object itself, it just hold onto a reference to where that
object is. Assigning an object to another variable doesn't create a new
object, it just copies the reference to that same object.
The issue is that you should be creating a new object (i.e. using the new
keyword) each time you go to add a new item to the list.
From the looks of your code it would seem that
StaticLists.GetStockMultibuyForBarcode(sBarcode) isn't returning a new item
each time; it's just returning the same item, or at the very least one of
the existing items. You should be creating a new item, co
EDIT: I am going to assume this question is mis-tagged as WPF and is really
about winforms in which case my answer doesn't make sense
profselect.ItemsSource = datalist;
(not DataSource)
and indeed it probably should be outside the loop
You can use map with operator.add:
>>> from operator import add
>>> map(add,*dict1.values())
[3, 5, 7]
>>> map(add,*dict2.values())
[4, 6, 8, 10]
or zip with a list comprehension if you don't want to import anything:
>>> [sum(x) for x in zip(*dict1.values())]
[3, 5, 7]
>>> [sum(x) for x in zip(*dict2.values())]
[4, 6, 8, 10]
Update:
def func(dic, *keys):
return [sum(x) for x in zip(*(dic[k] for k in keys))]
>>> dict1 = {'a': [0,1,2], 'b': [3,4,5], 'c':[6,7,8]}
>>> func(dict1,'a')
[0, 1, 2]
>>> func(dict1,'a','b')
[3, 5, 7]
>>> func(dict1,'b','c')
[9, 11, 13]
>>> func(dict1,'b','c','a')
[9, 12, 15]
I have the same placement for each list creation
You do NOT have the same placement of creation, the AStore is created
outside of the for loop, and the APages one is.
One option is to create an SQL server trigger. This trigger would fire
whenever a new sales order is created in your source database. You could
make it a CLR trigger, and in the function make use of the SharePoint
client interface.
After you provided a clearer description of the problem: I believe that
you're looking for something like the list event handler. This will run on
the events you care about, and you can pull from the database at the
appropriate time.
Essentially, you need to make a Visual Studio SharePoint project (from my
past experience this method requires you to use VS right on the SharePoint
server, or else you have to copy a lot of DLLs manually from the server);
and make an event receiver.
Use sum() to add up all values of a sequence:
>>> sum([2,5,9,12,50])
78
But if you wanted to keep a running total, just loop through your list:
total = 2
for elem in newlist:
    total += elem
print total
That also can be used to build a list:
total = 2
cumsum = []
for elem in newlist:
    total += elem
    cumsum.append(total)
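For what it's worth, on Python 3.8+ the standard library can produce the same running-total list with itertools.accumulate and its initial argument. A sketch with hypothetical data:

```python
from itertools import accumulate

newlist = [5, 9, 12, 50]  # hypothetical data
total_start = 2

# accumulate yields every intermediate sum; the first yielded value is the
# seed itself, so drop it to match the loop-based version above.
cumsum = list(accumulate(newlist, initial=total_start))[1:]
print(cumsum)
```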
You can use numpy to get the column sum
import sys
import numpy

ROWS = 3
COLS = 4

def main():
    matrixData = []

    # Get input and process it
    for i in range(0, ROWS):
        thisRow = raw_input('Enter a 3-by-4 matrix row for row ' + str(i) + ' : ').split()
        evalRow = []
        [evalRow.append(eval(element)) for element in thisRow]
        matrixData.append(evalRow)

    numpyMatrix = numpy.array(matrixData)
    colSums = []
    [colSums.append(numpy.sum(numpyMatrix[:, index])) for index in range(0, COLS)]

    # Print the matrix
    for row in matrixData:
        print row
    print colSums
    return

if __name__ == '__main__':
    main()
    sys.exit(0)
EDIT:
I got a bit carried away with this (it's a slow day at work) so I rewrote
the function to use (IMHO) clearer variable names, fewer redundant
variables, and added basic error handling. The example below supports
insertion whereas the previous example assumed simple appending to the end
of a list which was the result of not reading the question properly (see
edits if you're curious).
void listAdd(Node* currentNode, Node toAdd)
{
    Node *newNode = malloc(sizeof(Node));
    if (!newNode) {
        // ERROR HANDLING
    }
    *newNode = toAdd;
    newNode->next = NULL;
    while (currentNode)
    {
        if (!currentNode->next)
            // We've got to the end of the list without finding a place to insert the node.
            // NULL pointer always evaluates to false in C regardless
If I run your script, I get the following output
[2, 4, 5, 7, 6, 7]
[2, 3, 4, 4, 4]
There is only one 4 deleted from B because you removed an element while
running over the list.
Maybe you should remove the 4's from B after looping over B.
A = [2,3,4,4]
B = [2,4,4,5,7,6,7]
for i in B:
    if i == 4:
        A.append(4)
B = filter(lambda a: a != 4, B)
print B
print A
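To see exactly why an element gets skipped when you mutate a list you are iterating over, here is a minimal demonstration of the pitfall and the rebuild-the-list fix (Python 3 syntax):

```python
# Buggy: remove() shifts later elements left, so the iterator's next step
# jumps over the element that slid into the vacated slot.
B_buggy = [2, 4, 4, 5, 7, 6, 7]
for i in B_buggy:
    if i == 4:
        B_buggy.remove(4)
print(B_buggy)   # one 4 survives

# Safe: build a new list instead of mutating the one being traversed.
B_safe = [x for x in [2, 4, 4, 5, 7, 6, 7] if x != 4]
print(B_safe)
```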
You can't.
You could write a function that reads an int from cin and returns it, and
then write
list.addNode(readInt());
But the function that does the read still needs a local variable to read
into.
Try with
$description.each(function() {
$("#list").append("<li>" + $(this).text() + "</li>");
});
DEMO
First you need to spit the string appropriately to get a list of string
arrays. Something like this:
var patient_list = new List<string[]>(strMLMPatientData.Split(';').Select(x => x.Split(',')));
or even better:
var patient_list = strMLMPatientData.Split(';').Select(x => x.Split(',')).ToList();
You need Linq for that, but you get the idea.
Then you need to add columns to your data table. You can't add rows to it when there are no columns.
Try something like this in your function
//add columns appropriately
DataTable table = new DataTable();
table.Columns.Add("Name", typeof(string));
table.Columns.Add("Order", typeof(string));
table.Columns.Add("Date", typeof(string));
foreach (var row in patient_list)
table.Rows.Add(row);
return table;
See an example here. As it
Something jumped out at me right away, you're testing if the argument
equals -q by
if( strcmp( argv[1], "-q" ) != 0) //This is an example of what I am trying
to do.
{
quiet = true;
infile.open( argv[2] );
}
which is incorrect. strcmp returns the lexical difference between the two
strings compared :
so I believe you want
if( strcmp( argv[1], "-q" ) == 0) //This is an example of what I am trying
to do.
{
quiet = true;
infile.open( argv[2] );
}
Like I said, I haven't tested anything, it just jumped out at me.
edit
how I would parse in the sourcefile, destfile, and -q option
std::string sourceFile;
std::string destFile;
if ( argc == 3 )
{
sourceFile = std::string( argv[1] );
destFile = std::string( arg
In the inner loop, you need to create a sub <ul> tag.
foreach (TreeNode childNode in node.ChildNodes) {
    HtmlGenericControl child_li = new HtmlGenericControl("li");
    navMenu.Controls.Add(child_li);
becomes
if (node.ChildNodes.Count > 0) {
    HtmlGenericControl child_ul = new HtmlGenericControl("ul");
    li.Controls.Add(child_ul);
    foreach (TreeNode childNode in node.ChildNodes) {
        HtmlGenericControl child_li = new HtmlGenericControl("li");
        child_ul.Controls.Add(child_li);
etc...
It seems fairly straightforward: as the error message states, you're trying to + two different types of objects. If you just cast the rows as lists it should work. From my own ad-hoc testing:
>>>cur.execute('<removed>') #one of my own tables
>>>tmp = cur.fetchall()
>>>type(tmp[0]) #this is KEY! You need to change the type
<type 'pyodbc.Row'>
>>>tt = [1,2,3]
>>>tmp[0] + tt #gives the same error you have
Traceback (most recent call last):
File "<pyshell#13>", line 1, in <module>
tmp[0] + tt
TypeError: unsupported operand type(s) for +: 'pyodbc.Row' and 'list'
>>>list(tmp[0]) + tt #returns a list as you wanted
[14520496, ..., 1, 2, 3]
Arrays.asList returns a fixed-size List. Adding or removing elements from
this list is not allowed, it is possible however to modify the elements
inside this list using the set method.
public static <T> List<T> asList(T... a) {
return new ArrayList<T>(a); // this is not java.util.ArrayList
}
private static class ArrayList<E> extends AbstractList<E>
implements RandomAccess, java.io.Serializable {
ArrayList(E[] array) {
if (array==null)
throw new NullPointerException();
a = array;
}
public E set(int index, E element) {
E oldValue = a[index];
a[index] = element;
return oldValue;
}
// add() and remove() methods are inherited from AbstractList
}
public abstract class A
You want to find the sum of all the lists, not from one specifically (as
you have tried).
Use a list comprehension instead of a for-loop:
total2 = sum(int(i[2]) for i in dL if '2011' in i)
To get the average:
average = total2 / float(len([int(i[2]) for i in dL if '2011' in i]))  # In Python 3, the float() is not needed
A list comprehension is a quick way to make a list. Take for example this:
result = []
for i in range(1, 4):
result.append(i**2)
Result will contain:
[1, 4, 9]
However, this can be shortened to a list comprehension:
[i**2 for i in range(1,4)]
Which returns the same thing.
The reason for when I call sum() and I don't put in brackets around the
comprehension is because I don't need to. Python interprets this as a
generator expression. You can read more abou
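To make the "no brackets needed" point concrete, both forms below give the same sum, but the generator expression never materializes the intermediate list:

```python
# List comprehension: builds [1, 4, 9] in memory first, then sums it.
total_list = sum([i**2 for i in range(1, 4)])

# Generator expression: inside a call the extra brackets can be dropped;
# values are produced lazily, one at a time.
total_gen = sum(i**2 for i in range(1, 4))

print(total_list, total_gen)
```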
I'm an idiot.
I was declaring the list inside the method that calls
GenerateEmployeeApplication() - I'm guessing once I step out of that the
list is kaput.
I moved the declaration outside any methods and it retains the information.
sorry for wasting everyone's time!
Your question is not very clear to me. The following is the solution I put together from your question.
<.Alert;
private var lastId:int = 5;
private function addNode():void
{
var suffix:int = int(Math.random()*100000);
lastId++;
var node:String = '<person id="'+lastId+'" >' +
'<firstName>FirstName'+suffix+'</firstName>' +
'<
Why do you need to do all this? Why isn't this sufficient?
// This has your data in it.
List<Gladiators> gladiators = new ArrayList<Gladiators>();
// Obviously some attributes, including a unique key or name.
// MUST override equals and hashcode properly
Gladiator g = new Gladiator();
if (gladiators.contains(g)) {
// do something here.
}
NullPointerException is one of the easiest problems to fix. Run your code
in an IDE with debugging turned on and put a breakpoint where the stack
trace says the exception occurred. You'll figure out quickly why something
you assumed should not be null has violated your assumptions.
First: Fetch existing Packages from your database:
public List<Package> GetPackages()
{
    using (var connection = my_DB.GetConnection())
    {
        try
        {
            connection.Open();
            string CommandText = "SELECT ID, Name FROM TABLENAME WHERE UPPER(Import_File_Source) LIKE '%abc%' AND STATUS = 1";
            SqlCommand cmd = new System.Data.SqlClient.SqlCommand(CommandText, connection);
            var packages = new List<Package>();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    packages.Add(new Package { ID = reader.GetString(0), Name = reader.GetString(1) });
                }
You need to initialize your list in the constructor:
public User (string username) : this()
{
_name = username;
}
public User()
{
this.ControlNumber = new List<int>();
}
Otherwise, ControlNumber will have its default value of null.
The @Html.DropDownList() is a little bit special. You can either pass it a
parameter value implicitly or explicitly.
In the tutorial, they are doing it implicitly by calling the helper with
the name of the model property, like so:
<div class="editor-field">
@Html.DropDownList("DepartmentID", String.Empty)
@Html.ValidationMessageFor(model => model.DepartmentID)
</div>
By doing it implicitly, the HTML-helper will look in the ViewBag for an
object with the same name. And, as you can see in the
PopulateDepartmentsDropDownList() method they are doing exactly that (last
line):
private void PopulateDepartmentsDropDownList(object selectedDepartment =
null)
{
var departmentsQuery = from d in db.Departments
orderby d.Name
Answer: The properties are set to private, and they should've been public.
Once I switched that around, all was well.
Credit goes to drch on the C# chat for this amazing answer.
You can't add element to a list created using Arrays.asList(). As the
documentation explains:
Returns a fixed-size list backed by the specified array.
This is not correct syntax:
() => println("x") :: l2
The correct one is:
(() => println("x")) :: l2
And the reason l2 = foo :: l2 does not compile is that the type of foo is not compatible with l2. To understand it deeply, try the following:
foo.toString
However, the following will compile:
var fn = {() => println("y")}
l2 = fn :: l2
or
foo _ :: l2
this -> //trying to add linked list here
should be:
this ->inventory(); //trying to add linked list here
A few things of note:
you don't need this-> in front of members.
your code will be better (smaller and more efficient) if you use
initializer lists
Example:
player::player():
name("default"),
lvl(99),
// and so on
inventory()
{
}
The members of your player class should (probably) be private
Since you need to link the old and new id's, you will need a different
approach than the one used here. When you generate the new question
records, store the id's in a structure, rather than an array, so you can
maintain a mapping of old => new values.
<!--- initialize mapping of old => new ids --->
<cfset idMapping = {}>
<cfloop ...>
    <!--- use the "result" attribute to capture the new id --->
    <cfquery result="addRecord" ....>
        INSERT INTO YourTable (...) VALUES (...);
    </cfquery>
    <!--- save the new id in the mapping --->
    <cfset idMapping[ oldQuestionID ] = addRecord.GENERATED_KEY>
</cfloop>
When you insert the new answers, use the old id to do a look up and grab
the new ques
The div needs to be shorter than the content of the div. Since you haven't specified a height, it will be as tall as it needs to be to hold all the content. Add a height: some-length.
Using array_shift to break up the array worked for me, but it does not work if $cats has multiple classes.
<?php $classes = array();
$terms = get_the_terms($post->ID, 'product_cat');
foreach ($terms as $term) {
$cats[] = $term->slug;
}
$classes[] = implode(" ", $cats);
<li <?php post_class( $classes); ?>>
This is how I solved it. I know this is not the proper way, but it worked for me.
I am adding a table with variable width in the first cell of the parent table, i.e. pobjTable here:
PdfPTable DummyTable = new PdfPTable(2);
//Here the floatSpace value changes according to the lftBulletIndent values
float[] headerwidths = { 2f + floatSpace, 98f - floatSpace};
DummyTable.SetWidths(headerwidths);
Pcell = new PdfPCell();
Pcell.Border = Rectangle.NO_BORDER;
DummyTable.AddCell(Pcell);
Pcell = new PdfPCell();
Pcell.AddElement(list);
Pcell.Border = Rectangle.NO_BORDER;
DummyTable.AddCell(Pcell);
pobjTable.AddCell(DummyTable);//Inserting a new table here
pobjTable.AddCell("");
Source: http://www.w3hello.com/questions/-Adding-to-a-list-box-in-ASP-Net-
LINQ (Language Integrated Query)
LINQ stands for Language Integrated Query, and it was introduced in .NET 3.5 and Visual Studio 2008. The beauty of LINQ is that it gives .NET languages (like C#, VB.NET, etc.) the ability to generate queries to retrieve data from a data source. For example, a program may get information from student records or access employee records. In the past, such data was stored in a database separate from the application, and you had to learn a different query language for each kind of data, such as SQL, an XML query language, etc. You also could not create such a query using C# or any other .NET language.
To overcome these problems, Microsoft developed LINQ. It adds more power to C# and the other .NET languages by letting them generate a query for any LINQ-compatible data source. Best of all, the syntax used to create a query is the same no matter which type of data source is used: the syntax for querying data in a relational database is the same as that used to query data stored in an array, with no need for SQL or any other non-.NET mechanism. You can also use LINQ with SQL, with XML files, with ADO.NET, with web services, and with any other database.
In C#, LINQ is present in the System.Linq namespace, which provides the classes and methods that support LINQ queries. In this namespace:
- The Enumerable class holds the standard query operators that operate on objects implementing IEnumerable<T>.
- The Queryable class holds the standard query operators that operate on objects implementing IQueryable<T>.
Architecture of LINQ
LINQ has a 3-layered architecture: the topmost layer contains the language extensions, and the bottom layer contains the data sources, which are generally objects implementing the IEnumerable<T> or IQueryable<T> generic interfaces. The architecture of LINQ is shown in the image below:
Why we use LINQ?
Now let's look at why LINQ was created and why we use it. The following points explain:
- The main purpose behind creating LINQ: before C# 3.0 we used a for loop, a foreach loop, or delegates to traverse a collection to find a specific object, and the disadvantage of these methods is that you need to write a large amount of code, which is time-consuming and makes your program less readable. LINQ was introduced to overcome these problems; it performs the same operation in a few lines, makes your code more readable, and lets you reuse the same code in other programs.
- It also provides full type checking at compile time, which helps us detect errors early so that we can easily fix them.
- LINQ is a simpler, well-ordered, higher-level language than SQL.
- You can also use LINQ with C# arrays and collections. It gives you a new direction to solve old problems in an effective manner.
- With the help of LINQ you can easily work with any type of data source, like XML, SQL, Entities, objects, etc. A single query syntax works with any type of data source; there is no need to learn different languages.
- LINQ supports query expressions, implicitly typed variables, object and collection initializers, anonymous types, extension methods, and lambda expressions.
Advantages of LINQ
- Users do not need to learn new query languages for different types of data sources or data formats.
- It increases the readability of the code.
- Queries can be reused.
- It gives type checking of objects at compile time.
- It provides IntelliSense for generic collections.
- It can be used with arrays or collections.
- LINQ supports filtering, sorting, ordering, and grouping.
- It makes debugging easy because it is integrated with the C# language.
- It provides easy transformation: you can easily convert one data type into another, such as transforming SQL data into XML data.
https://www.geeksforgeeks.org/linq-language-integrated-query/
Components of List Comprehensions:
1. An input sequence - from which the new sequence is built.
2. A variable representing members of the input sequence.
3. A conditional expression [optional].
4. An output expression producing elements of the output list - from all elements of the input sequence that satisfy the conditional expression, if present.
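Putting the four components together, the general shape is [output_expression for variable in input_sequence if condition]. A minimal sketch (the names are just for illustration):

```python
input_sequence = range(10)        # 1. input sequence

squares_of_odds = [
    x * x                         # 4. output expression
    for x in input_sequence       # 2. variable over the input sequence
    if x % 2 == 1                 # 3. optional conditional expression
]

print(squares_of_odds)  # [1, 9, 25, 49, 81]
```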
Examples:-
1. Use a list comprehension to print even numbers from 0 to n (n is some arbitrary value).

>>> values = [x for x in range(0, n+1) if x % 2 == 0]
>>> values
[0, 2, 4]

Here range(0, n+1) is the input sequence, x is the variable representing members of the input sequence, if x % 2 == 0 is the conditional expression, and the x at the beginning generates the output sequence.
2. Using a list comprehension, print the Fibonacci sequence in comma-separated form for a given input n.

def f(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return f(n-1) + f(n-2)

n = int(raw_input())
values = [str(f(x)) for x in range(0, n+1)]
print ",".join(values)

Input: 4
Output: 0,1,1,2,3

Here str(f(x)) (converting int to string) generates the output sequence as a list.
3. Using a list comprehension, write a program to print the list after removing the 0th, 2nd, 4th, and 6th elements of [12,24,35,70,88,120,155].

li = [12,24,35,70,88,120,155]
li = [x for (i,x) in enumerate(li) if i % 2 != 0]
print li

O/P:- [24, 70, 120]
4. Using a list comprehension, write a program to print the numbers divisible by both 5 and 7 in [12,24,35,70,88,120,155].

>>> li = [12,24,35,70,88,120,155]
>>> li = [x for x in li if x % 5 == 0 and x % 7 == 0]
>>> li
[35, 70]
Note:- Set and dict comprehensions are supported too (from Python 2.7 and Python 3.0 onwards).
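For completeness, here is what set and dict comprehensions look like (shown in Python 3 syntax; the data is made up for illustration):

```python
values = [12, 24, 35, 70, 88, 120, 155]

# Set comprehension: the set of distinct last digits
last_digits = {x % 10 for x in values}

# Dict comprehension: map each value divisible by 5 to its square
squares = {x: x * x for x in values if x % 5 == 0}

print(last_digits)  # {0, 2, 4, 5, 8}
print(squares)      # {35: 1225, 70: 4900, 120: 14400, 155: 24025}
```

The components are exactly the same as for list comprehensions; only the surrounding braces (and, for dicts, the key: value output expression) change.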
http://www.devinline.com/2016/05/python-list-comprehensions.html
A grid is a data container that holds a value at every point in a three dimensional lattice. The simplest type of value to hold is a floating point value. In OEChem, these are called OEScalarGrids. OEScalarGrids are inherently cubic.
The following fundamental parameters define a grid: the number of points along each axis (the dimensions), the midpoint of the grid in space, and the spacing between adjacent points.
Listing 1 demonstrates how to construct a grid with 64 points in each direction, a midpoint at the origin, and a spacing of 0.5 between each point in the grid.
Listing 1: Constructing a scalar grid
package openeye.docexamples.oegrid;

import openeye.oegrid.*;

public class ScalarGridCtor {
    public static void main(String argv[]) {
        OEScalarGrid grid = new OEScalarGrid(64, 64, 64, 0.0f, 0.0f, 0.0f, 0.5f);
    }
}
Note
The memory footprint of the grid is determined by the dimensions. It is easy to create grids that are too large for memory. For example, the nominally sized grid in Listing 1 will consume 64^3 * sizeof(float) = 1048576 bytes = 1 MB of memory.
Every grid can have its maximum and minimum values reported by GetXMax, GetXMin, etc. The image Grid coordinates shows a 2D representation for simplicity. It shows 4 grid points centered at (0,0), with a spacing of 1.0. Each grid point has spatial coordinates (0.5,0.5), (-0.5,0.5), etc. However, each grid point lies at the center of a pixel (or rather a voxel in 3D). Any property that falls into the area (volume) of that grid point is mapped to that grid point. This means that the maximum value of x is the extent of the grid, not just the maximum value of the grid point coordinates. In this case that value is 1.0.
Grid coordinates
Assuming grid is already filled with relevant values, Listing 2 demonstrates how to retrieve the grid value closest to the atoms in the molecule mol.
Listing 2: Getting values associated with coordinates
for (OEAtomBase atom : mol.GetAtoms()) {
    float xyz[] = new float[3];
    mol.GetCoords(atom, xyz);
    if (grid.IsInGrid(xyz[0], xyz[1], xyz[2]))
        System.out.printf("value = %.5f\n", grid.GetValue(xyz[0], xyz[1], xyz[2]));
}
Spatial coordinates can also be used to set data on the grid. Listing 3 demonstrates how to assign an atom’s partial charge to the grid point it is closest to.
Listing 3: Setting values associated with coordinates
for (OEAtomBase atom : mol.GetAtoms()) {
    float xyz[] = new float[3];
    mol.GetCoords(atom, xyz);
    if (grid.IsInGrid(xyz[0], xyz[1], xyz[2])) {
        float charge = (float)atom.GetPartialCharge();
        grid.SetValue(xyz[0], xyz[1], xyz[2], charge);
    }
}
Note
In the preceding two code fragments bounds checking was explicitly performed using the IsInGrid method. Accessing data outside the grid is undefined behavior (usually a segmentation fault). However, IsInGrid can become an expensive operation if performed excessively. One way to avoid this cost is to make sure your grid is big enough to enclose the object being worked on. See OEMakeGridFromCenterAndExtents for an example of constructing a grid that covers the entire molecule to ensure no spatial coordinate access is outside the bounds of the grid.
Grid indices are faster than spatial coordinates because there is no floating point arithmetic to perform. Grid indices make it easy to iterate over the neighbors of any particular point in the grid. Listing 4 demonstrates iterating over all 27 grid points adjacent to and including the grid point given by the grid indices: ix, iy, iz.
Listing 4: Iterating over neighbor points
float xyz[] = new float[3];
mol.GetCoords(atom, xyz);

int ixyz[] = new int[3];
grid.SpatialCoordToGridIdx(xyz[0], xyz[1], xyz[2], ixyz);

// Make sure not to go past grid bounds
int mini = Math.max(ixyz[0] - 1, 0);
int minj = Math.max(ixyz[1] - 1, 0);
int mink = Math.max(ixyz[2] - 1, 0);
int maxi = Math.min(ixyz[0] + 1, grid.GetXDim());
int maxj = Math.min(ixyz[1] + 1, grid.GetYDim());
int maxk = Math.min(ixyz[2] + 1, grid.GetZDim());

for (int k = mink; k < maxk; k++)
    for (int j = minj; j < maxj; j++)
        for (int i = mini; i < maxi; i++)
            System.out.printf("value = %.5f\n", grid.GetValue(i, j, k));
Grid values are actually stored in a large one dimensional block of memory. The fastest way to access all the data is to linearly scan through memory. Listing 5 demonstrates how to square every value in the grid.
Listing 5: Squaring every grid value
for (int i = 0; i < grid.GetSize(); ++i) {
    float val = grid.GetValue(i);
    grid.SetValue(i, val * val);
}
The following grid file formats are supported:
The ASCII format (.agd) was developed by OpenEye to allow for easy integration with other software. The following is an example of the ASCII output for a grid centered at the origin, 2 points along each axis, a spacing of 0.5, and every value zero.
Title: Example Grid
Mid: 0.000000 0.000000 0.000000
Dim: 2 2 2
Spacing: 0.500000
Values:
0.000000e+00
0.000000e+00
0.000000e+00
0.000000e+00
0.000000e+00
0.000000e+00
0.000000e+00
0.000000e+00
Listing 6 is the code used to write the ASCII format out. It is provided here to leave no doubt as to how to inter-operate with the format.
Listing 6: Writing the ASCII format
package openeye.docexamples.oegrid;

import openeye.oegrid.*;

public class ASCIIWriter {
    public static void main(String argv[]) {
        OEScalarGrid grid = new OEScalarGrid(2, 2, 2, 0.0f, 0.0f, 0.0f, 0.5f);
        grid.SetTitle("Simple Grid");

        System.out.printf("Title: %s\n", grid.GetTitle());
        System.out.printf("Mid: %12.6f %12.6f %12.6f\n",
                          grid.GetXMid(), grid.GetYMid(), grid.GetZMid());
        System.out.printf("Dim: %6d %6d %6d\n",
                          grid.GetXDim(), grid.GetYDim(), grid.GetZDim());
        System.out.printf("Spacing: %12.6f\n", grid.GetSpacing());
        System.out.println("Values:");
        for (int iz = 0; iz < grid.GetZDim(); iz++)
            for (int iy = 0; iy < grid.GetYDim(); iy++)
                for (int ix = 0; ix < grid.GetXDim(); ix++) {
                    System.out.printf("%-12.6e\n", grid.GetValue(ix, iy, iz));
                }
    }
}
Grids can also be attached to molecules and then written out to OEBinary (.oeb) files. A visualizer can then read in the molecule and grid without any other means of making the association. Listing 7 demonstrates how to attach a Gaussian grid to the molecule it was created from.
The grid is attached to a molecule using the ‘generic data’ interface provided by the OEBase base class. This allows an arbitrary number of grids (or any type of data) to be attached to molecules. Since all grids also derive from OEBase generic data can be attached to grids as well allowing for arbitrarily complex data hierarchies.
Listing 7: Attaching a grid to a molecule
OEScalarGrid grid = new OEScalarGrid();
oegrid.OEMakeMolecularGaussianGrid(grid, mol, 0.5f);
oegrid.OESetGrid(mol, "Gaussian Grid", grid);
oechem.OEWriteMolecule(ofs, mol);
https://docs.eyesopen.com/toolkits/java/oechemtk/grids.html
Straightforward Rails Authorization with Pundit
This is the second article in the “Authorization with Rails” series. In the previous article, we discussed CanCanCan, a widely known solution started by Ryan Bates and now supported by a group of enthusiasts. Today I am going to introduce a somewhat less popular, but still viable, library called Pundit, created by the folks at ELabs.
“Pundit” means “a scholar”, “a clever guy” (sometimes used as a negative description), but you don’t need to be a genius to use it in your projects. Pundit is really easy to understand and, believe me, you’ll love it. I fell in love with it when I browsed its documentation.
The idea behind Pundit is employing plain old Ruby classes and methods without involving any special DSLs. This gem only adds a couple of useful helpers, so, all in all, you can craft your system the way you see fit. This solution is a bit more low-level than CanCanCan, and it is really interesting to compare the two.
In this article we will discuss all of Pundit’s main features: working with access rules, using helper methods, scoping, and defining permitted attributes.
The working demo is available at sitepoint-pundit.herokuapp.com.
The source code can be found on GitHub.
Preparations
Our preparations will be super fast. Go ahead and create a new Rails app. Pundit works with Rails 3 and 4, but for this demo version 4.1 will be used.
Drop in the following gems into your Gemfile:
Gemfile
[...]
gem 'pundit'
gem 'clearance'
gem 'bootstrap-sass'
[...]
Clearance will be used to set up authentication very quickly. This gem was covered in my “Simple Rails Authentication with Clearance” article, so you can refer to it for more information.
Bootstrap will be used for basic styling, though you may skip this as always.
Don’t forget to run
$ bundle install
Now, run Clearance’s generator that is going to create a
User model and some basic configuration:
$ rails generate clearance:install
Modify the layout to include flash messages, as Clearance and Pundit rely on them to display information to the user:
layouts/application.html.erb
[...]
<div id="flash">
  <% flash.each do |key, value| %>
    <div class="alert alert-<%= key %>"><%= value %></div>
  <% end %>
</div>
[...]
Create a new scaffold for posts:
$ rails g scaffold Post title:string body:text
We also want our users to log in before working with the app:
application_controller.rb
[...] before_action :require_login [...]
Set up the root route:
routes.rb
[...] root to: 'posts#index' [...]
Admin User
To finish setting up our lab environment, we need to add an admin field to the users table, so modify the migration like this:
xxx_create_users.rb
[...] t.boolean :admin, default: false, null: false [...]
Run the migrations:
$ rake db:migrate
Lastly, let’s add a small button to easily switch between admin states:
layouts/application.html.erb
[...]
<% if current_user %>
  <div class="well well-sm">
    Admin: <strong><%= current_user.admin? %></strong><br>
    <%= link_to 'Toggle admin rights', user_path(current_user),
                method: :patch, class: 'btn btn-info' %>
  </div>
<% end %>
[...]
We have to check if current_user is present, otherwise non-authenticated users will see an error.
Add a route:
routes.rb
[...] resources :users, only: [:update] [...]
and a controller:
users_controller.rb
class UsersController < ApplicationController
  def update
    @user = User.find(params[:id])
    @user.toggle!(:admin)
    flash[:success] = 'OK!'
    redirect_to root_path
  end
end
That’s it! The lab environment is ready, so now we can start playing with Pundit.
Integrating Pundit
To start off, add the following line to ApplicationController:
application_controller.rb
[...] include Pundit [...]
Next, run Pundit’s generator:
$ rails g pundit:install
This is going to create a base policy class inside the app/policies folder. Policy classes are the core of Pundit and we are going to work with them extensively. The base policy class looks like this:
app/policies/application_policy.rb
class ApplicationPolicy
  attr_reader :user, :record

  def initialize(user, record)
    raise Pundit::NotAuthorizedError, "must be logged in" unless user
    @user = user
    @record = record
  end

  def index?
    false
  end

  def show?
    scope.where(:id => record.id).exists?
  end

  def create?
    false
  end

  def new?
    create?
  end

  # [...]
  # some stuff omitted

  class Scope
    # [...]
  end
end
Each policy is a basic Ruby class, but you have to keep in mind a couple of things:
- Policies should be named after the model they belong to, suffixed with the word Policy. For example, use PostPolicy for the Post model. If you don't have an associated model, you can still use Pundit – read more here.
- The first argument to the initialize method is a user record. Pundit uses the current_user method to get it, but if you do not have such a method, this behavior can be changed by overriding pundit_user. Read more here.
- The second argument is a model object. It does not have to be an ActiveModel object, though.
- The policy class has to implement query methods like create? or new? to check access rights.
If you are using the base policy class and inherit from it, you don’t really need to worry about most of this stuff, but there may be times when you need a custom policy class (for example, when you don’t have a corresponding model).
Providing Access Rules
Now, let’s write our first access rule. For example, we only want admin users to be able to destroy posts. That’s easy to do! Create a new post_policy.rb file inside the policies folder and paste the following code:
policies/post_policy.rb
class PostPolicy < ApplicationPolicy
  def destroy?
    user.admin?
  end
end
As you can see, the destroy? method is going to return true or false, indicating whether a user is authorized to perform the action.
Inside the controller we need to check our rule:
posts_controller.rb
[...]
def destroy
  authorize @post
  @post.destroy
  redirect_to posts_url, notice: 'Post was successfully destroyed.'
end
[...]
authorize accepts a second optional argument to provide the name of the rule to use. This is useful if your action’s name is different, for example:
def publish
  authorize @post, :update?
end
You can also pass a class name instead of an instance, for example, if you don’t have a resource to work with:
authorize Post
If the user is not allowed to perform the action, an error will be raised. We need to rescue from it and display a useful message instead. The easiest solution will be to just render some basic text:
application_controller.rb
[...]
rescue_from Pundit::NotAuthorizedError, with: :user_not_authorized

private

def user_not_authorized
  flash[:warning] = "You are not authorized to perform this action."
  redirect_to(request.referrer || root_path)
end
[...]
However, you might need to set up a custom message for different cases or translate it for other languages. That’s possible as well:
application_controller.rb
[...]
rescue_from Pundit::NotAuthorizedError, with: :user_not_authorized

private

def user_not_authorized(exception)
  policy_name = exception.policy.class.to_s.underscore
  flash[:warning] = t "#{policy_name}.#{exception.query}", scope: "pundit", default: :default
  redirect_to(request.referrer || root_path)
end
[...]
Don’t forget to update translations file:
config/locales/en.yml
en:
  pundit:
    default: 'You cannot perform this action.'
    post_policy:
      destroy?: 'You cannot destroy this post!'
If a user cannot destroy a post, there is no point in rendering the “Destroy” button. Luckily, Pundit provides a special helper method:
views/posts/index.html.erb
[...]
<% if policy(post).destroy? %>
  <td><%= link_to 'Destroy', post, method: :delete, data: { confirm: 'Are you sure?' } %></td>
<% end %>
[...]
Go ahead and check it out! Everything should be working nicely.
You might be thinking that inside the policies we don’t handle a case when a user is not authenticated at all. To fix it, simply add the following line:
policies/application_policy.rb
[...]
def initialize(user, record)
  raise Pundit::NotAuthorizedError, "must be logged in" unless user
  @user = user
  @record = record
end
[...]
Just don't forget to rescue from the Pundit::NotAuthorizedError error inside the ApplicationController.
Setting Up Relations
Now, let’s set up a one-to-many relationship between posts and users. Create a new migration and apply it:
$ rails g migration add_user_id_to_posts user:references
$ rake db:migrate
Modify model files:
models/post.rb
[...] belongs_to :user [...]
models/user.rb
[...] has_many :posts [...]
Also, it would be great to populate the database with some demo records. Use seeds.rb for that:
20.times do |i|
  Post.create({title: "Post #{i + 1}", body: 'test body', user_id: i > 10 ? 1 : 2})
end
We just create 20 posts belonging to different users. Feel free to modify this code as needed.
Run
$ rake db:seed
to populate the database.
Modify the policy to allow users to destroy their own posts:
policies/post_policy.rb
[...]
def destroy?
  user.admin? || record.user == user
end
[...]
As you probably remember, record is set to the object with which we are working. This is nice, but we can do more. How about creating a scope to load only the posts that the user owns? Pundit supports that, too!
Working with Scopes
First of all, create a new, non-RESTful action:
posts_controller.rb
def user_posts
end
routes.rb
resources :posts do
  collection do
    get '/user_posts', to: 'posts#user_posts', as: :user
  end
end
Add a top menu:
layouts/application.html.erb
[...]
<nav class="navbar navbar-inverse">
  <div class="container">
    <div class="navbar-header">
      <%= link_to 'Pundit', root_path, class: 'navbar-brand' %>
    </div>
    <div id="navbar">
      <ul class="nav navbar-nav">
        <li><%= link_to 'All posts', posts_path %></li>
        <li><%= link_to 'Your posts', user_posts_path %></li>
      </ul>
    </div>
  </div>
</nav>
[...]
We need to create an actual scope. Inside the application_policy.rb file there is a Scope class with the following code:
policies/application_policy.rb
class Scope
  attr_reader :user, :scope

  def initialize(user, scope)
    @user = user
    @scope = scope
  end

  def resolve
    scope
  end
end
Just as with policies, there are a couple of things to take into consideration:
- The class should be named Scope and nested inside your policy class.
- The first argument passed to the initialize method is the user, just like with policies.
- The second argument is the scope (an instance of ActiveRecord or ActiveRecord::Relation or anything else).
- The Scope class implements a method called resolve that returns an iterable result.
Just inherit from the base class and implement your own resolve method:
policies/post_policy.rb
class PostPolicy < ApplicationPolicy
  class Scope < Scope
    def resolve
      scope.where(user: user)
    end
  end
  [...]
end
This scope simply loads all the posts that belong to a user. Use this new scope inside your controller:
posts_controller.rb
[...]
def user_posts
  @posts = policy_scope(Post)
end
[...]
You may want to extract some code from the index view into partials (and prettify the table a bit):
views/posts/_post.html.erb
<tr>
  <td><%= post.title %></td>
  <td><%= post.body %></td>
  <td><%= link_to 'Show', post %></td>
  <td><%= link_to 'Edit', edit_post_path(post) %></td>
  <% if policy(post).destroy? %>
    <td><%= link_to 'Destroy', post, method: :delete, data: { confirm: 'Are you sure?' } %></td>
  <% end %>
</tr>
views/posts/_list.html.erb
<table class="table table-bordered table-striped table-condensed table-hover">
  <thead>
    <tr>
      <th>Title</th>
      <th>Body</th>
      <th colspan="3"></th>
    </tr>
  </thead>
  <tbody>
    <%= render @posts %>
  </tbody>
</table>
<br>
<%= link_to 'New Post', new_post_path %>
views/posts/index.html.erb
<h1>Listing Posts</h1>
<%= render 'list' %>
Now, just create a new view:
views/posts/user_posts.html.erb
<h1>Your Posts</h1> <%= render 'list' %>
Boot the server and observe the result!
Inside the views you may use the following code to select only the required records:
<% policy_scope(@user.posts).each do |post| %> <% end %>
Enforcing Authorization
If you want to check that authorization did take place in your controller, use verify_authorized as an after action. You can also choose which actions to check:
posts_controller.rb
[...] after_action :verify_authorized, only: [:destroy] [...]
The same can be done to ensure scoping took place:
posts_controller.rb
[...] after_action :verify_policy_scoped, only: [:user_posts] [...]
Under some circumstances, however, it may be unreasonable to perform the authorization check, so you might want to skip it. For example, if the record to destroy wasn't found, we just return without any further actions. For this, the skip_authorization method is used:
[...]
def destroy
  if @post.present?
    authorize @post
    @post.destroy
  else
    skip_authorization
  end
  redirect_to posts_url, notice: 'Post was successfully destroyed.'
end

private

def set_post
  @post = Post.find_by(id: params[:id])
end
[...]
Permitted Parameters
The last feature we are going to discuss is the ability to define permitted parameters in Pundit's policies. Note that you have to use Rails 4, or Rails 3 with the strong_parameters gem, for this to work properly.
Suppose, we have some “special” parameter that only administrators should be able to modify. Let’s add the corresponding migration:
$ rails g migration add_special_to_posts special:boolean
Modify the migration a bit:
xxx_add_special_to_posts.rb
[...] add_column :posts, :special, :boolean, default: false [...]
and apply it:
$ rake db:migrate
Now, define a new method in your policies:
policies/post_policy.rb
[...]
def permitted_attributes
  if user.admin?
    [:title, :body, :special]
  else
    [:title, :body]
  end
end
[...]
So, an administrator can set all attributes, whereas users can only modify title and body. Lastly, update the controller methods:
posts_controller.rb
[...]
def create
  @post = Post.new
  @post.update_attributes(permitted_attributes(@post))
  if @post.save
    redirect_to @post, notice: 'Post was successfully created.'
  else
    render :new
  end
end

def update
  if @post.update(permitted_attributes(@post))
    redirect_to @post, notice: 'Post was successfully updated.'
  else
    render :edit
  end
end
[...]
permitted_attributes is a helper method that expects a resource to be passed. There is a small gotcha inside the create method: you need to initialize @post and only then set its attributes. If you do this:
@post = Post.new(permitted_attributes(@post))
an error will be raised, because you are trying to pass a nonexistent object to permitted_attributes.
Instead of employing permitted_attributes, you can stick to the basic post_params and modify it like this:
posts_controller.rb
[...]
def post_params
  params.require(:post).permit(policy(@post).permitted_attributes)
end
[...]
Lastly, modify the views:
views/posts/_form.html.erb
[...]
<div class="field">
  <%= f.label :special %><br>
  <%= f.check_box :special %>
</div>
[...]
views/posts/_list.html.erb
[...]
<thead>
  <tr>
    <th>Title</th>
    <th>Body</th>
    <th>Special?</th>
    <th colspan="3"></th>
  </tr>
</thead>
[...]
views/posts/_post.html.erb
<tr>
  <td><%= post.title %></td>
  <td><%= post.body %></td>
  <td><%= post.special? %></td>
  <td><%= link_to 'Show', post %></td>
  <td><%= link_to 'Edit', edit_post_path(post) %></td>
  <% if policy(post).destroy? %>
    <td><%= link_to 'Destroy', post, method: :delete, data: { confirm: 'Are you sure?' } %></td>
  <% end %>
</tr>
Now boot the server and try setting the “special” parameter to true while being a basic user – you shouldn’t be able to do so.
Conclusion
In this article we’ve discussed Pundit – a great authorization solution employing basic Ruby classes. We’ve taken a look at most of its features. I encourage you to browse its documentation, because you might be interested in how to provide additional context or manually specify the policy class.
Have you ever used Pundit before? Would you consider using it in future? Do you think it is more convenient than CanCanCan or is it just a different solution? Share your opinion in the comments!
As always, I thank you for staying with me. If you want me to cover a topic, don’t hesitate to ask. See you!
https://www.sitepoint.com/straightforward-rails-authorization-with-pundit/
Created on 2015-12-15 03:38 by abarnert, last changed 2016-01-04 22:25 by abarnert.
Example:
class MyDict(collections.abc.Mapping):
    def __init__(self, d): self.d = d
    def __len__(self): return len(self.d)
    def __getitem__(self, key): return self.d[key]
    def __iter__(self): return iter(self.d)

d = {1:2, 3:4}
m = MyDict({1: 2, 3: 4})
Of course it's obvious why this happens once you think about it: in order to handle types that implement the old-style sequence protocol (just respond to `__getitem__` for all integers from 0 to `len(self)`), `reversed` has to fall back to trying `__getitem__` for all integers from `len(d)-1` to 0.
If you provide a `__reversed__` method, it just calls that. Or, if you're a C-API mapping like `dict`, `PySequence_Check` will return false and it'll raise a `TypeError`. But for a Python mapping, there's no way `PySequence_Check` or anything else can know that you're not actually a sequence (after all, you implement `__getitem__` and `__len__`), so it tries to use you as one, and confusion results.
I think trying to fix this for _all_ possible mappings is a non-starter.
But fixing it for mappings that use `collections.abc.Mapping` is easy: just provide a default implementation of `collections.abc.Mapping.__reversed__` that just raises a `TypeError`.
I can't imagine this would break any working code. If it did, the workaround would be simple: just implement `def __reversed__(self): return (self[k] for k in reversed(range(len(self))))` to restore the old fallback behavior.
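A sketch of the proposed default (shown here on a user subclass rather than on `collections.abc.Mapping` itself, since user code can't patch the stdlib class):

```python
import collections.abc

class MyDict(collections.abc.Mapping):
    def __init__(self, d): self.d = d
    def __len__(self): return len(self.d)
    def __getitem__(self, key): return self.d[key]
    def __iter__(self): return iter(self.d)

    def __reversed__(self):
        # Refuse, exactly as dict did at the time this issue was filed
        # (and as C-API mappings still do).
        raise TypeError("argument to reversed() must be a sequence, not %s"
                        % type(self).__name__)

try:
    reversed(MyDict({1: 2, 3: 4}))
except TypeError as e:
    print("TypeError:", e)
```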
Can it be reproduced in default branch? I tried but got:
AttributeError: module 'collections' has no attribute 'abc'
You need to do 'import collections.abc' as abc is a submodule of collections, and is not imported from a bare 'import collections'.
I got 2 instead of 3.
What are we exactly expecting here? How can a dictionary be reversed?
> I can't imagine this would break any working code. If it did, the workaround would be simple: just implement `def __reversed__(self): return (self[k] for k in reversed(range(len(self))))`.
This seems to make no difference. I still got the KeyError.
> What are we exactly expecting here?
Well, naively, I was expecting a TypeError, just as you get for dict, or a subclass of dict, or a custom extension type that implements the C-API mapping protocol.
Once you understand how reversed works, you can understand why it gives you a nonsensical and useless iterator instead. But nobody would actually _want_ that.
So, as I proposed in the initial description, and the title, what we should be doing is raising a TypeError.
> How can a dictionary be reversed?
Assuming this is a pointless rhetorical question: That's exactly the point. There's no sensible meaning to reversing a dictionary, so it should raise a TypeError. Exactly as it already does for dict, subclasses of dict, and C-API mappings.
If this wasn't rhetorical: I guess you could argue that any arbitrary order in reverse is any arbitrary order, so even returning iter(m) would be acceptable. Or maybe reversed(list(m)) would be even better, if it didn't require O(N) space. But in practice, nobody will ever expect that--again, they don't get it from dict, subclasses, C-API mappings--so why go out of our way to implement it? So, again, it should raise a TypeError.
> This seems to make no difference. I still got the KeyError.
Of course. Again, the current behavior is nonsensical, will almost always raise a KeyError at some point, and will never be anything a reasonable person wants. So a workaround that restores the current behavior will also be nonsensical, almost always raise a KeyError at some point, and never be anything a reasonable person wants.
But, if you happen to be unreasonably unreasonable--say, you created a mapping with {2:40, 0:10, 1:20} and actually wanted reversed(m) to confusingly give you 40, 20, 10--and this change did break your code, the workaround would restore it.
I've seen no evidence that this is a problem in practice. It seems no more interesting or problematic than sequence argument unpacking working with dictionaries, "a, b = {'one': 1, 'two': 2}".
> It seems no more interesting or problematic than sequence argument unpacking working with dictionaries, "a, b = {'one': 1, 'two': 2}".
Dictionaries (including dicts, dict subclasses, and custom Mappings) are iterables. People use that fact every time they write `for key in d`. So it's not at all problematic that they work with iterable unpacking. Especially since here, custom Mappings work exactly the same way as dicts, dict subclasses, custom Sets, iterators, and every other kind of iterable.
Dictionaries are not sequences. People never write code expecting them to be. So it is problematic that they work with sequence reversing. Especially since here, custom Mappings do _not_ work the same way as dicts, dict subclasses, custom Sets, iterators, and other non-sequences, all of which raise a TypeError.
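For concreteness, the old-style sequence fallback that produces this behavior can be reproduced with a minimal mapping-like class (a sketch; the class name and data are illustrative, and it deliberately does not inherit from collections.abc.Mapping):

```python
class MyDict:
    # A plain class implementing the mapping protocol by hand, so that
    # reversed() sees only __len__/__getitem__ and falls back to the
    # old-style sequence protocol.
    def __init__(self, data):
        self._data = dict(data)
    def __getitem__(self, key):
        return self._data[key]
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)

m = MyDict({'one': 1, 'two': 2})
it = reversed(m)      # succeeds: the bogus iterator is created lazily
try:
    next(it)          # calls m[len(m) - 1], i.e. m[1] -- not a key
except KeyError as e:
    print('KeyError:', e)
```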
But the workaround suggested here:
def __reversed__(self):
return (self[k] for k in reversed(range(len(self))))
is also not a general solution, i.e. it is only applicable in cases like the following:
m = MyDict({2:40, 0:10, 1:20})
but for any other mapping which does not have 0 as a key, it results in KeyError. So another solution, which would be more general could be:
def __reversed__(self):
keys = [k for k in self.keys()]
return (self[k] for k in reversed(keys))
No, the workaround was for duplicating the existing behavior if the fix (raising an error like a normal dict does) broke someone's code. The *only* possibility here is to have a __reversed__ that raises a TypeError.
What is it that makes reversed raise a typeerror on dict here? Not that we can change it at this point, but reversed blindly using len and __getitem__ for user classes but not on dict is rather inconsistent. I suppose the dict TypeError special case catches common mistakes? In which case adding a __reversed__ that raises a TypeError to Mapping seems to make sense for the same reason.
@R. David Murray:
> What is it that makes reversed raise a typeerror on dict here?
There are separate slots for tp_as_sequence and tp_as_mapping, so a C type can be (and generally is) one or the other, not both.
But for Python types, anything that has __getitem__ is both a sequence and a mapping at the C level. (It's one of the few minor inconsistencies between C and Python types left, like having separate slots for nb_add and sq_concat in C but only __add__ for both in Python.)
> Not that we can change it at this point, but reversed blindly using len and __getitem__ for user classes but not on dict is rather inconsistent.
But it's consistent with iter blindly using __len__ and __getitem__ if __iter__ is not present on Python classes but not doing that on C classes. That's how "old-style sequences" work, and I don't think we want to get rid of those (and, even if we did, I'm pretty sure that would require at least a thread on -dev or -ideas...).
> I suppose the dict TypeError special case catches common mistakes?
Yes. That's probably not why it was implemented (it's easier for a C type to _not_ fake being a broken sequence than to do so), but it has that positive effect.
> In which case adding a __reversed__ that raises a TypeError to Mapping seems to make sense for the same reason.
Exactly.
I think Raymond's point is that, while it does make sense, it may still not be important enough to be worth even two lines of code. Hopefully we can get more than two opinions (his and mine) on that question; otherwise, at least as far as I'm concerned, he trumps me.
@Swati Jaiswal:
> But the work around suggested here ... is also not a general solution, i.e. ... for any other mapping which does not have 0 as a key, it results in KeyError.
You're missing the point. The workaround isn't intended to be a general solution to making mappings reversible. It's intended to produce the exact same behavior as the current design, for any code that somehow depends on that. So, any mapping that happens to be reversible by luck is reversible with the workaround; any mapping that successfully produces odd nonsense produces the same odd nonsense; any mapping that raises a KeyError(0) will raise the same KeyError(0).
In the incredibly vast majority of cases (probably 100%) you will not want that workaround; you will want the new behavior that raises a TypeError instead. I don't think the workaround needs to be mentioned in the documentation or anything; I just produced it to prove that, on the incredibly unlikely chance that the change is a problem for someone, the workaround to restore the old behavior is trivial.
Meanwhile, your general solution takes linear space, and linear up-front work, which makes it unacceptable for a general __reversed__ implementation. When you actually want it, you can do it manually and explicitly in a one-liner, as already explained in the docs.
If you're still not getting this, pretend I never mentioned the workaround. It really doesn't matter.
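As an illustration of that explicit one-liner, assuming an ordinary insertion-ordered dict (Python 3.8 later gave dict itself reversed() support, but the explicit spelling works on any version with ordered dicts, 3.7+):

```python
d = {'one': 1, 'two': 2, 'three': 3}

# When reverse iteration is really wanted, spell it out explicitly;
# this costs O(N) space, which is why it should not be a default
# __reversed__ implementation on Mapping.
print(list(reversed(list(d))))   # keys in reverse insertion order
```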
Okay, so should I go for a patch for it? And sorry if it sounds naive, but do we provide the workaround, or would the user implement it if they purposely want it? If we provide it, where should it be documented?
@Andrew Barnert,
sorry, I didn't see your previous messages, so please ignore the last message I sent. I got your point, i.e. we just need to make Mapping raise the TypeError, and the workaround is never provided by us.
Should I go for the patch with it?
You can write a patch if you like, but we haven't come to a consensus on actually doing anything. I'm leaning toward Andrew's position, but not strongly, so we need some more core dev opinions.
Unless this somehow breaks the philosophy of ABCs, I would be inclined to add the negative methods.
I agree that by default calling reversed() on a mapping should raise a TypeError. But for now issubclass(collections.abc.Mapping, typing.Reversible) returns False. If we add a default __reversed__ implementation, this test will return True. We have to find another way to make Mapping truly non-reversible in all senses.
Perhaps there is a bug in typing.Reversible. It doesn't accept all types supported by reversed().
>>> class Counter(int):
... def __getitem__(s, i): return i
... def __len__(s): return s
...
>>> list(reversed(Counter(10)))
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
>>> issubclass(Counter, typing.Reversible)
False
And accepts types that don't work with reversed().
>>> class L(list):
... __reversed__ = None
...
>>> reversed(L())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object is not callable
>>> issubclass(L, typing.Reversible)
True
> Perhaps there is a bug in typing.Reversible. It doesn't accept all types supported by reversed().
> ... And accepts types that don't work with reversed().
The problem is the way the two are defined:
* Reversible is true if you implement __reversed__
* reversed works if you implement __reversed__ or implement the old-style sequence protocol.
That explains why it doesn't work on tuple, bytearray, etc. Iterable actually has the exact same problem, but, because it's a supertype of Sequence, and we have explicit Sequence.register(tuple) and MutableSequence.register(bytearray) in collections.abc, and typing.Iterable specifies collections.abc.Iterable as its "extra", it all works out.
We could do the same for Reversible: add a collections.abc.Reversible, make it a subtype of Iterable and make Sequence a subtype of Reversible instead of Iterable, and make that the extra for typing.Reversible. Then it would work for all of those builtin types (and many third-party types that explicitly register with Sequence), just as Iterable does.
But that only solves the problem in one direction. To solve it in the other direction, we'd need some way to either explicitly mark a method as not implemented (maybe setting it to None, or to any non-callable, or any data descriptor?) that ABC subclass hooks and/or typing checks are expected to understand, or unregister a class with an ABC so that it isn't a subtype even if it passes the implicit hooks.
Or... could we just drop Reversible as an implicit protocol? The lack of an explicit "deny" mechanism for implicit protocols and ABCs is a more general problem, but if this is the only actual instance of that problem in real life, do we need to solve the general problem? If not, there's no obvious way to define typing.Reversible that isn't wrong, it doesn't have a corresponding ABC, it doesn't seem like it will be useful often enough to be worth the problems it causes, and I doubt there's much real-life code out there already depending on it, so that seems a lot easier.
So this issue now has two problems being discussed in it. Someone should start a new issue for the typing.Reversible problem.
Serhiy already filed the typing.Reversible bug on the separate typehinting tracker. So, unless fixing that bug requires some changes back to collections.abc or something else in the stdlib, I think the only issue here is the original one, on Mapping.
Also, I filed #25958 as an ABC equivalent to Serhiy's typehinting problem. I don't know if that actually needs to be solved, but that definitely takes it out of the way for this issue.
This sounds good. Also, reversed() could then be modified to produce a
better error. (The "unhashable" error comes from the hash() builtin, so
that's also a precedent.)
> This sounds good. Also, reversed() could then be modified to produce a better error.
Should `iter` also be modified to produce a better error if `__iter__` is None?
Also, should this be documented? Maybe a sentence in the "Special method names" section of the "Data model" chapter, like this:
> To indicate that some syntax is not supported, set the corresponding special method name to None. For example, if __iter__ is None, the class is not iterable, so iter() will not look for __getitem__.
All sounds fine.
The attached patch does the following:
* collections.abc.Mapping.__reversed__ = None.
* collections.abc.Iterable.__subclasshook__ checks for None the same way Hashable does:
* This tests for any falsey value, not just None. I'm not sure this is ideal, but it's consistent with Hashable, and it's unlikely to ever matter.
* The test will only block implicit subclassing. If a class, e.g., inherits from tuple (which is explicitly registered with Sequence, which inherits from Iterable), it's Iterable, even if it sets __iter__ = None. I think this is the right behavior, and it's consistent with Hashable.
* iter and reversed add checks for None, which raise a TypeError with the appropriate message (instead of "'NoneType' is not callable").
* datamodel.rst section "Special method names" includes a paragraph on setting special methods to None.
* Tests for changes to reversed (in test_enumerate.py), iter (in test_iter.py), Iterable (in test_collections.py), and Mapping (in collections.py).
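A quick sketch of the behavior the patch produces (modern CPython, which incorporated this change, behaves this way; the class name is illustrative):

```python
class NotIterable:
    def __getitem__(self, i):   # alone, this would make the class an
        return i                # old-style sequence, hence iterable
    __iter__ = None             # ...but None blocks the fallback

try:
    iter(NotIterable())
except TypeError as e:
    print(e)   # a TypeError saying the object is not iterable
```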
If this patch goes ahead, I think the ABC documentation should clarify which methods are checked for None and which aren’t. The datamodel.rst file will suggest None for any method, but ABC will only support it for Iterable and Hashable (I think).
Also, what is the point of the odd __getitem__() method in test_enumerate.py? Maybe you should use assertRaisesRegex() to check that the intended TypeError is actually raised.
> If this patch goes ahead, I think the ABC documentation should clarify which methods are checked for None and which aren’t.
That seems fair.
Also, as you pointed out on #25958, at least one other ABC has the same problem as Iterable: you can block the "in" operator by setting __contains__=None, but you'll still be a Container. So, do we want to go through all of the existing ABCs and make sure they all do this negative check, instead of just Iterable?
> Also, what is the point of the odd __getitem__() method in test_enumerate.py? Maybe you should use assertRaisesRegex() to check that the intended TypeError is actually raised.
If an implementation doesn't raise a TypeError there, that's a failure. If it raises one with a different (possibly less helpful) message, I think that's just a quality-of-implementation issue, isn't it?
IMO allowing any special method to be set to None seems to make more trouble than it is worth. Are there practical problems to address, or are they all theoretical?
Ideally I think it would be better to require __reversed__() for reversed() to work, but such a change would break compatibility.
Regarding test_enumerate.py, your class looks like this:
class Blocked(object):
def __getitem__(self): return 1
def __len__(self): return 2
__reversed__ = None
The signature of __getitem__() is wrong, and causes a TypeError during iteration, although your particular test does not go that far. When I see someone using assertRaises() with a common exception like TypeError, I instinctively suggest checking the message to avoid these kind of test case bugs.
I suggest either remove __getitem__() if it serves no purpose, or change it to something like this if you really want an unreversible sequence:
def __getitem__(self, index):
return (1, 1)[index]
I think I tried to address all questions in #25958.
> IMO allowing any special method to be set to None seems to make more trouble than it is worth.
That's exactly why allowing _any_ special method to be None is a separate issue (#25958). Most special methods don't have any corresponding problem to the one with __reversed__.
> Ideally I think it would be better to require __reversed__() for reverse() to work, but such a change would break compatibility.
See the -ideas thread "Deprecating the old-style sequence protocol".
> Regarding test_enumerate.py, your class looks like this:
Please look at the two classes directly above it in the same function. The new Blocked exactly parallels the existing NoLen.
> I suggest either remove __getitem__() if it serves no purpose
It very definitely serves a purpose. The whole point of the new test is that reversed will not fall back to using __getitem__ and __len__ if __reversed__ is None. So __getitem__ has to be there; otherwise, we already know (from the NoGetItem test) that it wouldn't get called anyway.
This is exactly the same as the NoLen test, which verifies that __reversed__ will not fall back to __getitem__ and __len__ if one is present but not both.
> , or change it to something like this if you really want an unreversible sequence:
Sure, if I wanted a real class that could be used as a sequence but could not be reversed. But all I want here is a toy class for testing the specific method lookup behavior. Again, exactly like the existing classes in the same test.
Finally, from your previous comment:
> I think the ABC documentation should clarify which methods are checked for None and which aren’t.
Looking at this a bit more: The ABC documentation doesn't even tell you that, e.g., Container and Hashable have subclass hooks that automatically make any class with __contains__ and __hash__ act like registered subclasses while, say, Sequence and Set don't. So, you're suggesting that we should explain where the hooks in some of those types differ, when we haven't even mentioned where the hooks exist at all. Maybe collections.abc _should_ have more detail in the docs, but I don't think that should be part of this bug. (Practically, I've always found the link to the source at the top sufficient--trying to work out exactly why tuple meets some ABC and some custom third-party sequence doesn't, which is a pretty rare case to begin with, is also pretty easy to deal with: you scan the source, quickly find that Sequence.register(tuple), read up on what it does, and realize that collections.abc.Sequence.register(joeschmoe.JoeSchmoeSequence) is what you want, and you're done.)
Agreed that improving the docs doesn't belong in this bug, but in general
if the docs aren't clear enough and only a visit to the source helps you
understand, something's wrong. Because the source may do things one way
today and be changed to do things differently tomorrow, all within the
(intended) promises of the API. But without docs we don't know what those
promises are.
I’m sorry I only read your patch and did not see the NoLen class above Blocked. (I blame the lack of a Rietveld review link.) I still think __getitem__() should have a valid signature, but I acknowledge that it’s not really your fault. :)
My main concern about the documentation was that in your patch you say _all_ special methods are now allowed to be None, but in your code you only check __iter__(). This is adding new undocumented inconsistencies e.g. with Iterable vs Container. Maybe it would be better to only say that setting __iter__ to None is supported, instead of setting any special method.
> My main concern about the documentation was that in your patch you say _all_ special methods are now allowed to be None, but in your code you only check __iter__(). This is adding new undocumented inconsistencies e.g. with Iterable vs Container.
No it isn't. We already have undocumented inconsistencies between, e.g., Hashable vs. Container; we're just moving Iterable from one set to the other. (Notice that there's an even larger inconsistency between, e.g., Hashable and Container vs. Sequence and Set. As Guido suggests, that probably _does_ need to be fixed, but not as part of this bug, or #25958.)
And, as for the docs, it's already true that you can block fallback and inheritance of special methods by assigning them to None, but that isn't documented anywhere.
So, this bug doesn't fix any of those inconsistencies, but it doesn't add any new ones, either. If you think we actually _need_ to fix them, see #25958 as at least a starting point. (Notice that Guido seems to want that one fixed, so, assuming I can write a decent patch for it, this bug would then become a one-liner: "__reversed__ = None" inside Mapping.)
My original task was to create a .NET wrapper class for an existing native C++ code and expose to web page as in Active X but in .NET and set up an environment in which the web page can communicate with native code and vice-versa. This task isn't finished yet, but I have found a solution for intercommunication between the client side web page and a .NET user control. In this article, I will show you how to place a C# user control on the Internet Explorer web page (client side) and how the user control can communicate with the web page and back. In this way, rich client side applications can be built in .NET.
In Internet Explorer, we can embed the user control with the <object> tag. The user control will be able to fire an event which can be caught by JavaScript on the web page, and the JavaScript can call into the user control's code. The user control needs to be granted privileges with caspol.exe. Instead of adding full trust to just the user control, I will grant it to the whole site (which can be a security risk in some cases), because from .NET 2.0 onwards there is no .NET administration tool or global assembly cache utility unless the user downloads the platform SDK rather than just the framework.
I have installed Visual studio 2008 with .NET Framework 3.5, and without platform SDK, so I haven't added gacutil.exe to cache my assembly.
Start Visual Studio 2008 with C# installed and select File -> New Project. On the left side, click on C#. Now on the right side, select WindowsFormsControlLibrary. I have renamed my WindowsFormsControlLibrary1 to UserControl3 (to distinguish it from my other work), which changes the default namespace from WindowsFormsControlLibrary1 to UserControl3 - nothing else. There are two samples on the web which are almost working: this one and an advanced one (you need to delete the & n b s p characters from the source code sample); they describe the events and the delegates involved. There are other samples with communication here. Don't forget that you need to grant security privileges to user controls for this to work; here is a description of that. In short, the web page needs to be added to Trusted Sites: start Internet Explorer 6 -> Tools -> Internet Options, on the Security tab click on Trusted Sites, then the Sites button, then add the site here (I have added my LAN IP here -). That adds the trusted site for Internet Explorer, but we need to do it for .NET too:
CasPol.exe -q -m -ag All_Code -zone Intranet FullTrust
CasPol.exe -q -m -ag All_Code -zone Trusted FullTrust
Caspol.exe is located at C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727 by default on Windows XP with .NET Framework 3.5, and isn't in the PATH system variable!
Now right-click on the project in the Visual Studio Solution Explorer window and select Properties, click on the Build tab and check "Register for COM interop". In Solution Explorer, expand the Properties node under the project and double-click AssemblyInfo.cs. You should have:
[assembly: ComVisible(true)]
and a guid generated:
[assembly: Guid("936987f7-536b-4a0c-b8f2-37e387d97108")]
Go into design mode and add a button to the user control. I have renamed the button to btFireEvent and set its text to "Fire Javascript Event". Double-click on the button (this generates the correct handler method). Now merge the code from the samples into UserControl1.cs. After that, the file should look like this:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Drawing;
using System.Data;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Runtime.InteropServices;
namespace UserControl3
{
[InterfaceType(ComInterfaceType.InterfaceIsIDispatch)]
public interface UserControl1Events
{
[DispId(0)]
void SubmitClicked();
}
public delegate void SubmitClickedHandler();
[ComSourceInterfaces(typeof(UserControl1Events))]
public partial class UserControl1 : UserControl
{
public event SubmitClickedHandler SubmitClicked;
protected virtual void OnSubmitClicked()
{
if (SubmitClicked != null)
SubmitClicked();
}
public UserControl1()
{
InitializeComponent();
}
public string WhatTimeIsIt(string msg) {
MessageBox.Show(msg,"Inside C#");
return DateTime.Now.ToString();
}
private void btFireEvent_Click(object sender, EventArgs e)
{
OnSubmitClicked();
}
}
}
Build it! Under the bin/Debug dir, the desired DLL, UserControl3.dll, has to be there.
<html>
<body>
<object id="SimpleUserControl"
classid=""
height="150" width="150" >
</object>
<input type="button" value="What time is it?" onclick="DoWhatTimeIsIt();" />
</body>
<script language="javascript">
function DoWhatTimeIsIt() {
alert(document.SimpleUserControl.WhatTimeIsIt('I am Szabi on Javascript'));
}
</script>
<script for="SimpleUserControl" event="SubmitClicked" language="javascript">
alert("Hello from C# fired event");
</script>
</html>
Where you should change 193.231.162.210 to your local IP or localhost or 127.0.0.1 and the tutorial to the folder name which is created in the previous step.
I hope this sample works for you too. I would welcome posts on how to pack/deploy multiple managed DLLs (where one of them wraps native C++ code).
I have tried with:
<link rel="Configuration" href="/path/to/MyControl.config"/>
This is explained here.
But with .NET 3.5, in my config file I can't find HttpForbiddenHandler and can't set up the config file like this:
<configuration>
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<dependentAssembly>
<assemblyIdentity name="blsTree" culture="neutral"
publicKeyToken="007a0b78_bf42c201"/>
<codeBase version="1.0.955.24881"
href=""/>
</dependentAssembly>
</assemblyBinding>
</runtime>
</configuration>
Fiddler is a highly-extensible web debugging platform, and you can extend it using any .NET language, or via the built-in FiddlerScript engine. While most major UI extensions are built in C#, you can add simple UI extensions very easily in FiddlerScript.
For example, say that you’ve decided that HTTP Status Dogs is the coolest site on the Internet, and you want to enhance Fiddler with this meme. Doing so is super-simple with FiddlerScript.
First, click Rules > Customize Rules to open your FiddlerScript rules. If you have installed the (recommended) FiddlerScript editor, it will appear, otherwise your default text editor will open.
At the very top of the script file, add the line:
import System.Text;
…just above the other import statements.
Then, move the cursor to just inside the Handlers class:
There, add the following code:
public BindUITab("HTTPStatusDogs", true)
static function ShowStatusDogs(arrSess: Session[]):String
{
  if (arrSess.Length < 1) return "<html>Please select one or more sessions.</html>";
  var oSB: System.Text.StringBuilder = new System.Text.StringBuilder();
  oSB.Append("<html><head>");
  oSB.Append("<style>iframe { width: '100%'; height: 600px; frameBorder:0 }</style>");
  oSB.Append("</head><body>");
  for (var i:int = 0; i<arrSess.Length; i++)
  {
    oSB.AppendFormat("<iframe frameBorder=0 src='http://httpstatusdogs.com/{0}'></iframe>", arrSess[i].responseCode);
  }
  oSB.Append("</body></html>");
  return oSB.ToString();
}
Save the script file, and the script will automatically recompile. A new “HTTPStatusDogs” tab will appear; when you activate it, the image for each Selected Session’s HTTP response code will be shown in the tab.
The “magic” that makes this work is invoked by the BindUITab attribute atop the function declaration:

public BindUITab("HTTPStatusDogs", true)

The presence of this attribute informs Fiddler that the following function will provide data to be rendered to a new tab, whose name is provided by the first parameter ("HTTPStatusDogs"). The second parameter (true) indicates that the string returned by the function should be rendered as HTML in a web browser view. To easily debug your HTML, change that true to false, and Fiddler will instead show the returned string as plain text in a textbox.
Obviously, you can easily customize this code to accomplish more productive tasks. :-)
Happy fiddling!
-Er
Hey every1,
I'm trying to have a Map with a string as key, and a 2 (or more) element array as data. Something like:

map <string, int[2]> myMap

I'm getting errors when I try to load data into the Map:

readin.cpp(23) : error C2106: '=' : left operand must be l-value

Here is my code:
#include <fstream>
#include <iostream>
#include <string>
#include <map>
using std::map;
using namespace std;
int main(){
map <string, int[2]> b;
map <string, int[2]>::iterator iter;
string rob = "ROB";
int c[2];
c[0] = 1;
c[1] = 0;
b[rob] = c;
iter = b.begin();
cout << iter->second << endl;
}
I've tried alternatives to the line

b[rob] = c;

like:

b[rob] = c[];

which gave me: error C2059: syntax error : ']'

or:

b[rob] = c[8];

which gave me: error C2440: '=' : cannot convert from 'int' to 'int [2]'
anyone have any ideas how i should load the data?
Cheers,
Rob.
Form POST middleware¶
To discover whether your Object Storage system supports this feature, check with your service provider or send a GET request using the /info path.
You can upload objects directly to the Object Storage system from a browser by using the form POST middleware. This middleware uses account or container secret keys to generate a cryptographic signature for the request. This means that you do not need to send an authentication token in the X-Auth-Token header to perform the request.
The form POST middleware uses the same secret keys as the temporary URL middleware uses. For information about how to set these keys, see Secret Keys.
For information about the form POST middleware configuration options, see FormPost in the Source Documentation.
Form POST format¶
To upload objects to a cluster, you can use an HTML form POST request.
The format of the form POST request is:
Example 1.14. Form POST format
<form action="SWIFT_URL"
      method="POST"
      enctype="multipart/form-data">
  <input type="hidden" name="redirect" value="REDIRECT_URL"/>
  <input type="hidden" name="max_file_size" value="BYTES"/>
  <input type="hidden" name="max_file_count" value="COUNT"/>
  <input type="hidden" name="expires" value="UNIX_TIMESTAMP"/>
  <input type="hidden" name="signature" value="HMAC"/>
  <input type="file" name="FILE_NAME"/>
  <br/>
  <input type="submit"/>
</form>
action=”SWIFT_URL”
Set to full URL where the objects are to be uploaded. The names of uploaded files are appended to the specified SWIFT_URL. So, you can upload directly to the root of a container with a URL like:
Optionally, you can include an object prefix to separate uploads, such as:
method=”POST”
Must be POST.
enctype=”multipart/form-data”
Must be multipart/form-data.
name=”redirect” value=”REDIRECT_URL”
Redirects the browser to the REDIRECT_URL after the upload completes. The URL has status and message query parameters added to it, which specify the HTTP status code for the upload and an optional error message. The 2nn status code indicates success.
The REDIRECT_URL can be an empty string. If so, the Location response header is not set.
name=”max_file_size” value=”BYTES”
Required. Indicates the size, in bytes, of the maximum single file upload.
name=”max_file_count” value= “COUNT”
Required. Indicates the maximum number of files that can be uploaded with the form.
name=”expires” value=”UNIX_TIMESTAMP”
The UNIX timestamp of the time before which the form must be submitted; after that time, the form is no longer valid.
name=”signature” value=”HMAC”
The HMAC-SHA1 signature of the form.
type=”file” name=”FILE_NAME”
File name of the file to be uploaded. You can include from one up to the max_file_count value of files.
The file attributes must appear after the other attributes to be processed correctly.
If attributes appear after the file attributes, they are not sent with the sub-request because all attributes in the file cannot be parsed on the server side unless the whole file is read into memory; the server does not have enough memory to service these requests. Attributes that follow the file attributes are ignored.>" />
type=”submit”

Must be submit.
HMAC-SHA1 signature for form POST¶
Form POST middleware uses an HMAC-SHA1 cryptographic signature. This signature includes these elements from the form:
The path, starting with /v1/ and including a container name and, optionally, an object prefix. In Example 1.15, “HMAC-SHA1 signature for form POST”, the path is /v1/my_account/container/object_prefix. Do not URL-encode the path at this stage.
A redirect URL. If there is no redirect URL, use the empty string.
Maximum file size. In Example 1.15, “HMAC-SHA1 signature for form POST”, the max_file_size is 104857600 bytes.
The maximum number of objects to upload. In Example 1.15, “HMAC-SHA1 signature for form POST”, max_file_count is 10.
Expiry time. In Example 1.15, “HMAC-SHA1 signature for form POST”, the expiry time is set to 600 seconds into the future.
The secret key. Set as the X-Account-Meta-Temp-URL-Key header value for accounts, or the X-Container-Meta-Temp-URL-Key header value for containers. See Secret Keys for more information.
The following example code generates a signature for use with form POST:
Example 1.15. HMAC-SHA1 signature for form POST
import hmac
from hashlib import sha1
from time import time

path = '/v1/my_account/container/object_prefix'
redirect = ''
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'MYKEY'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
                                    max_file_size, max_file_count,
                                    expires)
signature = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
For more information, see RFC 2104: HMAC: Keyed-Hashing for Message Authentication.
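The same computation can be packaged as a small reusable helper (the function name is mine, not part of the Swift API; the element order in the HMAC body is the one listed above):

```python
import hmac
from hashlib import sha1

def form_post_signature(path, redirect, max_file_size,
                        max_file_count, expires, key):
    # The HMAC body is the five form elements, newline-joined, in this
    # exact order; the secret key itself never appears in the form.
    body = '\n'.join(str(x) for x in (
        path, redirect, max_file_size, max_file_count, expires))
    return hmac.new(key.encode(), body.encode(), sha1).hexdigest()

sig = form_post_signature('/v1/my_account/container/object_prefix',
                          '', 104857600, 10, 1390825338, 'MYKEY')
print(sig)  # a 40-character hex HMAC-SHA1 digest
```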
Form POST example¶
The following example shows how to submit a form by using a cURL
command. In this example, the object prefix is
photos/ and the file
being uploaded is called
flower.jpg.
This example uses the swift-form-signature script to compute the
expires and
signature values.
$ bin/swift-form-signature /v1/my_account/container/photos/ 5373952000 1 200 MYKEY
Expires: 1390825338
Signature: 35129416ebda2f1a21b3c2b8939850dfc63d8f43
$ curl -i -X POST \
    -F max_file_size=5373952000 -F max_file_count=1 -F expires=1390825338 \
    -F signature=35129416ebda2f1a21b3c2b8939850dfc63d8f43 \
    -F redirect= \
    -F file=@flower.jpg
Introduction to Python Programming for the Absolute Beginner
Recently I was talking to one of my colleagues at the office about Python programming for the absolute beginner. He is a C and Java developer, and he was telling me how Java has captured the world and is used everywhere. Since I love Python programming to the core, though I am not a Python freak, I told him that Python is not far behind. But I was ignored, for the fact that Java is more common these days, so I was not satisfied with the debate. I have been developing programs in Python for a long time, and the one thing I know for sure is that the basics of Python programming are many times easier than Java. People coding in Java would obviously start an argument about portability and the like, but I tend to ignore them. So, the point I will talk about today is how the Python programming language is more useful and handy, and especially how useful it is for getting started as a beginner. Welcome to the guide to Python Programming for the Absolute Beginner.
Why you should Start Python Programming for the Absolute Beginner
So first things first: why start with Python programming as an absolute beginner? Is the Python programming language a good start for beginners? You may have heard of people starting with C as the basics. C is good; in fact, it's better than Python, but only if you have a good background in programming or at least know the basics already. But as an absolute beginner? I don't think so. I myself started with C, and lasted about 15 days of learning, and when I say 15 days, I mean 1 day = 16 hours of practice.
So, when I started learning C, the part where I got stuck was pointers. It was too difficult to understand at that point in time. I came from a hardware background, and trust me, it was not easy. Since I had no one to guide me, I started learning Java, which was another blunder from my end; I soon realized that Java is even tougher than C. Confused and irritated, I started looking at online communities for help, like Stack Overflow, GitHub, and the Ubuntu forums, and the one thing I realized was that I had no proper guide. The reason I started with C and then Java was that a few random people who had learned them told me Java and C are more widely used and among the easiest programming languages. I wish I could meet those people to show them exactly what that advice meant to a beginner. Nevertheless, I received enormous help from these communities and discovered that there was a language called Python, which I had actually never heard of. I googled for 6-7 days and then realized this was what I had been looking for the whole time. Python has a diverse and large community, and it is fully open source, which means anyone can do whatever they want with it. Besides, what I learned in C in those 15 days, I covered in the introduction to Python basics in just 3-4 days, and it was so interesting that I didn't notice how much ground I had covered until the 10th day of learning. Enough with the numbers talk; let me give you a practical example.
Python Programming Beginners Code – “Hello World” Example
Following is an example to print “Hello World” in C, C++, and Java:
C Programming:
#include <stdio.h>

int main(void)
{
    printf("Hello World");
    return 0;
}
C++ Programming:
#include <iostream>

int main()
{
    std::cout << "Hello World";
    return 0;
}
Java Programming:
public class HelloWorld {
public static void main(String[] args) {
System.out.println("Hello World!");
}
}
Python Programming Language:
print("Hello World")
Yes, I have written code here to print "Hello World" in each language, but as you can see, the syntax in C, C++, and Java needs explanation, whereas the one in Python does not. Even a person without any basic knowledge of computers can understand that it prints Hello World. That's how easy Python is. If you don't understand any of the code above right now, don't worry; I will explain it later in the blog.
Basics of Python Programming
So before we proceed on to the higher stuff, let’s get through the basics of Python Programming for the Absolute Beginner. The first thing you need to understand as a beginner and the mindset you need to start for python programming training is that you will never have the proper answer for whatever you code. Once you have the necessary knowledge, you may feel the need to write some of your own automated programs, and you may also succeed. But most of the times, you may feel that the programs written are not that fast, or at least slower than what you write in some other languages. For example, a program written in C will be 10 times faster than python programming. But that doesn’t mean python programming language is slow. It just means you need to find a way to make it faster. Obviously, it will never be as fast as C, but it will obviously be less time consuming than writing and compiling a program in C or Java. Besides, its easier to debug someone else’s program in Python since the syntaxes are easier to understand than to read a program without comments in C or Java.
Also, remember never to compare two distinct languages when learning. You may say that I am being ironic, since I myself compared C with Python, but I did that to show that Python is a good choice for the absolute beginner. C has its own set of advantages and drawbacks, and so does Python. But for an absolute beginner, Python is the choice. So, that's it for the beginner stuff; let's check some real-life applications of Python.
Where is Basics of Python Programming language used?
Image source: pixabay.com
For an official answer, I would suggest you take a look at the official Python website.
To explain it in simple words, the Python programming language is used almost everywhere. Google itself used Python in its database management for a long time before switching to C. Python is used a lot for automating web-related apps, and especially in the fields of maths, science, and robotics. If you have an exclusive interest in Python, then once you learn it, I would recommend you take a look at the Raspberry Pi and the Arduino: extremely capable boards that work hand-in-hand with Python.
Besides, there is hardly a beat to Python's Django framework for web development. Once you get the basics of Python down, you can take a look at Django, a framework built on Python for developing web apps. Some famous sites built on Django are social networking sites like Instagram and Pinterest, community sites like Mozilla Support and Reddit Gifts, and gaming sites like Chess.com, which has a legacy in the game of chess. Along with web development, Python is also used in embedded scripting, 3D software like Maya, Quantum GIS, and a lot more. So, although Python is not known for being used in high-end performance applications, it is still the people's choice for a lot of other work. Now that we know what Python is capable of, let's take a look at how to get started with it.
Python Programming for the Absolute Beginner – Pre-requisites
Frankly speaking, there are no specific prerequisites for an introduction to Python programming. Python is such an easy language that it is mostly people's primary or beginner choice of learning, which makes it the prerequisite for learning anything else.
But this makes basic Python training a bit tricky in one respect. If you keep your mind straight when learning, which mostly doesn't happen, then there is no issue in learning Python as a primary language. What I mean by keeping the mind straight is that when people learn Python first, they get used to its simplicity: Python developers can write enormous amounts of code in a decently simplified manner. But if you want to go beyond Python and learn other languages after that, such as Java, C, Scala, or Haskell, this can become an obstacle. The primary reason is that once people become habituated to Python, the coding and complexity of the likes of C and Java tend to get rather irritating; especially the pointers and the vast libraries will go over a beginner's head.
So, my point being, if you plan to further your career in just languages like Python, Ruby or Perl then there is no problem in pursuing certification in Python Programming as an absolute beginner. But if you want to be a multi-discipline ninja in programming, then I would recommend you to learn C or Java first, and then learn python because that way, you would learn C and Java the hard way, and since you know how to code, learning Python Programming language will just be a piece of Cake.
Python vs The World
So, now we are wise and know how python and C works. We already saw how python can be used in the outer real world. But let’s take a look at how it can affect our daily lives as well. Python is extremely useful when it comes to automation. For example, let’s say you have a lot of social accounts, and you have a lot of passwords. Since we programmers are paranoid, we tend to keep different passwords for every other Website account we have. But who has the time to remember all this stuff? As a matter of fact, even if we try to study it, once we change a password, we would still have to remember which ones we changed and such similar stuff. So, normally people write it down on a piece of paper, or at least tech freak people like me use a notepad application on cell and laptop. But still, if someone gets their hand on them, it still is risky. Now there are other methods, where you can buy a password saver, but is it really worth it? Yes, and here is where python comes in between. I had this similar issue, and I actually stumbled on to something when I was learning dictionary and conditions in python.
Since it would be hard for you to understand if you are a beginner for Certification in python Programming, I would be writing pseudo codes here to make it understandable for you. Now, python has dictionaries and conditions. Conditions use if, if and else to complete a statement. It goes something like this:
if I don’t go to work;
make me a coffee.
else
don’t;
Now you get my point? Yup, that was pseudocode, which means I could also do something like this:
if password=iamsmart
proceed to step two
else
print “invalid password”
Now, as soon as I start my notepad file made in Python, it asks me to enter a password; if I enter it, it allows access, else it won't. And I combined this with dictionaries. Dictionaries are something which can be used to call out other stuff stored inside. A good example would be a virtual telephone book. For example, in a pseudo world, it would go like this:
Book = { Adam : { phone :1234, address: RoomNo.1},
Eve : { phone : 5678, address: RoomNo.2},
Smith : { phone : 9012, address: RoomNo.3} }
Now this, my friend, is called a pseudo dictionary. Book is the dictionary, and it has contents in the form of Adam, Eve, and Smith. When I call the dictionary, it asks me to enter a name; when I enter a name, it gives me its contents, i.e., the phone number and the address. This is how a dictionary works in Python. So, what I did was change the names to websites and the phone numbers to passwords. And not only did I do that, but I also added the if-else condition to it, after which it looked like this:
if password=iamsmart
proceed to step two
else
print “invalid password”
LockBox = { Gmail : { Password@1 },
Facebook : { Password@2 },
Instagram : { Password@3 } }
I also did a bit more modifications, but as far as you are concerned, as an absolute beginner in python programming, this is extremely good. Now, whenever I wanted to log in from someone else’s computer or cell phone or my workplace, I would save this small python file converted into an executable with the help of py2exe on my pen drive. So, it first asks for a password, if it is right it then asks which password do you want to check, else it kicks me out.
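The combined pseudocode above maps almost directly onto real Python. Here is a minimal runnable sketch; the master password, site names, and stored passwords are illustrative placeholders (a real tool would never store passwords in plain text):

```python
# Minimal sketch of the password "lockbox" described above.
# All values here are illustrative placeholders.
lockbox = {
    "Gmail": "Password@1",
    "Facebook": "Password@2",
    "Instagram": "Password@3",
}

def get_password(master, site):
    """Return the stored password only if the master password matches."""
    if master != "iamsmart":
        return "invalid password"
    return lockbox.get(site, "unknown site")

print(get_password("iamsmart", "Gmail"))   # Password@1
print(get_password("guess", "Gmail"))      # invalid password
```

The if-else gate and the dictionary lookup are exactly the two pieces described above, just combined into one function.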
Later on, I added some PGP encryption to this stuff to make it more secure. But now you know what it means to learn Python. It is not only a simple language to learn but also a useful one. If you want to become an extreme programmer, I would recommend learning C or Java and then Python; else, if you are just a beginner like I was and want to do some cool automation as a hobby, then there is nothing better than Python programming for the absolute beginner.
First Image source: pixabay.com
Recommended Articles
This has been a basic guide to Python Programming for the Absolute Beginner. Here we discuss why you should learn python programming, its beginner’s codes along with “hello world” example. You may also have a look at the following courses to learn Python programming –
inet(7F) inet(7F)
NAME
inet - Internet protocol family
SYNOPSIS
#include <sys/types.h>
#include <netinet/in.h>
DESCRIPTION
The internet protocol family is a collection of protocols layered on
top of the Internet Protocol (IP) network layer, which utilizes the
internet address format. The internet family supports the SOCK_STREAM
and SOCK_DGRAM socket types.
Addressing
Internet addresses are four byte entities. The include file
<netinet/in.h> defines this address as the structure struct in_addr.
Sockets bound to the internet protocol family utilize an addressing
structure called struct sockaddr_in. Pointers to this structure can
be used in system calls wherever they ask for a pointer to a struct
sockaddr.
There are three fields of interest within this structure. The first
is sin_family, which must be set to AF_INET. The next is sin_port,
which specifies the port number to be used on the desired host. The
third is sin_addr, which is of type struct in_addr, and specifies the
address of the desired host.
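In Python terms (a later-era sketch, not part of this manual page), the sin_family/sin_port/sin_addr triple corresponds to an AF_INET socket plus a (host, port) address tuple:

```python
import socket

# AF_INET selects the internet protocol family described above;
# SOCK_STREAM is carried by TCP and SOCK_DGRAM by UDP.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(('127.0.0.1', 0))      # port 0: let the system pick a free port
host, port = s.getsockname()  # the bound address and port
print(host)                   # 127.0.0.1
s.close()
```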
Protocols
The internet protocol family is comprised of the IP network protocol,
Internet Control Message Protocol (ICMP), Transmission Control
Protocol (TCP), and User Datagram Protocol (UDP). TCP is used to
support the SOCK_STREAM socket type while UDP is used to support the
SOCK_DGRAM socket type.
AUTHOR
inet was developed by the University of California, Berkeley.
SEE ALSO
tcp(7P), udp(7P).
Hewlett-Packard Company - 1 - HP-UX Release 11i: November 2000
DZone Snippets is a public source code repository. Easily build up your personal collection of code snippets, categorize them with tags / keywords, and share them with the world
Reduce Fractions Function Python
A function that reduces/simplifies fractions using the Euclidean Algorithm, in Python.
def reducefract(n, d):
    '''Reduces fractions. n is the numerator and d the denominator.'''
    def gcd(n, d):
        while d != 0:
            t = d
            d = n % d
            n = t
        return n
    assert d != 0, "integer division by zero"
    assert isinstance(d, int), "must be int"
    assert isinstance(n, int), "must be int"
    greatest = gcd(n, d)
    n //= greatest   # integer division keeps the results ints in Python 3
    d //= greatest
    return n, d
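As a point of comparison (not part of the original snippet), the same reduction can lean on the standard library's math.gcd; this sketch assumes Python 3:

```python
from math import gcd

def reduce_fraction(n, d):
    """Reduce n/d to lowest terms using the standard-library gcd."""
    if d == 0:
        raise ZeroDivisionError("integer division by zero")
    g = gcd(n, d)
    return n // g, d // g

print(reduce_fraction(6, 8))    # (3, 4)
print(reduce_fraction(45, 10))  # (9, 2)
```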
Python has excellent built-in file support that works on all platforms supported by Python.
Most of the key file-manipulation functions live in the os module and an associated module called os.path. To provide a degree of platform independence, os loads in the right module for your platform. os provides basic file-handling functions, and os.path handles operations on paths and filenames. On Windows, these modules are called nt and ntpath respectively, although they should always be referred to as os and os.path. The functions in the os module generally accept the same arguments as their corresponding MS-DOS commands. Table 17-1 depicts the os module's file and directory functions.
Here are some quick examples:
>>> import os
>>> os.getcwd()
'C:\\Program Files\\Python'
>>> os.chdir('C:\\temp')
>>> os.mkdir('subdirectory1')
>>> os.rmdir('subdirectory1')
>>>
What's with the \\? This is Python's literal string syntax. Python lets you directly enter special characters at the interactive prompt or in strings embedded in your code. For example, \n means a newline, \t is a tab, and \123 is the octal number 123. If you just want a plain old slash, you have to type \\. The only place where this feels slightly weird is in manipulating filenames. Remember to double all your slashes. An alternative is to use a forward slash (like c:/temp); but Python always gives backslashes when you ask for directory lists on Windows:
>>> mydir = 'c:\\data\\project\\oreilly\\text'
>>> os.path.exists(mydir)
1
>>> os.path.isdir(mydir)
1
>>> os.path.isfile(mydir) #hope not
0
>>> os.listdir(mydir)
['ChapterXX.doc', '00index.txt', ]
>>> import glob
>>> glob.glob(mydir + '\\' + '*files*.doc')
['c:\\data\\project\\oreilly\\text\\Chapter_-_Processes_and_Files1.doc', 'c:\\
data\\project\\oreilly\\text\\files.doc', 'c:\\data\\project\\oreilly\\text\\
Chapter_-_PythonFiles.doc']
>>>
Note that if you don't want full paths from glob, chdir into the directory first.
The os.path module provides platform-independent routines for chopping up and putting together filenames. os.path.split (path) separates a full path into the directory and filename components; os.path.splitext (filename) separates the filename (and path, if present) from the extension.
As discussed, DOS and Windows use a backslash to separate directories. We shouldn't have used the line glob.glob(mydir + '\\' + '*files*.doc') in the previous example; use the variable os.sep instead. On a Unix platform, this is a forward slash:
>>> os.path.split('c:\\windows\\system\\gdi.exe')
('c:\\windows\\system', 'gdi.exe')
>>> os.path.splitext('gdi.exe')
('gdi', '.exe')
>>> os.path.splitext('c:\\windows\\system\\gdi.exe')
('c:\\windows\\system\\gdi', '.exe')
>>> (root, ext) = os.path.splitext('c:\\mydata\\myfile.txt')
>>> newname = root + '.out'
>>> newname
'c:\\mydata\\myfile.out'
>>>
The function tempfile.mktemp() returns a filename suitable for temporary use; this function is available on every platform, but it's smart enough to know where your \temp directory is on Windows:
>>> import tempfile
>>> tempfile.mktemp()
'C:\\WINDOWS\\TEMP\\~-304621-1'
>>>
When the file is closed, it's automatically deleted, assisting in the housekeeping that often goes with working with temporary files.
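Strictly, mktemp() only hands back a name; the automatic deletion comes from the temporary file objects in the same module. A sketch using tempfile.TemporaryFile (a later, safer API than mktemp):

```python
import tempfile

# TemporaryFile creates an anonymous scratch file that is
# deleted automatically when the file object is closed.
with tempfile.TemporaryFile(mode='w+') as tmp:
    tmp.write('scratch data')
    tmp.seek(0)          # rewind before reading back
    data = tmp.read()
print(data)              # scratch data
```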
The function os.stat(filename) returns information about files or directories without opening them. It returns a tuple of ten items. With a tuple of this size, it can be hard to recall what each element is for, so the standard Python module stat contains a number of constants and functions to assist in working with these entries. Table 17-2 lists the entries returned by os.stat().
Some of these aren't used on Windows, but contain useful information when used on other operating systems. Also, note that all dates are returned as integers compatible with the Python time module. Depending on the format of the disk holding the file, some of these time values may not be available.
Let's see an example of using the stat() function:
>>> os.stat('c:\\autoexec.bat')
(33279, 0, 2, 1, 0, 0, 640, 916444800, 915484932, 915484930)
>>>
Here's a function to decode it:
import os, stat, time

def getfileinfo(filename):
    stats = os.stat(filename)
    size = stats[stat.ST_SIZE]
    print 'File size is %d bytes' % size
    accessed = stats[stat.ST_ATIME]
    modified = stats[stat.ST_MTIME]
    print 'Last accessed: ' + time.ctime(accessed)
    print 'Last modified: ' + time.ctime(modified)
And the Output
>>> decode_stat.getfileinfo('c:\\autoexec.bat')
File size is 640 bytes
Last accessed: Sat Jan 16 00:00:00 1999
Last modified: Mon Jan 04 21:22:12 1999
>>>
Unfortunately, there's no portable Python module for working with file permissions. Modules exist for working with permissions on various operating systems, including Windows and Unix, but the differences between the various schemes make a simple and unified model difficult. Windows NT permissions are themselves complex and beyond the scope of this book; indeed, it would require a book of this size to cover them in detail. There is a brief example of working with permissions in Chapter 16, Windows NT Administration.
Often you need to move through a directory tree looking at all the subdirectories or files in turn. The Python library provides a powerful generic routine to do this: os.path.walk().
The general idea is that you specify a directory, and os.path.walk() calls a function (that you write) once for each subdirectory of the main directory. Each time your function is called, it's passed a list of all filenames in that directory. Thus, your function can examine every file in every directory under the starting point you specify.
The function you write to perform the desired operation on the file is of the form myfunc ( arg, dirname, filenames). The first argument can be anything you want; we will see examples later. The second argument contains the name of the current directory being examined, starting with the directory you specify in the argument to os.path.walk(); the third is the list of filenames in the directory.
Once you have written the function, call os.path.walk() with three parameters: the name of the directory in which to begin the walking, your callback function, and any third parameter you choose. This third parameter is passed unchanged in your callback function's first parameter, as described previously.
This first example lists the directories examined and how many files are present in each. This makes the callback function simple: you print the dirname parameter,
and the length of the filenames parameter. Then call os.path.walk(), passing a directory from the Python installation and the simple function as the callback:
>>> def walker1(arg, dirname, filenames):
...     # List directories and numbers of files
...     print dirname, 'contains', len(filenames), 'files'
...
>>> os.path.walk('c:\\program files\\python\\win32', walker1, None)
c:\program files\python\win32 contains 24 files
c:\program files\python\win32\lib contains 39 files
c:\program files\python\win32\Help contains 3 files
c:\program files\python\win32\demos contains 19 files
c:\program files\python\win32\demos\service contains 8 files
c:\program files\python\win32\demos\service\install contains 3 files
>>>
That was easy! Note that you don't need the extra argument and so use the value None. Now let's try something a bit more practical and write a program to scan for recent changes. This is useful for archiving or for trying to figure out which new application just ate your registry. The callback function becomes slightly more complex as you loop over the list of files. The example then checks the Windows system directory for all files changed in the last 30 days:
>>> import time
>>> def walker2(arg, dirname, filenames):
...     "Lists files modified in last ARG days"
...     cutoff = time.time() - (arg * 24 * 60 * 60)
...     for filename in filenames:
...         stats = os.stat(dirname + os.sep + filename)
...         modified = stats[8]
...         if modified >= cutoff:
...             print dirname + os.sep + filename
...
>>> os.path.walk('c:\\windows\\system', walker2, 30)
c:\windows\system\FFASTLOG.TXT
c:\windows\system\MSISYS.VXD
c:\windows\system\HwInfoD.vxd
c:\windows\system\ws552689.ocx
>>>
So far you haven't returned anything; indeed, if walker2 returned a value, you'd have no easy way to grab it. This is another common use for the "extra argument". Let's imagine you want to total the size of all files in a directory. It's tempting to try this:
def walker3(arg, dirname, filenames):
    "Adds up total size of all files"
    for filename in filenames:
        stats = os.stat(dirname + os.sep + filename)
        size = stats[6]
        arg = arg + size

def compute_size(rootdir):
    "uses walker3 to compute the size"
    total = 0
    os.path.walk(rootdir, walker3, total)
    return total
Here, a walker function does the work, and a controlling function sets up the arguments and returns the results. This is a common pattern when dealing with recursive functions.
Unfortunately this returns zero. You can't modify a simple numeric argument in this way, since arg within the function walker3() is a local variable. However, if arg was an object, you could modify its properties. One of the simplest answers is to use a list; it's passed around, and the walker function is free to modify its contents. Let's rewrite the function to generate a list of sizes:
# these two work
def walker4(arg, dirname, filenames):
    "Adds up total size of all files"
    for filename in filenames:
        stats = os.stat(dirname + os.sep + filename)
        size = stats[6]
        arg.append(size)

def compute_size(rootdir):
    "uses walker4 to compute the size"
    sizes = []
    os.path.walk(rootdir, walker4, sizes)
    # now add them up
    total = 0
    for size in sizes:
        total = total + size
    return total
When run, this code behaves as desired:
>>> compute_size('c:\\program files\\python')
26386305
>>> # well, I do have a lot of extensions installed
There are numerous uses for this function, and it can save a lot of lines of code.
Some possibilities include:
Archiving all files older than a certain date
Building a list of filenames meeting certain criteria for further processing
Synchronizing two file trees efficiently across a network, copying only the changes
Keeping an eye on users' storage requirements
We've started to see what makes Python so powerful for manipulating
filesystems. It's not just the walk function: that could have
been done in many languages. The
key point is how walk interacts with Python's higher-level data structures, such as lists, to make these examples simple and straightforward.
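For reference, later Python versions replaced the callback-based os.path.walk with os.walk, a generator yielding (dirpath, dirnames, filenames) tuples. The directory-size example can be sketched in that style (the example root path is illustrative):

```python
import os

def total_size(rootdir):
    """Total size in bytes of all files under rootdir, using os.walk."""
    total = 0
    for dirpath, dirnames, filenames in os.walk(rootdir):
        for filename in filenames:
            total += os.path.getsize(os.path.join(dirpath, filename))
    return total

# e.g. total_size('c:\\program files\\python')
```

The generator form removes the need for the extra-argument trick entirely: state lives in an ordinary local variable.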
Now we've had a good look at moving files around; it's time to look inside them.
Python has a built-in file object, which is available on all Python platforms. Any Python program you hope to run on platforms other than Windows should use the standard file objects. Once you have a Python file object, you can use the methods to read data from the file, write data to the file, and perform various other operations.
The function open(filename, mode='r') returns a file object. If mode is omitted, the file is opened read-only. Mode is a string, and can be 'r' for reading, 'w' for writing, or 'a' for appending. Add the letter 'b' for binary (as discussed in Chapter 3, Python on Windows), and 'w+' opens it for updating. See the Python Library Reference (included in HTML format in the standard Python distribution) for further details.
Table 17-3 shows the most important methods for file objects. C programmers will note the similarity to the STDIO routines; this should be no surprise, as they are implemented using the C STDIO routines of the same names.
Every language has functions, such as read and write, and many have readline. Python's ability to handle lists and strings is what really makes file processing a joy. Let's run through a few common idioms.
Here readlines loads the entire file into a list of strings in memory:
>>> f = open('c:\\config.sys', 'r')
>>> lines = f.readlines()
>>> f.close()
>>> from pprint import pprint
>>> pprint(lines[0:3])
['DEVICEHIGH = A:\\CDROM\\CDROM.SYS /D:CD001\012',
'device=C:\\WINDOWS\\COMMAND\\display.sys con=(ega,,1)\012',
'Country=044,850,C:\\WINDOWS\\COMMAND\\country.sys\012']
>>>
The pprint function (short for pretty-print) lets you display large data structures on several lines. Note also that each line still ends in a newline character (octal 012, decimal 10). Because the file is opened in text mode (by omitting the binary specification), you see a single newline character terminating each line, even if the actual file is terminated with carriage-return/linefeed pairs.
You can follow this with a call to string.split() to parse each line. Here's a generic function to parse tab-delimited data:
import string

def read_tab_delimited_file(filename):
    "returns a list of tuples"
    # we can compress the file opening down to a one-liner -
    # the file will be closed automatically
    lines = open(filename).readlines()
    table = []
    for line in lines:
        # chop off the final newline
        line = line[:-1]
        # split up the row on tab characters
        row = string.split(line, '\t')
        table.append(row)
    return table
And here's what it can do:
>>> data = read_tab_delimited_file('c:\\temp\\sales.txt')
>>> pprint(data)
[['Sales', '1996', '1997', '1998'],
 ['North', '100', '115', '122'],
 ['South', '176', '154', '180'],
 ['East', '130', '150', '190']]
>>>
Note once again how useful pprint is! This is another of Python's key strengths: you can work at the interactive prompt, looking at your raw data, which helps you to get your code right early in the development process.
The previous example is suitable only for files that definitely fit into memory. If they might get bigger, you should loop a line at a time. Here is the common idiom for doing this:
f = open(filename, 'r')
s = f.readline()
while s != '':
    # do something with string 's'
    s = f.readline()
f.close()
A number of people have complained about having to type readline() twice, while Perl has a one-line construction to loop over files. The standard library now includes a module called fileinput to save you this minimal amount of extra typing. The module lets you do the following:
import fileinput
for line in fileinput.input([filename]):
process(line)
If no filename is provided, the module loops over standard input, which is useful in script processing. Pass the filename parameter as a single-item list; fileinput iterates automatically over any number of files simply by including more items in this parameter. fileinput also lets you access the name of the file and the current line number, and provides a mechanism to modify files in place (with a backup) in case something goes wrong.
The read() command loads an entire file into memory if you don't specify a size. You often see the one liner:
>>> mystring = open('c:\\temp\\sales.txt').read()
>>>
This code uses the fact that file objects are closed just before they get garbage-collected. You didn't assign the file object to a variable, so Python closes it and deletes the object (but not the file!) after the line executes. You can slurp an entire file into a string in one line.
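The same slurp is often written today with pathlib, an API added well after this text; a small sketch (the file name and contents are made up):

```python
from pathlib import Path
import tempfile

# Write a small file, then slurp it back in one expression.
tmp = Path(tempfile.mkdtemp()) / 'sales.txt'
tmp.write_text('North\t100\nSouth\t176\n')
mystring = tmp.read_text()
print(mystring.splitlines()[0])
```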
Python strings are eight-bit safe and are the easiest means to manipulate binary data. In addition to this, the struct module lets you create C-compatible structures and convert them to and from strings; and the array module efficiently handles arrays of data, which it can convert to and from strings and files.
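A small illustrative sketch of those two modules (the format string and values are just examples):

```python
import struct
from array import array

# struct: pack two little-endian 16-bit ints and a 4-byte string, C-style.
packed = struct.pack('<hh4s', 1, 2, b'spam')
fields = struct.unpack('<hh4s', packed)
print(fields)        # (1, 2, b'spam')

# array: a compact homogeneous array convertible to and from raw bytes.
a = array('h', [10, 20, 30])
raw = a.tobytes()
b = array('h')
b.frombytes(raw)
print(b.tolist())    # [10, 20, 30]
```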
More information on working with files and the other various Python modules we discussed here can be found in either of these fine O'Reilly Python books we've mentioned before: Programming Python and Learning Python.
There are times when the standard Python file objects can't meet your requirements, and you need to use the Windows API to manipulate files. This can happen in a number of situations, such as:
You need to read or write data to or from a Windows pipe.
You need to set custom Windows security on a file you are creating.
You need to perform advanced techniques for performance reasons, such as "Overlapped" operations or using completion ports.
The win32file.CreateFile() function opens or creates standard files, returning a handle to the file. Standard files come in many flavors, including synchronous files (where read or write operations don't return until the operation has completed); asynchronous (or overlapped I/O) files, where read and write operations return immediately; and temporary files that are automatically deleted when the handle is closed. Files may also be opened requesting that Windows not cache any file operations, that no buffering is performed, etc. All the variations that
CreateFile() can use are too numerous to list here. For full details, please see the Windows API documentation for CreateFile().
The CreateFile() function takes the following parameters:
Name of the file
Integer indicating the type of access requested on the file
Integer-sharing options for the file
Security attributes for the new file or None
A flag indicating what action to take depending on whether the file exists
A set of flags and attributes for the file itself
Another file to act as a template or None
This function returns a PyHANDLE object. PyHANDLEs are simply objects that wrap standard Win32 HANDLEs. When a PyHANDLE object goes out of scope, it's automatically closed; thus, it's generally not necessary to close these HANDLEs as it is necessary when using these from C or C++.
Let's see how these parameters interact and test out some of the documented semantics. Here's a small script that uses the win32file module to work with Win32 file handles. The code creates a file, then checks that other attempts to open the file either succeed or fail, based on the flags passed to CreateFile(). You will also find that auto-delete files behave as expected; i.e., after the last handle is closed, the file no longer exists on disk:
# CheckFileSemantics.py
# Demonstrate the semantics of CreateFile.
# To keep the source code small,
# we import all win32file objects.
from win32file import *
import win32api
import os
# First, let's create a normal file
h1 = CreateFile( \
"\\file1.tst", # The file name \
GENERIC_WRITE, # we want write access. \
FILE_SHARE_READ, # others can open for read \
None, # No special security requirements \
CREATE_ALWAYS, # File to be created. \
FILE_ATTRIBUTE_NORMAL, # Normal attributes \
None ) # No template file.
# now we will print the handle,
# just to prove we have one!
print "The first handle is", h1
# Now attempt to open the file again,
# this time for read access
h2 = CreateFile( \
"\\file1.tst", # The same file name. \
GENERIC_READ, # read access \
FILE_SHARE_WRITE | FILE_SHARE_READ, \
None, # No special security requirements \
OPEN_EXISTING, # expect the file to exist. \
0, # Not creating, so attributes don't matter. \
None ) # No template file
# Prove we have another handle
print "The second handle is", h2
# Now attempt yet again, but for write access.
# We expect this to fail.
try:
h3 = CreateFile( \
"\\file1.tst", # The same file name. \
GENERIC_WRITE, # write access \
0, # No special sharing \
None, # No special security requirements \
CREATE_ALWAYS, # attempting to recreate it! \
0, # Not creating file, so no attributes \
None ) # No template file
except win32api.error, (code, function, message):
print "The file could not be opened for write mode."
print "Error", code, "with message", message
# Close the handles.
h1.Close()
h2.Close()
# Now let's check out the FILE_FLAG_DELETE_ON_CLOSE
fileAttributes = FILE_ATTRIBUTE_NORMAL | \
FILE_FLAG_DELETE_ON_CLOSE
h1 = CreateFile( \
"\\file1.tst", # The file name \
GENERIC_WRITE, # we want write access. \
FILE_SHARE_READ, # others can open for read \
None, # no special security requirements \
CREATE_ALWAYS, # file to be created. \
fileAttributes, \
None ) # No template file.
# Do a stat of the file to ensure it exists.
print "File stats are", os.stat("\\file1.tst")
# Close the handle
h1.Close()
try:
os.stat("\\file1.tst")
except os.error:
print "Could not stat the file - file does not exist"
When you run this script, you see the following output:
The first handle is <PyHANDLE at 8344464 (80)>
The second handle is <PyHANDLE at 8344400 (112)>
The file could not be opened for write mode.
Error 32 with message The process cannot access the file because
it is being used by another process.
File stats are (33206, 0, 11, 1, 0, 0, 0, 916111892, 916111892, 916111892)
Could not stat the file - file does not exist
Thus, the semantics are what you'd expect:
A file opened with read sharing allowed can indeed be opened again for read access.
A file opened without write sharing can't be opened again for write access.
A file opened for automatic delete is indeed deleted when the handle is closed.
The win32file module has functions for reading and writing files. Not surprisingly, win32file.ReadFile() reads files, and win32file.WriteFile() writes files.
win32file.ReadFile() takes the following parameters:
The file handle to read from
The size of the data to read (see the reference for further details)
Optionally, an OVERLAPPED or None
win32file.ReadFile() returns two pieces of information in a Python tuple: the error code for ReadFile and the data itself. The error code is either zero or the value winerror.ERROR_IO_PENDING if overlapped I/O is being performed. All other error codes are trapped and raise a Python exception.
win32file.WriteFile() takes the following parameters:
A file handle opened to allow writing
The data to write
Optionally, an OVERLAPPED or None
win32file.WriteFile() returns the error code from the operation. This is either zero or winerror.ERROR_IO_PENDING if overlapped I/O is used. All other error codes are converted to a Python exception.
Windows provides a number of techniques for high-performance file I/O. The most common is overlapped I/O. Using overlapped I/O, the win32file.ReadFile() and win32file.WriteFile() operations are asynchronous and return before the actual I/O operation has completed. When the I/O operation finally completes, a Windows event is signaled.
Overlapped I/O does have some requirements normal I/O operations don't:
The operating system doesn't automatically advance the file pointer. When not using overlapped I/O, a ReadFile or WriteFile operation automatically advances the file pointer, so the next operation automatically reads the subsequent data in the file. When using overlapped I/O, you must manage the location in the file manually.
The standard technique of returning a Python string object from win32file.ReadFile() doesn't work. Because the I/O operation has not completed when the call returns, a Python string can't be used.
As you can imagine, the code for performing overlapped I/O is more complex than when performing synchronous I/O. Chapter 18, Windows NT Services, contains some sample code that uses basic overlapped I/O on a Windows-named pipe.
Pipes are a concept available in most modern operating systems. A pipe is a block of shared memory set up much like a file: typically, one process writes information to the pipe, and another process reads it. Pipes are often used as a form of interprocess communication or as a simple queue implementation. Windows has two flavors of pipes: anonymous pipes and named pipes. Python supports both via the win32pipe module.
Anonymous pipes have no name and are typically used for communication between related processes, such as a parent process and its child. The win32pipe.CreatePipe() function creates an anonymous pipe and returns two handles: one for reading from the pipe, and one for writing to the pipe.
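For comparison, Python's portable standard library exposes the same anonymous-pipe idea through os.pipe(); a minimal sketch:

```python
import os

# os.pipe() returns two file descriptors:
# one for the read end and one for the write end.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello through the pipe")
os.close(write_fd)  # closing the write end signals end-of-data

data = os.read(read_fd, 1024)
os.close(read_fd)
```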
Named pipes are similar to anonymous pipes, except they have a unique name. Typically, a server process creates a named pipe with a known name, and other client processes connect to this pipe simply by specifying the name. The key benefit of named pipes is that unrelated processes can use them, even from over the network. All a process needs is the name of the pipe, possibly the name of the host server, and sufficient security to open it. This makes named pipes suitable for simple communication between a server and many clients.
Named pipes can be created only by Windows NT. Windows 95/98 can create a client connection to an existing named pipe, but can't create a new named pipe.
Creating and using named pipes is a complex subject and beyond the scope of this book. However, an example using named pipes can be found in Chapter 18. The win32pipe module supports all pipe operations supported by Windows. For further information on named pipes, please see the Windows SDK documentation or one of the pipe samples that comes with the Python for Windows Extensions.
Every program running under Windows runs in the context of a process. A process is an executing application and has a single virtual address space, a list of valid handles, and other Windows resources. A process consists of at least one thread, but may contain a large number of threads.
Python has the ability to manage processes from a fairly high level, right down to the low level defined by the Win32 API. This section discusses some of these capabilities.
The portable os.system() function works well for executing command-line tools, but not so well for GUI programs such as notepad.
Also notice that os.system waits until the new process has terminated before returning. Depending on your requirements, this may or may not be appropriate.
os.execv provides an interesting (although often useless) way to create new processes: the program you specify effectively replaces the calling process. Technically, the created process is a new process (it has a different process ID), so it doesn't literally replace the old one; the old process simply terminates immediately after the call to os.execv. In effect, the new process appears to overwrite the current one, almost as if the old process becomes the new process. This is rarely what you want, so os.execv is rarely used.
os.execv takes two arguments: a string containing the program to execute, and a tuple containing the program arguments. For example, if you execute the following code:
>>> import os
>>> os.execv("c:\\Winnt\\notepad.exe", ("c:\\autoexec.bat",) )
Notice that your existing Python or PythonWin implementation immediately terminates (no chance to save anything!) and is replaced by an instance of notepad.
Also notice that os.execv doesn't search your system path. Therefore, you need to specify the full path to notepad. You will probably need to change the example to reflect your Windows installation.
Another function, os.execve, is similar but allows a custom environment for the new process to be defined.
os.popen is also supposed to be a portable technique for creating new processes and capturing their output. os.popen takes three parameters: the command to execute, the default mode for the pipe, and the buffer size. Only the first parameter is required; the others have reasonable defaults (see the Python Library Reference for details). The following code shows that the function returns a Python file object, which can be read to receive the data:
>>> import os
>>> file = os.popen("echo Hello from Python")
>>> file.read()
'Hello from Python\012'
>>>
If you try this code from Python.exe, you will notice it works as expected. However, if you attempt to execute this code from a GUI environment, such as PythonWin, you receive this error:
>>> os.popen("echo Hello from Python")
Traceback (innermost last):
File "<interactive input>", line 0, in ?
error: (0, 'No error')
>>>
Unfortunately, a bug in the Windows popen function prevents this working from a GUI environment.
Attempting to come to the rescue is the win32pipe module, which provides a replacement popen that works in a GUI environment under Windows NT; see the following code:
>>> import win32pipe
>>> file=win32pipe.popen("echo Hello from Python")
>>> file.read()
'Hello from Python\012'
>>>
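In Python versions released well after this book, the standard subprocess module provides the same capture-the-output capability portably, and it generally avoids the GUI-environment pitfall entirely. A minimal sketch (the child command here is just an illustrative echo via the Python interpreter itself):

```python
import subprocess
import sys

# Run a child process and capture its standard output,
# much like os.popen(), but with explicit control.
result = subprocess.run(
    [sys.executable, "-c", "print('Hello from Python')"],
    capture_output=True,
    text=True,
)
output = result.stdout
```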
The module win32api provides some additional techniques for manipulating processes. These allow you to perform many of the common requirements for starting new processes, but still don't provide the ultimate in low-level control.
The WinExec function behaves similarly to the os.system function, as described previously, but it provides some concessions for GUI programs; namely, no console is created, and the function doesn't wait until the new process has completed. The function takes two parameters:
The command to execute
Optionally, the initial state for the application's window
For example, to execute notepad, using the default window state, you can execute the following code:
>>> import win32api
>>> win32api.WinExec("notepad")
>>>
notepad should appear in a normal window, and Python continues executing commands before you close notepad.
To show notepad maximized:
>>> import win32api, win32con
>>> win32api.WinExec("notepad", win32con.SW_SHOWMAXIMIZED)
>>>
The win32api module also provides another useful function for creating new processes. The ShellExecute function is primarily used to open documents, rather than start arbitrary processes. For example, you can tell ShellExecute to "open MyDocument.doc." Windows itself determines which process to use to open .doc files and starts it on your behalf. This is the same function Windows Explorer uses when you click (or double-click) on a .doc file: it calls ShellExecute, and the correct program is started. The ShellExecute function takes these parameters:
The handle to the parent window or zero for no parent.
The operation to perform on the file.
The name of the file or program to execute.
Optional parameters for the new process.
The initial directory for the application.
A flag indicating if the application should be shown.
Let's try this function. Start Python or PythonWin from a directory with a .doc file in it, then execute the following commands:
>>> import win32api
>>> win32api.ShellExecute(0, "open", \
"MyDocument.doc", None, "", 1)
33
>>>
Assuming Microsoft Word is installed, this code opens the document MyDocument.doc. If you instead wish to print this document, execute this:
>>> import win32api
>>> win32api.ShellExecute(0, "print", \
"MyDocument.doc", None, "", 1)
33
>>>
Microsoft Word then opens and prints the document.
The win32process module provides the ultimate in process level control; it exposes most of the native Windows API for starting, stopping, controlling, and waiting for processes. But before we delve into the win32process module, some definitions are in order.
Every thread and process in the system can be identified by a Windows handle, and by an integer ID. A process or thread ID is a unique number allocated for the process or thread and is valid across the entire system. An ID is invariant while the thread or process is running and serves no purpose other than to uniquely identify the thread or process. IDs are reused, so while two threads or processes will never share the same ID while running, the same ID may be reused by the system once it has terminated. Further, IDs are not secure. Any user can obtain the ID for a thread or process. This is not a security problem, as the ID is not sufficient to control the thread or process.
A handle provides additional control capabilities for the thread or process. Using a handle, you can wait for a process to terminate, force the termination of a process, or change the characteristics of a running process.
While a process can have only a single ID, there may be many handles to it. The handle to a process determines the rights a user has to perform operations on the process or thread.
Given a process ID, the function win32api.OpenProcess() can obtain a handle. The ability to use this handle is determined by the security settings for both the current user and the process itself.
The win32process module contains two functions for creating new processes: CreateProcess() and CreateProcessAsUser(). These functions are identical, except CreateProcessAsUser() accepts an additional parameter indicating the user under which the process should be created.
CreateProcess() accepts a large number of arguments that allow very fine control over the new process:
The program to execute
Optional command-line parameters
Security attributes for the new process or None
Security attributes for the main thread of the new process or None
A flag indicating if handles are inherited by the new process
Flags indicating how the new process is to be created
A new environment for the new process or None for the current environment
The current directory for the new process
Information indicating how the new window is to be positioned and shown
And returns a tuple with four elements:
A handle to the new process
A handle to the main thread of the new process
An integer identifying the new process
An integer identifying the main thread of the new process
To terminate a process, the win32process.TerminateProcess() function is used. This function takes two parameters:
A handle to the process to be terminated
The exit code to associate with the process
If you initially created the new process, it's quite easy to get the handle to the process; you simply remember the result of the win32process.CreateProcess() call.
But what happens if you didn't create the process? If you know the process ID, you can use the function win32api.OpenProcess() to obtain a handle. But how do you find the process ID? There's no easy answer to that question. The file killProcName.py that comes with the Python for Windows Extensions shows one method of obtaining the process ID given the process name. It also shows how to use the win32api.OpenProcess() function to obtain a process handle suitable for terminating the process.
Once a process is running, there are two process properties that can be set: the priority and the affinity mask. The priority of the process determines how the operating system schedules the threads in the process. The win32process.SetPriorityClass() function can set the priority.
A process's affinity mask defines which processors the process may run on, which obviously makes this useful only in a multiprocessor system. The win32process.SetProcessAffinityMask() function allows you to define this behavior.
This section presents a simple example that demonstrates how to use the CreateProcess API and process handles. In the interests of allowing the salient points to come through, this example won't really do anything too useful; instead, it's restricted to the following functionality:
Creates two instances of notepad with its window position carefully laid out.
Waits 10 seconds for these instances to terminate.
If the instances haven't terminated in that time, kills them.
This functionality demonstrates the win32process.CreateProcess() function, how to use win32process.STARTUPINFO() objects, and how to wait on process handles using the win32event.WaitForMultipleObjects() function.
Note that instead of waiting 10 seconds in one block, you actually wait for one second 10 times. This is so you can print a message out once per second, so it's obvious the program is working correctly:
# CreateProcess.py
#
# Demo of creating two processes using the CreateProcess API,
# then waiting for the processes to terminate.
import win32process
import win32event
import win32con
import win32api
# Create a process specified by commandLine, and
# The process' window should be at position rect
# Returns the handle to the new process.
def CreateMyProcess( commandLine, rect):
# Create a STARTUPINFO object
si = win32process.STARTUPINFO()
# Set the position in the startup info.
si.dwX, si.dwY, si.dwXSize, si.dwYSize = rect
# And indicate which of the items are valid.
si.dwFlags = win32process.STARTF_USEPOSITION | \
win32process.STARTF_USESIZE
# Rest of startup info is default, so we leave alone.
# Create the process.
info = win32process.CreateProcess(
None, # AppName
commandLine, # Command line
None, # Process Security
None, # ThreadSecurity
0, # Inherit Handles?
win32process.NORMAL_PRIORITY_CLASS,
None, # New environment
None, # Current directory
si) # startup info.
# Return the handle to the process.
# Recall info is a tuple of (hProcess, hThread, processId, threadId)
return info[0]
def RunEm():
handles = []
# First get the screen size to calculate layout.
screenX = win32api.GetSystemMetrics(win32con.SM_CXSCREEN)
screenY = win32api.GetSystemMetrics(win32con.SM_CYSCREEN)
# First instance will be on the left hand side of the screen.
rect = 0, 0, screenX/2, screenY
handle = CreateMyProcess("notepad", rect)
handles.append(handle)
# Second instance of notepad will be on the right hand side.
rect = screenX/2+1, 0, screenX/2, screenY
handle = CreateMyProcess("notepad", rect)
handles.append(handle)
# Now we have the processes, wait for them both
# to terminate.
# Rather than waiting the whole time, we loop 10 times,
# waiting for one second each time, printing a message
# each time around the loop
countdown = range(1,10)
countdown.reverse()
for i in countdown:
print "Waiting %d seconds for apps to close" % i
rc = win32event.WaitForMultipleObjects(
handles, # Objects to wait for.
1, # Wait for them all
1000) # timeout in milli-seconds.
if rc == win32event.WAIT_OBJECT_0:
# Our processes closed!
print "Our processes closed in time."
break
# else just continue around the loop.
else:
# We didn't break out of the for loop!
print "Giving up waiting - killing processes"
for handle in handles:
try:
win32process.TerminateProcess(handle, 0)
except win32process.error:
# This one may have already stopped.
pass
if __name__=='__main__':
RunEm()
You run this example from a command prompt as you would any script. Running the script creates two instances of notepad taking up the entire screen. If you switch back to the command prompt, notice the following messages:
C:\Scripts>python CreateProcess.py
Waiting 9 seconds for apps to close
Waiting 2 seconds for apps to close
Waiting 1 seconds for apps to close
Giving up waiting - killing processes
C:\Scripts>
If instead of switching back to the command prompt, you simply close the new instances of notepad, you'll see the following:
C:\Scripts>python CreateProcess.py
Waiting 9 seconds for apps to close
Waiting 8 seconds for apps to close
Our processes closed in time.
C:\Scripts>
In this chapter, we have looked at the various techniques we can use in Python for working with files and processes. We discussed how Python's standard library has a number of modules for working with files and processes in a portable way, and a few of the problems you may encounter when using these modules.
We also discussed the native Windows API for dealing with these objects and the Python interface to this API. We saw how Python can be used to work with and exploit the Windows specific features of files and processes.
Parsix.
What's the problem
How we solve it
Instead of just validating, we should instead parse the input into a shape that makes the compiler aware of what we want to do. In Parsix, the previous example would become:
import parsix.core.Parse

@JvmInline
value class Email(val email: String)

val parseEmail: Parse<String, Email>

fun storeEmailEndpoint(inp: String) {
    when (val parsed = parseEmail(inp)) {
        is Ok -> storeEmail(parsed.value)
        is ParseFailure -> parseFailureToResponse(parsed)
    }
}
Then you can use this code.
Library core values
Do the right thing: as library authors, we strive to make best-practices easy to follow and foster a culture that sees the compiler as an invaluable friend, rather than a foe.
Composable: people can only handle a certain amount of information at a time; the lower, the better.
Build your own Parse
What is Parsed?
Parsed is a sealed class, it models our parse result and can have only two shapes:
Ok(value) models the success case
ParseFailure, another sealed class, models the failure case
If you are familiar with functional programming, this type is a specialised version of Result (also known as Either).
A simple parse
Given that each business domain is different from one another, Parsix offers only small, general-purpose building blocks; domain-specific parsers are written on top of them. For example:

data class Age(val value: UInt)
data class AdultAge(val value: UInt)
data class NotAdultError(val inp: Age) : TerminalError

fun parseAdultAge(inp: Age): Parsed<AdultAge> =
    if (inp.value >= 18u) Ok(AdultAge(inp.value))
    else NotAdultError(inp)

EnumParse
Parse based on object projection
Let's say in our domain we need to work with Name, it must be a string and it has length between 3 and 255 chars:
import parsix.core.Parsed
import parsix.core.Ok
import parsix.core.ParseError
import parsix.core.TerminalError
import parsix.core.parseBetween

MapParse
Parse complex type from Map
Parsing a single value is ok, but more often we will need to build types that require more than one element. For example:
data class User(val name: Name, val age: Age)
data class Name(val value: String)
Before continuing with this tutorial please ensure that the following concepts are familiar to you: pointers, vectors, data stacking, classes
State management is a great way to keep things organized when certain things happen based on certain occasions. Let's take a game as an example. Games run in a continuous loop and perform things based on its current state.
Here's a sample of what happens in a game:
InitializeGame();
while (quit != true)
{
    RunGame();
}
EndGame();
Now let's add states to the picture:
InitializeGame();
state = stateMainMenu;
while (state != stateQuit)
{
    switch (state)
    {
        case stateMainMenu:
            RunMainMenu();
            break;
        case stateGameplay:
            RunGameplay();
            break;
        case stateOptions:
            RunOptions();
            break;
        case statePauseScreen:
            RunPauseScreen();
            break;
    }
}
Using a state manager to handle your application's state gives you more flexibility. For this tutorial I will demonstrate how to implement it in C++ using classes.
The first thing we'll want to do is construct an abstract state class that every other state will derive off of. This is how it will look:
class GameState
{
public:
    virtual void Init() = 0;
    virtual void Cleanup() = 0;

    // Pushing and popping states causes pausing/resuming.
    virtual void Pause() = 0;
    virtual void Resume() = 0;

    virtual void GetEvents() = 0;
    virtual void Update() = 0;
    virtual void Display() = 0;
};
Don't forget your inclusion guards. I omitted them for this tutorial to keep things short.
Our state class has seven methods: Init(), Cleanup(), Pause(), Resume(), GetEvents(), Update(), and Display()
They are all pure virtual methods, meaning they must be implemented in the derived class.
Before moving on I will explain each method a bit:
Init()
- This method is executed at the beginning of each state.
Cleanup()
- This is the complete opposite of the Init() method. It happens when a state is left (popped off the state stack).
Pause() & Resume()
- These states add additional functionality. It allows you to execute code when you wish to pause/resume a state.
GetEvents()
- This method receives input and performs actions as a response. This allows different things to be done based on the current state. For instance, pressing the Escape key while at the "Main Menu" screen would be different from pressing it in the "Pause Screen".
Update()
- This happens each frame.
Display()
- Like the Update() method, this also happens each frame. Having a separate displaying method for each state instead of a single general one allows us to draw different things based on the current state.
---------------------------------------------------------------------------------------------------------------------------------------
The next step to do is to define the state manager which will be responsible for handling all the "state magic":
// Use the STL vector to hold our states.
#include <vector>

#include "GameState.h"

class GameStateManager
{
public:
    void ChangeState(GameState* state);
    void PushState(GameState* state);
    void PopState();
    void Clear();

private:
    std::vector<GameState*> m_states;
};
You'll notice that we have three methods responsible for manipulating the current state: ChangeState(), PushState(), & PopState().
ChangeState() will remove the current state from the stack and add a new state to the end of the stack. Using a stack for state management allows for greater flexibility.
PushState() will pause the current state and add a new state to the end of the stack.
PopState() will remove the last state on the stack and set the current state to the previous state on the stack.
And for our state stack we'll be using a vector of GameState pointers.
The implementation for our state manager is as follows:
#include "GameStateManager.h"

void GameStateManager::Clear()
{
    while (!m_states.empty())
    {
        m_states.back()->Cleanup();
        m_states.pop_back();
    }
}

void GameStateManager::ChangeState(GameState *state)
{
    // Cleanup the current state.
    if (!m_states.empty())
    {
        m_states.back()->Cleanup();
        m_states.pop_back();
    }

    // Store and init the new state.
    m_states.push_back(state);
    m_states.back()->Init();
}

/*
 * Pause the current state and go to a new state.
 */
void GameStateManager::PushState(GameState *state)
{
    if (!m_states.empty())
        m_states.back()->Pause();

    m_states.push_back(state);
    m_states.back()->Init();
}

/*
 * Leave current state and go to previous state.
 */
void GameStateManager::PopState()
{
    if (!m_states.empty())
    {
        m_states.back()->Cleanup();
        m_states.pop_back();
    }

    if (!m_states.empty())
        m_states.back()->Resume();
}
There are many ways to implement a game state design, but this is a way that I write them for projects that I do as it provides for flexibility while maintaining simplicity. Hopefully this tutorial was informative for you.
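The push/pop semantics described above are language-neutral; as a cross-check, here is a minimal Python model of the same stack behavior (class and method names are illustrative only, not part of the C++ tutorial):

```python
class StateManager:
    """Minimal model of the stack-based state manager above."""

    def __init__(self):
        self.states = []

    def change_state(self, state):
        # Remove the current (top) state, then make the new one current.
        if self.states:
            self.states.pop().cleanup()
        self.states.append(state)
        state.init()

    def push_state(self, state):
        # Pause the current state; the new state becomes current.
        if self.states:
            self.states[-1].pause()
        self.states.append(state)
        state.init()

    def pop_state(self):
        # Leave the current state and resume the previous one.
        if self.states:
            self.states.pop().cleanup()
        if self.states:
            self.states[-1].resume()


class RecordingState:
    """Test double that records every lifecycle call."""

    def __init__(self, name, log):
        self.name = name
        self.log = log

    def init(self):
        self.log.append(self.name + ":init")

    def cleanup(self):
        self.log.append(self.name + ":cleanup")

    def pause(self):
        self.log.append(self.name + ":pause")

    def resume(self):
        self.log.append(self.name + ":resume")


log = []
mgr = StateManager()
mgr.push_state(RecordingState("menu", log))   # menu becomes current
mgr.push_state(RecordingState("game", log))   # menu pauses, game current
mgr.pop_state()                               # game cleaned up, menu resumes
```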
13 May 2010 23:06 [Source: ICIS news]
(adds Airgas response in paragraphs 11, 12)
HOUSTON (ICIS news)--Air Products has nominated three executives to the Airgas board as it continues to pursue its $5.1bn (€4bn) takeover bid for Airgas, the company said on Thursday.
Airgas has a staggered board of nine directors, and if elected by shareholders, Air Products’ nominees would make it easier for it to gain a majority on the Airgas board.
The Airgas meeting was to be held on or before 17 September, according to US Securities and Exchange Commission (SEC) filings.
Air Products nominated John Clancey, chairman emeritus of shipping firm Maersk; Robert Lumpkins, chairman of fertilizer producer Mosaic; and Ted Miller, founder of wireless communication provider Crown Castle International.
Air Products also sought to compel Airgas to hold its annual meeting each January. If passed by Airgas shareholders, that would force Airgas to hold another meeting in January 2011, in which Air Products could theoretically put forth more board nominees.
In addition, Air Products proposed that any board nominee from Airgas who loses an election would then be ineligible to serve on the board for three years.
Air Products also proposed that all by-law amendments adopted by the Airgas board after 7 April be repealed.
“These proposals and the independent board nominations provide Airgas shareholders with the opportunity to change the unproductive and costly path being pursued by the current Airgas board,” said Air Products CEO John McGlade.
Air Products in April extended the expiration date for the bid to 4 June. The total value of the offer is $7bn, including $5.1bn of equity and $1.9bn of assumed debt, according to Air Products.
Airgas, which previously said the offer “grossly undervalues” it, reiterated that position on Thursday and said it would evaluate Air Products's candidates and proposals "in due course".
"We believe that Air Products's interests are diametrically opposed to those of our stockholders and that Air Products chose its candidates and proposals precisely to help advance its goal of acquiring Airgas at the cheapest possible price," Airgas said.
($1 = €0.79)
Much has been said about .NET’s use of XML, and unfortunately, a lot of it is hyperbole. Still, two things are undeniable: .NET puts an integrated set of XML tools into the programmer’s hands, and this is really the first time a Microsoft development platform has had integrated XML support out of the box. This is complete support, with well over 150 classes to be found in the System.XML namespace, a dizzying number. What follows here is my attempt to give the uninitiated a guided tour of some of the important XML classes in .NET and a look at the philosophy behind its API.
The “pull model” vs. the DOM and SAX
If you’ve used XML before, you’re probably familiar with the two main flavors of parser that are available, the Document Object Model (DOM) and the Simple API for XML (SAX). These two parser types operate in fundamentally different ways: DOM loads an entire XML document into a hierarchical tree in memory, while SAX runs through a document one element at a time, handing each element back to an application through some kind of communication interface.
The DOM? SAX? What’s that?
If you think “DeLuise” when someone says DOM, or “Coltrane” when someone says SAX, you might want to check out Builder.com’s "Remedial XML" series to get your bearings before you continue here. In addition, if you’re VB6-literate, see “Creating XML documents with the DOM in VB6” for more on DOM parsing.
Both these APIs have their respective problems, as well. The DOM is terribly consumptive of resources, especially with large documents. SAX is not very intuitive to use, requires the programmer to keep track of previously processed elements, and doesn’t really provide a way to work only with selected parts of a document.
The .NET XML classes strive to reach a happy medium between these two APIs and incorporate the best features of both into something Microsoft calls the “pull model.” Not much of a name, I know. I’d have called it XML streams or XML stack, but I suppose those wouldn’t be sexy enough. Anyway, with .NET, you use the pull model classes to parse and create XML documents using a simple stream-like interface. Two abstract classes, System.XML.XMLWriter and System.XML.XMLReader provide the basis for .NET’s pull model XML support.
Writing XML with XMLWriter
The XMLWriter class is essentially an XML-aware wrapper for an output stream that makes creating an XML document a breeze. The class includes methods for writing all types of XML content, as you can see in Figure A.
Figure A
XMLWriter’s XML content creation methods
The XMLTextWriter class is a concrete child class of XMLWriter that you can use in your applications as is. Simply pass an instance of an output stream object to the constructor and begin writing XML. You could also extend the base XMLWriter class yourself and create a custom writer if you so desire.
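As a sketch of the sort of code involved (note that the actual .NET classes are spelled XmlWriter/XmlTextWriter, with lowercase "ml"), writing a small document to the console might look like this:

```csharp
using System;
using System.Xml;

class WriterDemo
{
    static void Main()
    {
        // Wrap an output stream -- here the console's TextWriter -- in a writer.
        XmlTextWriter writer = new XmlTextWriter(Console.Out);
        writer.Formatting = Formatting.Indented;

        writer.WriteStartDocument();
        writer.WriteStartElement("order");            // <order
        writer.WriteAttributeString("id", "1001");    //   id="1001">
        writer.WriteElementString("item", "widget");  // <item>widget</item>
        writer.WriteEndElement();                     // </order>
        writer.WriteEndDocument();
        writer.Flush();
    }
}
```

Passing a file path or a Stream to the constructor works the same way; the element and attribute names above are invented for illustration.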
Reading XML with XMLReader
If XMLWriter is a wrapper for an output stream, XMLReader is best viewed as an XML-aware wrapper for an input stream. The class’s Read method allows you to quickly traverse a document, but it's forward-only—no going backward. You can retrieve the contents of the current node in the document through the Value property. By default, XMLReader performs a depth-first traversal of the XML document, meaning it reads child elements before sibling elements in a document. If that sounds confusing, it might help to think that this is the same way you read an XML document yourself. See Figure B if you’re still fuzzy.
The base XMLReader class does not support validation, although it will report well-formedness errors in a document by raising an XMLException. XMLReader has a few child classes that extend it with custom capabilities, and as always, you’re free to extend XMLReader yourself and develop a custom parser for your application.
XMLTextReader for bare-bones parsing
XMLTextReader is a concrete subclass of XMLReader that provides the most basic XML parsing support. It doesn’t validate, but it's the fastest of the .NET XML readers and is very configurable. You can instantiate an XMLTextReader from several different sources using any of 14 constructors, including a file, URL, or an input stream.
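A minimal sketch of a forward-only read loop (again, the class is spelled XmlTextReader in System.Xml; the sample XML here is invented for illustration):

```csharp
using System;
using System.IO;
using System.Xml;

class ReaderDemo
{
    static void Main()
    {
        string xml = "<order id=\"1001\"><item>widget</item></order>";
        // XmlTextReader can also be constructed from a file path or URL.
        XmlTextReader reader = new XmlTextReader(new StringReader(xml));

        while (reader.Read())  // forward-only, depth-first traversal
        {
            if (reader.NodeType == XmlNodeType.Element)
                Console.WriteLine("Element: " + reader.Name);
            else if (reader.NodeType == XmlNodeType.Text)
                Console.WriteLine("Value:   " + reader.Value);
        }
    }
}
```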
XMLValidatingReader supports validation
XMLValidatingReader goes a bit beyond XMLTextReader by providing XSD, DTD, and XDR validation and externals resolution. Schemas are cached in an XMLSchemaCollection and can be added programmatically via XMLValidatingReader’s Schemas property or by using an XMLUrlResolver class (see below) to resolve an external reference to a schema or DTD embedded in the document. The simplest way to create an XMLValidatingReader is to base it upon an XMLTextReader using the appropriate constructor overload.
XMLNodeReader for node-based parsing
For those who are simply hooked on the DOM, the XMLNodeReader class layers the XMLReader API over a DOM-like parser, so you can easily parse DOM document trees using the forward-only pull model. To use XMLNodeReader, create an XMLDocument object representing the document you want to parse. You can then access that document as a series of XMLNode objects.
The supporting cast
In addition to the “big three” XML readers, you should be familiar with a variety of utility classes in the System.XML namespace:
- XMLResolver is the abstract base class for XMLUrlResolver, which I mentioned earlier. It provides external entity, schema, and DTD resolution and handles import resolution for XSL and schema documents. XMLUrlResolver, by default, provides resolution services for all classes found in the System.XML namespace. You can, of course, extend XMLResolver to provide custom resolution for your application. Most classes include an XMLResolver property for specifying a custom resolver.
- XMLConvert is a handy utility class containing a set of static methods you can use to convert XML element data into native .NET types and to handle character encoding.
- XMLNameTable provides a shortcut method of comparing nodes. It stores the names of all elements and attributes found in a document as object references instead of strings. You can then use it to compare two node names as objects, which is a less expensive operation than comparing two strings for equality.
- XMLNode represents a single node in an XML document, providing the standard DOM-like navigation and information members. XMLDocument extends XMLNode to provide the top-level node for a DOM traversal of an XML document.
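As a small illustration of XMLConvert (spelled XmlConvert in System.Xml), its static methods convert element data to native types and back in a locale-independent way:

```csharp
using System;
using System.Xml;

class ConvertDemo
{
    static void Main()
    {
        // String element data to native .NET types...
        int qty = XmlConvert.ToInt32("42");
        double price = XmlConvert.ToDouble("19.95");

        // ...and back again, in locale-independent XML form.
        Console.WriteLine(XmlConvert.ToString(qty));    // "42"
        Console.WriteLine(XmlConvert.ToString(price));  // "19.95"

        // Encode an arbitrary string into a valid XML name.
        Console.WriteLine(XmlConvert.EncodeName("Order Total"));  // "Order_x0020_Total"
    }
}
```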
<script id="menuTemplate" type="text/html">
<option value="${MenuId}" level="${Level}"> ${Name}
//call some js method here?
</option>
</script>
Hi,
I want to call a JavaScript method on LinkButton click.
How can I do this?
I'm trying this:
StringBuilder sb = new StringBuilder();
sb.Append("<script language=javascript>");
sb.Append("ShowUnCompleteOrderDialog();");
sb.Append("</script>");
ScriptManager.RegisterStartupScript(this.Page, GetType(), "test", sb.ToString(), false);
It works, but it's giving me some problems, so please tell me another way.
Hi
Could anyone give me an example of how to call a server-side function from JavaScript? I then want to call that JavaScript function on a Button OnClick() event.
Thanks in Advance
Shabbir:
WCF - First call to a Callback instance from the server takes considerable time to reach the client.
Hello.
There's a performance problem we are facing that we are unable to fix using DuplexChannels.
The first call made by the client to the server reaches the server instantly. The first call made by the server to the first client using a Callback proxy will take several seconds to reach the client. All subsequent calls to that client (as well as to other clients) will reach and execute instantly.
The client connects to the server using a proxy to IServiceProvision, supplies an instance of an implementation of IClientCallback and calls IServiceProvision.Start(). The server instantly processes the call and makes an asynchronous call to IClientCallback.DoWork(), which takes several seconds to enter the client's DoWork() implementation. The remainder of the execution then runs smoothly. Also, the server will experience such a slowdown only for the very first call of the first client that connects to it.
Is it normal for the Callback interface to take so much time to warmup, only on the first call to the first client?
Below is the full source code. Thanks for any information you may provide.
Cheers
Mike
Shared Class Library
using System.ServiceModel;
namespace Test
{
[ServiceContract(Namespace = "", CallbackContract = typeof(IClientCallback))]
public interface IServiceProvision
{
[OperationContract]
void Start();
[OperationContract]
void
US6856993B1 - Transactional file system
Info
- Publication number
- US6856993B1
- Authority
- US
- United States
- Prior art keywords
- file
- transaction
- file system
- associated
The present invention is directed generally to computers and file systems.
Briefly, the present invention provides a system and method via which multiple file system operations may be performed as part of a single user-level transaction. The transactional file system of the present invention enables a user to selectively control the scope and duration of a transaction within the file system.
Other aspects of the invention include providing name-space and file data isolation among transactions and other file system operations. Namespace isolation is accomplished by the use of isolation directories to track which names belong to which transaction. As a result, none.
Other advantages will become apparent from the following detailed description when taken in conjunction with the drawings. The computer 20 includes a file system 36 associated with or included within the operating system 35 (e.g., Microsoft Corporation's Windows® 2000, formerly Windows® NT), such as the Windows NT® File System (NTFS).
While the present invention is described with respect to the Windows® 2000 operating system and the Microsoft Windows NT® file system (NTFS), those skilled in the art will appreciate that other operating systems and/or file systems may implement and benefit from the present invention.
The Transactional File System General Architecture.
To determine when TXF 70 needs to enlist for a transaction, as generally represented in that:
The legal flag values are as follows:
Transactional Access—Read And Write Isolation.
In the compatibility matrix set forth below, a “Yes” means the modes are compatible with respect to the additional transactional restrictions.
In a preferred implementation using the Microsoft Windows® 2000 (or NT®) operating system, further details may be found in “Inside Windows NT®,” by Helen Custer, Microsoft Press (1993); “Inside the Windows NT® File System,” Helen Custer, Microsoft Press (1994); and “Inside Windows NT®, Second Edition” by David A. Solomon, Microsoft Press (1998), hereby incorporated by reference herein.
In general, because TxF transactions have lifetimes that are independent of the NTFS handles, the TxF structures have a lifetime that is independent of the NTFS structures. When both are present, they are linked together as shown in
In accordance with another aspect of the present invention and as generally described below, for logging and recovery of persistent state, the TxF 70 uses a Logging Service 74. Multi-level logging is accomplished as generally represented in the drawings.
As will be understood, this is important because it is not correct to repeat an operation during redo, or to undo an operation that never quite happened. As an example, consider the following situation that may be logged in the TxF log: a TxF LSN (e.g., 1323) is stored in the record for this file (e.g., File3), as generally shown in FIG. 11. Note that another data structure may alternatively be used; however, the per-file record is already available on each NTFS volume.
Since undo records are used to abort incomplete transactions, records for which the file-id does not exist as seen by NtfsTxFGetTxFLSN may be simply ignored.
In accordance with one aspect of the present invention and as represented in the drawings:
If only the page data (the out-of-line part) was written to disk, the system will not find the inline (log) record, and thus the page will not be found and there is nothing to restore. The state is known to be consistent.
However, if the record is present in the log, the page may or may not have been flushed before the system crash.
It should be noted that the use of a unique signature at the end of each sector further detects torn (partial) writes, wherein some of the page, but not all, was copied. Note that the disk hardware guarantees that a sector will be fully written, but does not guarantee that a page of data (e.g., eight sectors) will be written as a unit. In such an event, the cycle counts will be some mixture of “n” and (presumably) “n−1” values, and the signature will not match the logged signature information. Such a situation is treated as if the entire page was not persisted. Without this checking, the system could wrongly conclude that the entire page data was persisted when in fact it was not.
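As a rough sketch of this torn-write detection idea — using an invented page layout (a 4-byte cycle count plus a signature at the tail of each 512-byte sector), not the patent's actual on-disk format:

```python
SECTOR = 512
SECTORS_PER_PAGE = 8  # one 4096-byte page

def stamp_page(payload: bytes, cycle: int, signature: bytes) -> bytes:
    """Write the cycle count and signature into the tail of every sector."""
    tail = cycle.to_bytes(4, "little") + signature
    page = bytearray(payload.ljust(SECTOR * SECTORS_PER_PAGE, b"\0"))
    for s in range(SECTORS_PER_PAGE):
        end = (s + 1) * SECTOR
        page[end - len(tail):end] = tail
    return bytes(page)

def page_is_consistent(page: bytes, expected_cycle: int, signature: bytes) -> bool:
    """A page is intact only if every sector carries the expected cycle count
    and signature; a mixture of n and n-1 counts indicates a torn write."""
    tail_len = 4 + len(signature)
    for s in range(SECTORS_PER_PAGE):
        end = (s + 1) * SECTOR
        tail = page[end - tail_len:end]
        if int.from_bytes(tail[:4], "little") != expected_cycle or tail[4:] != signature:
            return False
    return True
```

A torn write that persists only some sectors leaves a mixture of old and new cycle counts, which the checker reports as inconsistent — exactly the "treated as if the entire page was not persisted" case described above.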
However, the log may be filled with pages that back memory for the (possibly many) active files, essentially turning the sequential log into a random page file that also doubles as a recovery log, which may become a bottleneck in the system.
The Version Control Blocks are linked in a list in time order. The oldest version is the base stream, and the change table does not contain any entries for this.
By way of another example, if FileObjectB accesses page two-hundred (200) and that page is in memory, the access simply works. However, if not, a page fault is generated, and the read is satisfied by reading it from the log at LSN 2500. As another example, consider FileObjectC accessing page one-hundred (100). Since this page has not been changed in version V2, version V1 is checked and the read is satisfied from either the memory image (if resident) or by reading the log at LSN 2000.
After a crash, recovery is relatively straightforward, as redo information of the committed transactions is in the log, and can simply be applied to the main data stream. Note that the version control blocks are in-memory structures, and therefore do not exist at recovery time.
There are qualitative differences in when and where the I/Os are done. In the deferred-redo scheme, the most recent memory stream is backed by the log, not the base file. This is very likely to be the most commonly used stream because it handles the update work, comparatively burdening the log. For versioned readers, both schemes use the log as a paging device, so the difference there is not significant.
Namespace Isolation.
To accomplish namespace isolation to handle the above-described scenarios, the present invention preserves the state of the namespace for use by the other transactions for the duration of the transaction. To this end, an isolation directory is used.
By way of example of how the isolation directory is used, consider a file F3 deleted by a transaction T1. If a transaction (e.g., Tid1) is requesting deletion at step 1900, then step 1904 is executed, which essentially renames the file.
In this manner, the present invention facilitates a collated search, e.g., find the next name in the collated order, using NTFS collation rules and NTFS routines. The present invention is space efficient, and allows concurrent read/write access.
These entries reserve the name for the transaction, but make it invisible to everyone. Note that the reservation is performed to allow a rollback to work.
Floated Memory Mapped Sections.
The file system, which knows when a transaction commits or aborts and is, for example, cleaning up the data structures affected by that transaction, can query the memory manager.
Priority Applications (1)
Applications Claiming Priority (15)
Related Child Applications (4)
Publications (1)
Family
ID=24150363
Family Applications (8)
Family Applications After (7)
Country Status (6)
Cited By (109)
Families Citing this family (156)
Citations (10)
Family Cites Families (41)
- 2000
- 2000-03-30 US US09/539,233 patent/US6856993B1/en active Active
- 2001
- 2001-03-16 CN CN2005101036995A patent/CN1746893B/en active IP Right Grant
- 2001-03-16 CN CNB018080634A patent/CN100337233C/en active IP Right Grant
- 2001-03-16 EP EP20010918767 patent/EP1269353A2/en not_active Ceased
- 2001-03-16 AU AU4580601A patent/AU4580601A/en active Pending
- 2001-03-16 WO PCT/US2001/008486 patent/WO2001077908A2/en active Application Filing
- 2001-03-16 CN CNB2005101036976A patent/CN100445998C/en active IP Right Grant
- 2001-03-16 JP JP2001575287A patent/JP4219589B2/en active Active
- 2004
- 2004-12-10 US US11/009,662 patent/US7257595B2/en active Active
- 2004-12-10 US US11/009,228 patent/US7512636B2/en active Active
- 2004-12-13 US US11/010,820 patent/US7418463B2/en active Active
- 2005
- 2005-02-14 US US11/057,935 patent/US7613698B2/en active Active
- 2009
- 2009-10-22 US US12/604,209 patent/US8010559B2/en active Active
- 2011
- 2011-07-13 US US13/181,703 patent/US8510336B2/en active Active
- 2013
- 2013-08-09 US US13/963,675 patent/US20130325830A1/en not_active Abandoned
NAME
CLI::Dispatch - simple CLI dispatcher
SYNOPSIS
* Basic usage

In your script file (e.g. script.pl):

  #!/usr/bin/perl
  use strict;
  use lib 'lib';
  use CLI::Dispatch;

  CLI::Dispatch->run('MyScript');

And in your "command" file (e.g. lib/MyScript/DumpMe.pm):

  package MyScript::DumpMe;
  use strict;
  use base 'CLI::Dispatch::Command';
  use Data::Dump;

  sub run {
    my ($self, @args) = @_;
    @args = $self unless @args;

    # do something
    print $self->{verbose} ? Data::Dump::dump(@args) : @args;
  }
  1;

From the shell:

  > perl script.pl dump_me "some args" --verbose
  # will dump "some args"

* Advanced usage

In your script file (e.g. script.pl):

  #!/usr/bin/perl
  use strict;
  use lib 'lib';
  use MyScript;

  MyScript->run;

And in your "dispatcher" file (e.g. lib/MyScript.pm):

  package MyScript;
  use strict;
  use base 'CLI::Dispatch';

  sub options {qw( help|h|? verbose|v stderr )}
  sub get_command { shift @ARGV || 'Help' }  # no camelization
  1;

And in your "command" file (e.g. lib/MyScript/escape.pm):

  package MyScript::escape;
  use strict;
  use base 'CLI::Dispatch::Command';

  sub options {qw( uri )}

  sub run {
    my ($self, @args) = @_;
    if ( $self->{uri} ) {
      require URI::Escape;
      print URI::Escape::uri_escape($args[0]);
    }
    else {
      require HTML::Entities;
      print HTML::Entities::encode_entities($args[0]);
    }
  }
  1;

From the shell:

  > perl script.pl escape "query=some string!?" --uri
  # will print a uri-escaped string

* Lazy way

In your script file (e.g. inline.pl):

  use strict;
  MyScript::Inline->run_directly;

  package MyScript::Inline;
  use base 'CLI::Dispatch::Command';

  sub run {
    my ($self, @args) = @_;
    # do something...
  }

From the shell:

  > perl inline.pl -v

* Using subcommands

In your script file (e.g. script.pl):

  #!/usr/bin/perl
  use strict;
  use lib 'lib';
  use CLI::Dispatch;

  CLI::Dispatch->run('MyScript');

And in your "command" file (e.g. lib/MyScript/Command.pm):

  package MyScript::Command;
  use strict;
  use CLI::Dispatch;
  use base 'CLI::Dispatch::Command';

  sub run {
    my ($self, @args) = @_;
    # create a dispatcher object configured with the same options
    # as this command
    my $dispatcher = CLI::Dispatch->new(%$self);
    $dispatcher->run('MyScript::Command');
  }
  1;

And in your "subcommand" file (e.g. lib/MyScript/Command/Subcommand.pm):

  package MyScript::Command::Subcommand;
  use strict;
  use base 'CLI::Dispatch::Command';

  sub run {
    my ($self, @args) = @_;
    # do something useful
  }
  1;

From the shell:

  > perl script.pl command subcommand "some args" --verbose
  # will do something useful
DESCRIPTION
CLI::Dispatch is a simple CLI dispatcher. Basic usage is almost the same as that of App::CLI, but you can omit the dispatcher class if you don't need to customize it. Command/class mapping is slightly different, too (ucfirst for App::CLI, camelize for CLI::Dispatch). And unlike App::Cmd, a CLI::Dispatch dispatcher works even when some of the subordinate commands are broken for various reasons (unsupported OS, lack of dependencies, etc.). Those are the main reasons why I reinvented the wheel.
See CLI::Dispatch::Command to know how to write an actual command class.
METHODS
run
takes optional namespaces, and parses @ARGV to load an appropriate command class, and runs it with options that are also parsed from @ARGV. As shown in the SYNOPSIS, you don't need to pass anything when you create a dispatcher subclass, and vice versa.
options
specifies an array of global options every command should have. By default, help and verbose (and their short forms) are registered. Command-specific options should be placed in each command class.
default_command
specifies a default command that will run when you don't specify any command (when you run a script without any arguments); help by default.
get_command
usually looks for a command from @ARGV (after global options are parsed), transforms it if necessary (camelize by default), and returns the result.
If you have only one command, and you don't want to specify it every time when you run a script, let this just return the command:
sub get_command { 'JustDoThis' }
Then, when you run the script, the YourScript::JustDoThis command will always be executed (and the first argument won't be considered as a command).
convert_command
takes a command name, transforms it if necessary (camelize by default), and returns the result. You may also want to override this to convert short aliases for long command names.
sub convert_command { my $command = shift->SUPER::convert_command(@_); return ($command eq 'Fcgi') ? 'FastCGI' : $command; }
get_options
takes an array of option specifications and returns a hash of parsed options. See Getopt::Long for option specifications.
load_command
takes a namespace, and a flag to tell if the help option is set or not, and loads an appropriate command class to return its instance.
run_directly
takes a fully qualified package name, loads it if necessary, and runs it with options parsed from @ARGV. This is mainly used to run a command directly (without configuring a dispatcher), which makes writing a simple command easier. You usually don't need to call this yourself; it is invoked internally when you run a command (based on CLI::Dispatch::Command) directly, without instantiation.
new (since 0.17)
creates a dispatcher object. You usually don't need to use this (because CLI::Dispatch creates this internally). If you need to copy options from a command to its subcommand, this may help.
SEE ALSO
App::CLI, App::Cmd, Getopt::Long
AUTHOR
Kenichi Ishigaki, <ishigaki@cpan.org>
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Introduction
This post is about Intel® Memory Protection Extensions (Intel® MPX) support in Microsoft Visual Studio* 2015; content provided by Gautham Beeraka, George Kuan, and Juan Rodriguez from Intel Corporation.
Overview
Update 1 for Visual Studio 2015 was announced on November 30, 2015. This update includes experimental compiler and debugger support for Intel MPX. Intel MPX can check all pointer reads and writes to ensure they remain within their declared memory bounds. This technology can detect buffer overflows and stop program execution at runtime, averting possible system compromises. It enables C/C++ code to make use of the new MPX instruction set and registers introduced in the 6th Generation Intel® Core™ Processors (“MPX-enabled platform”).
The Microsoft Visual C++ Compiler* and linker can now generate checks automatically, a capability enabled by specifying a command line option.
This blog explains how you can use automatic MPX code generation and debug MPX-enabled binaries. For more details on Intel MPX, please see the Intel MPX Technology web page.
How to enable automatic MPX code generation
Visual Studio 2015 Update 1 introduces a new compiler option: /d2MPX.
/d2MPX currently supports:
- Checking memory writes for potential buffer overflows. This provides protection for local and global pointers and arrays.
- Extensions to the calling conventions to automatically propagate bounds associated with pointer arguments.
To enable automatic MPX code generation for your project:
In Visual Studio, add the /d2MPX option in the Additional Options box (Project|Properties|Configuration Properties|C/C++|Command Line|Additional Options), Figure 1.
Figure 1. Add the /d2MPX compiler option for each desired configuration.
Usage Example
The following example is a program that contains an illustrative buffer overflow.
Figure 2. Code with buffer overflow that will be detected with /d2MPX.
In Figure 2, the statement inside of the for loop would have overflowed the out array when it attempts to write past the end of the array since out is smaller than string str. Just before the program would have performed the out-of-bounds store, the MPX hardware will generate a #BR (bound range exceeded) exception, which is manifested as a structured exception handling (SEH) exception “Array bounds exceeded”. The default behavior in absence of an exception handler for the array bounds exceeded exception is immediate termination of the program. Alternatively, one can add an exception handler as shown in the example code to log the exception or to perform some context dependent recovery such as tearing down the process all the while having avoided the out-of-bounds store.
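Figure 2 itself is not reproduced in this text, so the following is a hypothetical reconstruction of the kind of code the paragraph describes (the names `out` and `str` come from the text above); it is a sketch, not the article's actual listing:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copies str into out one character at a time, mirroring the article's
 * description of the for loop in Figure 2.  Nothing in the function checks
 * the destination size, so if `out` is smaller than `str` the loop writes
 * past the end of `out` -- the out-of-bounds store that MPX (via /d2MPX)
 * would intercept with a #BR / "Array bounds exceeded" exception. */
void copy_string(char *out, const char *str)
{
    for (size_t i = 0; i <= strlen(str); i++)   /* <= also copies the '\0' */
        out[i] = str[i];
}

/* In-bounds use works on any CPU:
 *     char big[32];
 *     copy_string(big, "fits fine");
 *
 * An overflowing call such as
 *     char out[4];
 *     copy_string(out, "this string is far too long");
 * is undefined behavior; compiled with /d2MPX and run on an MPX-enabled
 * platform, it raises the SEH "Array bounds exceeded" exception just
 * before the out-of-bounds store. */
```

An SEH `__try/__except` handler around such a call (MSVC-specific, as in the article) could log the exception or tear the process down cleanly.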
Steps to build and run the example:
Check that the Intel® MPX Runtime Driver is installed on your Microsoft® Windows® 10 November 2015 Update or greater system by verifying its presence in Device Manager under System devices (Figure 3). If it is absent, please download and install the driver from the Intel® Memory Protection Extensions Enabling Guide.
Install Visual Studio 2015 Update 1. Note, if Visual Studio is installed with the phone emulators, Hyper-V will have to be disabled (bcdedit /set hypervisorlaunchtype off and reboot) because this version of Windows does not expose MPX instructions to the guest.
Create a Win32 Console Application named “MPXExample” and use the code in Figure 2 for the driver code.
As noted above, please, double check that the /d2MPX option is enabled for the current Configuration.
Build the project for the X64 platform from within Visual Studio. This should produce an MPXExample.exe binary.
Execute the binary MPXExample.exe on an MPX-enabled platform with Windows 10 – which has the OS support for MPX.
- To have the Visual Studio Debugger break on the array bounds exceeded exception, please enable the option for “Array bounds exceeded” in Exception Settings (Debug|Windows|ExceptionSettings) as shown in Figure 4. Executing MPXExample.exe in the debugger should now break on the exception (Figure 5). In this example, the #BR exception is thrown when MPX detects that we are about to write beyond the upper bound of the out array (Figure 6).
Figure 3. Verify that the Intel MPX Runtime Driver is installed via Device Manager.
Figure 4. Enable break on the array bounds exceeded exception in the Exception Settings window to have the Visual Studio Debugger break on the exception.
Figure 5. The Visual Studio Debugger breaks on the array bounds exceeded exception.
Figure 6. The exception is thrown when checking the upper bound as shown in this snapshot of the Disassembly window.
Visual Studio 2015 Update 1 supports the display and manipulation of the MPX registers via both the Register (Figure 7) and Watch windows (Figure 8) when running on an MPX-enabled platform.
Figure 7. To observe the contexts of the MPX bounds registers, enable MPX in the Debugger Register window.
Figure 8. Adding a bounds register to the Debugger Watch window is also simple. BND0.UB and BND0.LB in the Watch window refer to the upper and lower bounds in the BND0
register respectively. Note that the upper bound of a bounds register is displayed in 2’s complement form.
How to tell if a binary is MPX-enabled
Run dumpbin /headers MPXExample.exe. The MPX debug directory entry should be similar to what is shown in Figure 9.
Figure 9. To tell if a binary is MPX-enabled, check whether a binary includes the mpx debug directory using dumpbin. The mpx debug directory should be listed in the Debug Directories section in the dumpbin output.
Must I compile everything with MPX?
You don’t have to compile all of your code with MPX enabled. A mixture of MPX and non-MPX enabled code will execute correctly. However, code compiled without MPX support will not have any MPX checks.
What hardware and version of Windows do I need?
To gain the benefits of MPX, MPX-enabled code should be executed on an MPX-enabled platform running a version of the Windows Operating System that is MPX aware. As of today, MPX is supported on the following:
- 6th Generation Intel® Core™ processors (the MPX-enabled platform)
- Windows 10 November 2015 Update, or greater
What if I execute MPX-enabled code on a platform or on a version of Windows that does not support MPX?
The MPX-enabled code will execute correctly, but it will not benefit from MPX. You need to execute the code on an MPX-enabled platform with an MPX-aware operating system. The MPX instructions will be treated as NOPs, so you might experience a performance decrease in these scenarios.
Performance Impact
MPX technology provides a powerful safeguard against buffer overflow. Inserting checks for every write to memory may incur some execution time and memory footprint overhead. The amount of overhead is tolerable during testing. However, when enabled for production code, the developer must balance whether the improved memory safety outweighs their customers’ performance needs. We plan to improve performance based on feedback.
Known Issues
There is a known issue with the x86 debug build where debug instrumentation interferes with MPX operation.
More Information and Feedback
For more information on how Intel MPX works, details on MPX intrinsic functions, calling convention extensions, and runtime behavior of MPX, please refer to the Intel® Memory Protection Extensions Enabling Guide.
Please try out automatic MPX code generation in Visual Studio 2015 Update 1. We are eager to hear about your experiences, especially in terms of usability, code size and runtime performance impact, and your suggestions for how to improve this feature. Please leave feedback in the comment box below or at the Intel ISA Extensions Forum on Intel® Developer Zone.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. Check with your system manufacturer or retailer or learn more at Intel MPX Technology web page. Intel, the Intel logo, 6th Generation Intel® Core are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © 2016 Intel Corporation
Sounds great!
Out of curiosity, have you observed similar overheads as the ASan team?
Concretely: "time difference is 2.5x, which may be caused by the naive compiler implementation, but there is also a 4x RAM usage increase."
("Performance Impact" doesn't seem to include any numbers at the moment.)
Here are the full results: github.com/…/AddressSanitizerIntelMemoryProtectionExtensions
Hi MattPD
This is Juan from Intel. Thanks for noticing today's VC++ blog post. We have not done any specific comparisons vis a vis ASan. We'll take a look at the example on the link you provided.
One of the reasons why the feature is being rolled out in VC++ as experimental is to get the functionality out there and, based on feedback from users, develop actionable improvements in future VC++ updates.
Each compiler has implemented support for MPX given its design, deployment, and servicing constraints and priorities. What we wanted to focus on was getting the functionality out to developers. Please take a look at the Intel® Memory Protection Extensions Enabling Guide, which has been updated to cover VC++ 2015 Update 1.
We are looking for developer feedback on the feature: best way is to have developers out there give it a try, as your "mileage may vary" :)
Thanks
-Juan
What parts of memory are protected? For example, can it protect against an overrun of a heap-allocated array?
Is the intent for this feature to be only enabled in debug builds? or in other words should this be enabled in release builds running at customer environment?
So, for now we should only test/enable it in x64, because of the known issue?
thx,
Vertex
Just gave it a try – every time I start the test program, the system bluescreens with KMODE_EXCEPTION_NOT_HANDLED in MpxRuntime.sys. (Windows 10, Xeon ES-2667)
There seems to be a robustness failure in the fallback situation.
I don't know if this is a Microsoft OS issue, a compiler issue (checking CPU capabilities on entry to the application) or a driver issue.
Last night I saw this post, and being unsure if my new(ish) July 2015 Lenovo Z50 (i5 CPU) (running Win10 Pro x64,with all patches, including the Nov 2015 update) supported MPX, I manually installed the MPX driver as described. I have VS 2015 Update 1 installed also.
I compiled up the code etc as described above, having verified the driver (dated Aug 2015) was installed, and began working through it in the debugger.
The code then BSODed my system with a Kernel Mode Exception Handler not found BSOD. The problem was repeatable.
I got the impression from the post above that the system was fail safe (e.g. the comments about NOPs etc.), yet my experience suggests otherwise….
My experience is the same as Michael's – the BSOD is in the MPXRuntime.sys driver.
For info – the MPXRuntime.sys driver was version 1.0.0.8, dated sometime in Aug 2015.
Good stuff. Is there a chance that the Haswell MOVBE instruction could be added to intrinsics ?
Sounds very interesting technology but could you please describe your graphs using plain text? My screen reader can't read graphs and especially the platforms that this feature supports is important for me to know.
@Blind User
Would it help to share the source code to the example (figure 2)?
The last figure is supported hardware/software. MPX is currently supported on 6th generation Intel core processors running Windows 10 November 2015 update or higher with the MPX driver.
Thanks for the feedback.
Eric (ebattali@microsoft.com)
The known issue is really vague. I'm afraid to experiment with this because I don't know how exactly it's going to interfere.
Running an application with MPX without the VS debugger attached causes the application to crash when a buffer overflow occurs. Trying to attach VS to it when this happens causes the application to exit with code 0xc0000409.
@Michael – MPX is not supported in Intel(R) Xeon(R) ES-2667. That product line is not based on the 6th Generation Intel® Core™ (code named "Skylake"). Please, follow up with me via email to juan.a.rodriguez at intel.com.
@Mike Diak – MPX is not supported on the Lenovo Z50 – per the specs, it is based on 4th Generation Intel® Core™, not the 6th Generation Intel® Core™ (code named "Skylake"). Please, follow up with me via email to juan.a.rodriguez at intel.com.
High level comments:
Jose – I will follow up with you, but as it stands this seems terribly flaky and fragile.
Microsoft – If only the very latest CPUs support this ability, is there really much point in supporting it yet, especially in this immature form? Clearly it will be 1-2 years before there are significant numbers of PCs in the field with this hardware, particularly given the generally slower purchase rate of PCs and longer lifecycles before they are replaced.
1) Most users will not know whether their CPU supports this. Thus, at a minimum, the driver needs to ship in a wrapper installer (e.g., an InstallShield installer) that can interrogate the CPU to check whether support is available BEFORE installing the driver.
2) The driver needs to fail safe. If it's running on an unsupported CPU, it needs to do something more sensible than just throw a BSOD, e.g. unload itself? This seems a terrible omission.
3) Surely the compiler/runtime (which supposedly treats this as a NOP) should do something sensible at application startup. If things are this risky stability-wise, it would be a very brave developer who shipped software compiled with this option today, due to the risk of BSODs.
@Blind User
Thanks for your feedback. I have pasted the Figure 2 sample code below.
// mpxexample.cpp
// compile with: /d2MPX
#include "stdafx.h"
#include <windows.h>
const int OUTBUFSIZE = 42;
wchar_t out[OUTBUFSIZE];
void copyUpper(wchar_t* str, size_t size) {
__try {
for (unsigned int i = 0; i < size; i++) {
// buffer overflow when attempting to write the 43rd wchar
out[i] = towupper(str[i]);
}
}
__except (GetExceptionCode() == STATUS_ARRAY_BOUNDS_EXCEEDED) {
wprintf(L"Caught array bounds exceeded exception\n");
}
}
int main(int argc, char* argv[]) {
wchar_t str[] = L"the quick brown fox jumps over the lazy dog";
memset(out, 0, OUTBUFSIZE);
copyUpper(str, wcsnlen_s(str, 255));
wprintf(L"%s\n", out);
return 0;
}
To take advantage of MPX you need a system that supports the MPX instruction set. Our recently introduced 6th Generation Intel® Core™ Processors include MPX. In addition, you will need the Microsoft® Windows® 10 November 2015 Update ("November Update") and the Intel® MPX Runtime Driver installed. Please refer to the Intel® Memory Protection Extensions Enabling Guide landing page link for further details.
Hope this helps.
I've emailed Jose. I can't help thinking that for most developers, the existing techniques for finding writes/reads of unallocated memory, via libraries such as heapcheck (one I contributed to, based on Bruce Perens' famous Electric Fence) and various other hardware-assisted approaches, are preferable.
While these are not just a matter of recompiling, they do have the advantage of running on today's and yesterday's hardware, and they use hardware rather than software checks, so they don't necessarily incur huge performance overheads…
e.g. users.softlab.ntua.gr/…/HeapCheck.html…/Toggle-hardware-data-read-execute-breakpoints-prog
@Mike: HeapCheck doesn't work on stack memory though.
Why SEH and not language exceptions? Because of C?
Looks good for speeding up debug builds; at least it should be faster than traditional buffer bounds checks.
I don't much like the NOPs on unsupported CPUs. Is there a way to check manually whether MPX is supported by the system?
Alessio is right. We need to be able to check robustly (and failsafely) for MPX support
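For the hardware half of that check, one option is the CPUID instruction: leaf 7, sub-leaf 0 reports MPX support in EBX bit 14. Below is a minimal sketch (the helper names are mine, not from the post). Note that the operating system must additionally enable the bound registers via XCR0, so a positive CPUID result alone does not guarantee MPX is active:

```cpp
#include <cassert>
#if defined(_MSC_VER)
#include <intrin.h>
#else
#include <cpuid.h>
#endif

// Pure helper: MPX support is reported in CPUID.(EAX=07H,ECX=0):EBX, bit 14.
bool mpxBitSet(unsigned int ebx) {
    return (ebx & (1u << 14)) != 0;
}

// Query the CPU itself. The OS must also have enabled the bound
// registers (XCR0 bits 3 and 4), so this check alone is not
// sufficient for MPX to actually be active.
bool cpuSupportsMpx() {
#if defined(_MSC_VER)
    int regs[4] = {0};
    __cpuidex(regs, 7, 0);  // leaf 7, sub-leaf 0
    return mpxBitSet(static_cast<unsigned int>(regs[1]));  // regs[1] = EBX
#else
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return false;  // CPUID leaf 7 not available
    return mpxBitSet(ebx);
#endif
}
```

On MSVC this uses the __cpuidex intrinsic from intrin.h; on GCC/Clang it uses __get_cpuid_count from cpuid.h.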
I created empty project with /d2MPX option and got warning "LNK4075 ignoring /EDITANDCONTINUE due to /OPT:LBR specification".
@Myria
MPX technology is designed to protect static as well as dynamically allocated buffers, such as those coming from a heap. For the dynamically allocated case, MPX-aware allocators/libraries need to be provided. The upcoming GCC 6 will enable MPX library support by default. However, at this point in time we do not have a similar set of library support for the UCRT VC++ libraries.
Custom code can be written to “wrap” heap allocators like malloc with MPX intrinsics in lieu of the default version until a suitable MPX enabled set of libraries or wrappers can be made available. Please see the MPX Enabling Guide for further details.
Below is an example of an MPX wrapped malloc version.
void *__wrap_malloc (size_t n)
{
void *p = (void *) malloc (n); // call original malloc
if (p)
{
return __bnd_set_ptr_bounds (p, n); //bnd: [p, p+n-1]
}
return p;
}
Your feedback on support for MPX enabled libraries is something we are interested in capturing.
@Vertex
Vertex
The feature can be enabled for both debug and release builds. The design of MPX technology is flexible and, as such, you, the customer/developer, need to determine its suitability.
VC++ already provides features such as /GS, /analyze, /guard, secure CRT, and other capabilities. MPX is one additional capability available to help mitigate potential buffer overflow vulnerabilities.
As for whether you should only test/enable the functionality on x64, the answer is no. The issue is documented as issue 1 in the list of known issues in Section 8.5.1 of the MPX Enabling Guide, which I am including below; it only impacts x86 debug builds.
1. Issue: The bound registers may be set to init [0, -1] in debug builds built with /RTC1 or /RTCs (Runtime checks flag). This is because of calls to RTC-related legacy functions _RTC_CheckESP and _RTC_CheckStackVars in the function epilog.
Workaround: Disable runtime checks. In the project set Configuration Properties -> C/C++ -> Code Generation -> Basic Runtime Checks to Default.
@Cleroth
Please see Section 8.5.1 in the MPX Enabling Guide for the known issues and workarounds that exist. Good news is that we have fixes already implemented and are currently under internal review for them to be incorporated in a future Visual C++ update.
@Richard Nutman
Richard
Visual C++ already supports the following MOVBE intrinsics.
unsigned short _load_be_u16(void *);
unsigned int _load_be_u32(void *);
unsigned __int64 _load_be_u64(void *);
void _store_be_u16(void *, unsigned short);
void _store_be_u32(void *, unsigned int);
void _store_be_u64(void *, unsigned __int64);
@Cleroth
A process terminated with 0xc0000409 exit code typically means that there was a stack overflow detected.
msdn.microsoft.com/…/cc704588.aspx
@Juan Rod. Aha! Excellent thanks!
@ Juan Rod Are you sure they have been added ? They're not listed on MSDN anywhere and don't show in any VS2015 headers. Are you confusing with _byteswap_ulong ? Specifically I'm referring to the new Haswell instructions that are faster.
@Richard Nutman
The MOVBE intrinsic declarations are defined in the immintrin.h include header file that ships with Visual Studio 2015 Update 2.
@Michael.
@Mike Diack
Thanks for the feedback.
Does this work on older Intel processors with the help of SDE? I guess not, because of the driver component.
All
We have uploaded a new MPX driver package which resolves the blue screens previously experienced by @Mike and @Mike Diack on systems that do not support Intel MPX Technology for download at the Intel® Memory Protection Extensions Enabling Guide web site:
Driver version is 1.0.0.11, Dated 1/28/2016
Thank you
This feature is not working on the new visual studio update 2?
@Chaos
Could you please elaborate on what is not working? Several fixes were made in Visual Studio Update 2 for some MPX-related bugs.
Adding the /d2MPX option is not working; no MPX instructions are generated. The example works on Update 1, but the same code and the same solution on Update 2 cannot generate any MPX-related instructions.
I finally got a Skylake CPU to test this. Release mode works great with Update 3 RC and performance isn’t too bad. I would love to enable this feature for our unit tests and/or for debug mode, but unfortunately the incompatibility with the other runtime debug checks is a blocker. I really don’t want to disable them and I don’t think we’ll upgrade the CPUs in our build system, where unit tests are run in release mode, anytime soon.
@Marcel Raad
Thank you for the feedback. The incompatibility with runtime debug checks (RTC) has been fixed in Visual Studio 2015 Update 2. The Intel® Memory Protection Extensions Enabling Guide referred above has been updated to reflect this.
Great, thanks!
Hmm, I always get the “array bounds exceeded” exception in debug mode when using boost::call_once, in both 32-bit and 64-bit mode. Release mode works fine.
@Marcel Raad
Thank you. We are able to repro the false bounds violation you reported and I understand that a VC++ compiler fix is currently under validation.
Just curious: are you using the available mechanism, e.g., for submitting bugs?
Sorry, 9 months late and the “email me on reply” didn’t work…
Thanks, that’s great to hear! Then I’ll give it another try now :-)
I’m normally using MS Connect for bug reports, but not for MPX as the blog post said “leave feedback in the comment box below.” :-)
When is library support for the UCRT VC++ libraries planned?
@Eliyahu
A few folks from Intel monitor this blog on an ongoing basis, so please forgive our delayed responses.
Generally speaking, external customer input and requests are always taken into consideration; sharing your specific scenarios in your input is helpful as well in order to build support for your use case. The more feedback/interest we collectively receive, the better. Please use this and other available mechanisms, like User Voice:
I can’t speak for Microsoft on any timelines for support.
Problem Statement
The problem “Numbers with prime frequencies greater than or equal to k” states that you are given an array of n integers and an integer value k. All the numbers in the array are prime. The problem asks you to find the numbers that appear in the array at least k times and whose frequency is itself a prime number.
Example
arr[] = {29, 11, 37, 53, 53, 53, 29, 53, 29, 53}, k = 2
29 and 53
Explanation: 29 and 53 appear 3 times and 5 times respectively; both frequencies are prime and satisfy the condition of appearing at least k times.
Algorithm to find numbers with prime frequencies greater than or equal to k
1. Declare a map and store all the numbers of the array in the map with their frequencies.
2. Traverse the map. For each element:
   1. Check if its frequency is at least k.
   2. Check whether that frequency is a prime number.
   3. If the frequency is prime, print that key; otherwise move on to the next number.
Explanation
We are given an array of integers and a value k; all the numbers in it are prime. We are asked to print every number that appears at least k times and whose frequency is itself a prime number. To solve this we use hashing: since our task is to find how often each element occurs, we store the elements and their occurrence counts in a map.
We traverse the array and count each element's frequency in the map. If a number is a new entry, we insert it with a frequency of 1; if it already exists, we simply increment its stored frequency. In this way we obtain the frequency of every number. Then we need to check, for each number, that its frequency is at least k and that the frequency is a prime number.
For that, we traverse each key in the map and get its frequency. We have written a function that checks whether the frequency is prime: a prime must be greater than 1 and must not be divisible by any number other than 1 and itself; if it is divisible by any such number, the function returns false. If the frequency is prime (and at least k), we print the corresponding key and proceed to the next number.
Code
C++ code to find numbers with prime frequencies greater than or equal to k
#include <iostream>
#include <unordered_map>
using namespace std;

bool checkIfPrime(int n)
{
    if (n <= 1)
        return false;
    for (int i = 2; i < n; i++)
        if (n % i == 0)
            return false;
    return true;
}

void numberWithPrimeFreq(int arr[], int n, int k)
{
    unordered_map<int, int> MAP;
    // count the frequency of each element
    for (int i = 0; i < n; i++)
        MAP[arr[i]]++;
    for (auto x : MAP)
    {
        if (checkIfPrime(x.second) && x.second >= k)
            cout << x.first << endl;
    }
}

int main()
{
    int arr[] = { 29, 11, 37, 53, 53, 53, 29, 53, 29, 53 };
    int n = sizeof(arr) / sizeof(arr[0]);
    int k = 2;
    numberWithPrimeFreq(arr, n, k);
    return 0;
}
29 53
Java code to find numbers with prime frequencies greater than or equal to k
import java.util.HashMap;
import java.util.Map;

public class Frequencies_PrimeNumber {

    public static void numberWithPrimeFreq(int[] arr, int k) {
        Map<Integer, Integer> MAP = new HashMap<>();
        // count the frequency of each element
        for (int i = 0; i < arr.length; i++) {
            int val = arr[i];
            int freq;
            if (MAP.containsKey(val)) {
                freq = MAP.get(val);
                freq++;
            } else {
                freq = 1;
            }
            MAP.put(val, freq);
        }
        for (Map.Entry<Integer, Integer> entry : MAP.entrySet()) {
            int TEMP = entry.getValue();
            if (checkIfPrime(TEMP) && TEMP >= k) {
                System.out.println(entry.getKey());
            }
        }
    }

    private static boolean checkIfPrime(int n) {
        if ((n > 2 && n % 2 == 0) || n == 1) {
            return false;
        }
        for (int i = 2; i < n; i++) {
            if (n % i == 0) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        int[] arr = { 29, 11, 37, 53, 53, 53, 29, 53, 29, 53 };
        int k = 2;
        numberWithPrimeFreq(arr, k);
    }
}
53 29
Complexity Analysis
Time Complexity
Suppose an element occurs with frequency f. Checking whether f is prime by trial division as written takes O(f) time. Since the frequencies of all elements sum to n, the primality checks take O(n) time in total, so even in the worst case the whole algorithm runs in linear time: O(N), where “N” is the number of elements in the array.
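As a side note (not part of the original solution), each primality check can be tightened from O(f) to O(√f) by trial division only up to the square root of the frequency:

```cpp
#include <cassert>

// Trial division up to sqrt(n) is enough: a composite n must have
// at least one factor no larger than its square root.
bool isPrimeFast(int n) {
    if (n < 2) return false;
    for (int i = 2; i * i <= n; i++)
        if (n % i == 0) return false;
    return true;
}
```

Dropping isPrimeFast in place of checkIfPrime does not change which keys are printed, only how fast each frequency is tested.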
Space Complexity
Because of the space required for storing the input, O(N) where “N” is the number of elements in the array.
In a limited partnership, a general partner's minimum participation in profits and losses is:
1%
5%
10%
15%
A
According to tax law, a general partner must have at least a 1% participation in profits and losses for a business to maintain limited partnership status. (20-3)
What type of options would be used to hedge a portfolio of computer stocks?
Interest-rate options
Narrow-based index options
Broad-based index options
Yield-based options
B
A portfolio containing only computer stocks represents just one segment of the market. A narrow-based index also contains stocks from only one segment of the market. (15-36)
Someone who wishes to hedge a portfolio of preferred stocks would buy:
Yield-based option calls
Yield-based option puts
Interest-rate option calls
Interest-rate option puts
I and III
I and IV
II and III
II and IV
B
The prices of preferred stocks are inversely related to the movement of interest rates, as are bonds. Therefore, if the investor was concerned that rising interest rates would erode the value of the preferred stock portfolio, the purchase of an option that does well when interest rates rise would provide an effective hedge. Interest-rate puts (which are price-based options) would gain value when interest rates rise and would be a reasonable choice. Yield-based calls (which are yield-based options) would increase in value when interest rates rise, also creating a viable hedge. (15-48, 15-50)
A client buys 100 shares of XYZ Corporation at $27 per share and writes an XYZ October 30 call at a $3 premium.
The XYZ Corporation 30 call option will expire on:
October 30th
The first day of November
The Saturday following the third Friday of October
The Saturday following the fourth Friday of October
C
Listed equity options expire on the Saturday following the third Friday of the month at 11:59 p.m. Eastern Time. (16-2)
A client buys 100 shares of XYZ Corporation at $27 per share and writes an XYZ October 30 call at a $3 premium.
What is the breakeven point for the writer?
$24
$27
$30
$33
A
To find the breakeven point for the covered call writer, subtract the premium from the cost of the stock. The cost of the stock ($27) minus the premium ($3 per share) equals a breakeven point of $24. (15
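The arithmetic above amounts to a one-line rule; a minimal sketch (illustrative, not exam material):

```cpp
#include <cassert>

// Covered call writer's breakeven: what was paid for the stock,
// reduced by the premium collected for writing the call.
double coveredCallBreakeven(double stockCost, double premiumPerShare) {
    return stockCost - premiumPerShare;
}
```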
An investor is told that the cost basis of a municipal bond purchased at a discount will be adjusted each year until maturity. This adjustment is due to:
Amortization
Accretion
Depreciation
Appreciation
B
When the basis of a discount bond is adjusted upward over time, the process is called accretion. When the basis of a premium bond is adjusted downward over time, the process is called amortization. (21-11)
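A rough sketch of straight-line accretion (illustrative numbers and method; actual IRS rules may require the constant-yield method instead): the discount is added to the cost basis evenly over the bond's remaining life, so the basis reaches par at maturity.

```cpp
#include <cassert>

// Straight-line accretion sketch: spread the discount evenly over
// the years to maturity and add the accrued portion to the basis.
double accretedBasis(double purchasePrice, double par,
                     double yearsHeld, double yearsToMaturity) {
    double annualAccretion = (par - purchasePrice) / yearsToMaturity;
    return purchasePrice + annualAccretion * yearsHeld;
}
```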
A woman invested $200,000 in a real estate limited partnership. Her portion of the income and expenses are as follows:
Gross project revenues $ 180,000
Operating expenses $ 110,000
Interest paid on mortgage $ 45,000
Depreciation $ 40,000
Assuming she has $55,000 in passive income and is in the 28% tax bracket, how much of the passive loss can she utilize against the passive income?
$15,000
$40,000
$45,000
$55,000
A
The calculation for the project is as follows:
$180,000 gross revenue
- $110,000 operating expenses
- $40,000 depreciation
= $30,000 earnings before interest
- $45,000 interest on mortgage
= ($15,000) taxable income
The investor may declare a loss of $15,000 against passive income. (20-5)
Which of the following would qualify for a sales breakpoint on large purchases of mutual fund shares?
A partnership formed to buy the securities
A joint account formed between two unrelated individuals
A husband and wife who are joint tenants with rights of survivorship
An investment club coordinated by a registered representative
C
Quantity discounts are only allowed for individuals and individual entities such as corporations. Partnerships and investment clubs are not entitled to a quantity discount. Joint accounts normally do not qualify for breakpoints except in cases where there is a dependency relationship in the account (e.g., husband and wife). (18-23)
An underwriting syndicate that offered a new issue at $21 could NOT stabilize the offering at:
20.88
21
21.50
21.75
I only
I or II only
III or IV only
I, II, III, or IV
C
An underwriter can stabilize a new issue at or below the offering price. The underwriter could stabilize at 21 or 20.88, but could not stabilize at 21.50 or 21.75. (9-7)
Which TWO of the following would NOT be permitted to purchase shares of an IPO of KMF?
An attorney involved in the new issue of KMF
A portfolio manager of an investment company buying for his personal account
An investment company registered under the Act of 1940 which has some restricted persons as shareholders
The general account of an insurance company
I and II
I and IV
II and III
III and IV
A
Restricted persons include finders and fiduciaries (such as attorneys and accountants) involved in the new issue and portfolio managers who buy and sell securities on behalf of institutional investors. The New Issue Rule also provides a number of general exemptions.
The exemptions allow a new issue defined under the rule to be sold to the following accounts.
Investment companies registered under the Investment Company Act of 1940
The general or separate account of an insurance company
A common trust fund
An account in which the beneficial interest of all restricted persons does not exceed 10% of the account. (This is a de minimis exemption that allows an account owned in part by restricted persons to purchase a new issue if all restricted persons combined own 10% or less of the account.)
Publicly traded entities other than a broker-dealer or its affiliates that engage in the public offering of new issues
Foreign investment companies
ERISA accounts, state and local benefit plans, and other tax-exempt plans under IRS Code 501(c)(3) (9-5)
Interest earned on which of the following would be added to income when calculating the alternative minimum tax?
Limited tax bonds
School bonds
Private activity bonds
Public housing bonds
C
The computation of the alternative minimum tax involves adding tax preference items back to a taxpayer's income. In some cases, interest earned on a private activity bond may be considered a tax preference item. A private activity bond, also called an AMT bond, is a municipal bond with 10% or more of the proceeds generated from the bond going to a project financed by a private entity (e.g., a corporation). (21-3, 8-
An investor purchases a Canadian dollar September 80 call and writes a Canadian dollar September 82 call. This position is a:
Bullish spread
Bearish spread
Long straddle
Credit combination
A
A spread is the simultaneous purchase and sale of options of the same class (both calls or puts), on the same underlying security, with different strike prices and/or expiration months. A debit spread is created when the premium of the option purchased is greater than the premium of the option sold. The September 80 call, which is the right to buy the Canadian dollar at 80, is more valuable than an option that provides the right to buy at 82. Therefore, the call purchase will be the controlling factor in the spread. Since buying calls is bullish, a call debit spread is a bullish strategy. (15-22, 15-29)
Mr. Green, a new client, decides to short 100 shares of TANDY at $18 per share. What is the initial margin requirement for this trade?
$2.50 per share
30% of current market value
$1,800
$2,000
D
Industry rules require a minimum deposit of $2,000 on a short sale when it is the initial transaction in the account. For a purchase, the initial requirement is $2,000 or 100% of the purchase price, whichever is less. (13-15)
A customer owns an AMF October 30 call option. If AMF should split 2 for 1, the customer would own:
1 AMF October 30 call for 100 shares
1 AMF October 15 call for 200 shares
2 AMF October 15 calls each for 100 shares
2 AMF October 30 calls each for 100 shares
C
When a stock splits 2 for 1 (an even split), the number of contracts increase and the strike price is reduced proportionately. The number of shares representing each listed option remains at 100 shares. The customer would now have 2 calls for 100 shares each at the adjusted strike price of $15 or 2 AMF October 15 calls for 100 shares each of AMF. Listed options are adjusted for stock splits, stock dividends, and rights offerings but are not adjusted for cash dividends. (16-9)
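The adjustment rule can be sketched as follows (hypothetical helper, covering only the even x-for-1 split case described above):

```cpp
#include <cassert>

struct OptionPosition {
    int contracts;
    double strike;
    int sharesPerContract;
};

// Even (x-for-1) split adjustment: the number of contracts multiplies,
// the strike price divides proportionately, and each contract still
// represents 100 shares.
OptionPosition adjustForEvenSplit(OptionPosition p, int splitFactor) {
    p.contracts *= splitFactor;
    p.strike /= splitFactor;
    return p;
}
```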
A client wishes to make a purchase based on his belief that interest rates will decline over the next fifteen years. The recommendation of which of the following securities would NOT be consistent with the client's belief?
A 5-year noncallable bond
A TAN
Floating rate notes
A 15-year bond with a 5-year put feature
I and IV only
II and III only
I, III, and IV only
II, III, and IV only
B
Since the client believes interest rates will decline, he wants to lock in a high yield for the next 15 years. A TAN is a short-term security and a floating rate note's interest rate would be adjusted downward with prevailing interest rates. Neither would lock in the high return. The 5-year noncallable bond would lock in a high return without the possibility of being redeemed prior to maturity. The 15-year bond locks in the high return and the 5-year put feature permits the investor to redeem the bond after 5 years or keep it to maturity. This decision would depend on the prevailing rates in 5 years. (8-18, 5-18, 8-20)
Which of the following orders is a specialist prohibited from accepting on his book?
An open (GTC) order
A day order
A market order
A not-held order
I and III
I and IV
II and III
III and IV
D
A specialist may accept open GTC orders and day orders on his book. The specialist may not accept market orders and not-held orders on his book. A not-held order allows a floor broker to use his expertise with regard to the proper time and price for execution of the order. The term "not-held" means the floor broker is not held to a specific price for the stock. The specialist is not involved with not-held orders. Market orders would not be placed on the specialist's book, as the specialist would immediately execute any market order. (11-25)
A corporation has $10,000,000 of a 5% preferred stock issue outstanding. If the corporation were able to replace the preferred stock with $10,000,000 of 5% subordinated debentures, what effect would it have on earnings per share?
Increase
Decrease
Remain the same
Cannot determine
A
The company would pay the same amount ($500,000) whether it was interest on the subordinated debentures or dividends on the preferred stock. However, interest is deducted before taxes while dividends are taken from net income. (22-23)
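The effect can be checked with illustrative numbers (hypothetical EBIT and tax rate, not from the question): because interest is deducted before taxes while preferred dividends come out of after-tax income, the switch adds payment × tax rate to earnings available to common shareholders.

```cpp
#include <cassert>
#include <cmath>

// Earnings available to common shareholders under each capital structure.
// With preferred stock: EBIT is taxed first, then dividends are paid.
double earningsWithPreferred(double ebit, double dividends, double taxRate) {
    return ebit * (1.0 - taxRate) - dividends;
}

// With debentures: interest is deducted before tax.
double earningsWithDebt(double ebit, double interest, double taxRate) {
    return (ebit - interest) * (1.0 - taxRate);
}
```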
A bond counsel would issue an unqualified legal opinion for a municipal bond issue to state that:
The issuer has defaulted on previous issues of bonds
The official statement has not been filed with the SEC
The bonds are very risky and are not a qualified investment for some investors
There are no limitations or pending lawsuits that hinder the issuance of the bonds
D
A bond counsel would render an unqualified legal opinion if there were no situations in existence that could adversely effect the legality of the issue. (10-3)
Which of the following statements best defines the term duration?
A measure of a fixed-income security's relative interest rate risk
A measure of a fixed income portfolio's average yield
The period of time before a fixed-income security will be called
The measure of volatility that compares an equity security to the S&P 500 index
A
Duration measures price sensitivity for fixed-income securities given changes in interest rates. For example, a bond with a 7-year duration would experience a 7 percent change in price for every one percent change in market interest rates. (22-44)
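The rule of thumb in this explanation is the first-order (modified-duration) approximation ΔP ≈ −D × Δy × P, ignoring convexity; a minimal sketch:

```cpp
#include <cassert>
#include <cmath>

// First-order duration approximation: estimated price change for a
// small yield change. Higher-order (convexity) effects are ignored.
double approxPriceChange(double price, double durationYears, double yieldChange) {
    return -durationYears * yieldChange * price;
}
```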
An investor buys an 8% New York City bond at a 10.00 basis. If the bond is held to maturity, the investor's net yield will be:
8%
Between 8% and 10%
10%
Greater than 10%
B
Since the yield (10%) is higher than the coupon (8%), the bond was purchased at a discount. The interest on the bond is exempt from taxation but the discount will represent ordinary income at maturity. Since the investor must pay tax on the ordinary income, the net yield will be between 8% and 10%. (21-11, 5-9)
Four municipal bonds have the same maturity date. Which of the following would cost an investor the greatest dollar amount when purchased?
A 4 3/4% coupon bond offered on a 5.10 basis
A 5 1/4% coupon bond offered on a 5.00 basis
A 5 3/4% coupon bond offered on a 6.00 basis
A 6 1/4% coupon bond offered on a 6.50 basis
B
When bonds are purchased at a discount (below the $1,000 par value) the yield-to-maturity (basis) will be greater than the coupon rate (nominal yield). This is the case in all of the choices listed except where the coupon rate of 5 1/4% is greater than the yield-to-maturity of 5%. This would mean that an investor purchased the bond at a premium (above the $1,000 par value) and paid the greatest dollar amount. (5-10)
A tombstone ad states that Southern California Gas is issuing 8 3/4% first mortgage bonds at a price of 96.35% of their par value.
Which of the following are true?
The bonds are being sold to yield 9.635% annually.
The bonds will pay interest of $87.50 annually.
The bonds are subject to the Trust Indenture Act of 1939.
I and II only
I and III only
II and III only
I, II, and III
C
The rate of interest stated in the tombstone is 8 3/4%. This means the company will pay 8 3/4% of $1,000 or $87.50 per year in interest. The bonds are corporate bonds being issued by Southern California Gas Company (not the State of California) and would be subject to the Trust Indenture Act of 1939. (6-1, 5-7)
Collateralized mortgage obligations can be backed by securities issued by:
FNMA
GNMA
FHLB
FFCB
I and II only
II and III only
I, II, and III only
I, II, III, and IV
A
CMOs can be backed by securities issued by FNMA, GNMA, and FHLMC. Federal Farm Credit Bank (FFCB) arranges loans for agricultural purposes. The Federal Home Loan Banks (FHLB) issues securities and uses the funds to provide liquidity for savings and loan institutions. (7-17)
A customer requests that a broker-dealer sell stock that he owns and use the proceeds of the sale to purchase a different stock. In determining the amount of markup that he will charge, the broker-dealer:
Must consider each transaction separately
May charge a markup on the sale only
Should only consider the amount of money involved in the sale to the customer
As a percentage of the par value
By dividing the annual income by the current price
To the final maturity date
To the call date
An investor buys a 5% municipal bond at 102 1/2. The bond has a yield-to-maturity of 4 1/2%. If the investor holds the bond to maturity, he will have a loss for tax purposes of:
0
$25
$50
$100
A
The IRS requires that a premium paid for a municipal bond be amortized over the life of the bond. At maturity, the investor would have an adjusted cost (after amortization) of par ($1,000). Since this is the amount received at maturity, there is no loss for tax purposes. (21-12)
The Federal Reserve would most likely take measures to ease the money supply in all of the following cases EXCEPT if:
Unemployment is high
The economy is in a recession
The gross national product has been declining
There is high inflation
D
The Fed is likely to ease money supply if the economy is sluggish (decrease in GDP), providing inflation is not a problem. If unemployment is high, the economy is probably either in a recession, a trough, or in the early stages of an expansion. Therefore, stimulating the economy through the easing of credit would be advisable. A high rate of inflation is normally indicative of too much money in circulation chasing a limited supply of goods and services. To combat high inflation, the Fed would normally take measures to reduce (tighten) the money supply. (22-12)
Which of the following would probably have the greatest fluctuation in price when interest rates move up or down?
Commercial paper
Treasury bills
Treasury notes
Treasury bonds
D
Treasury bonds would have the greatest fluctuation in price. They have the longest maturity and would be exposed to the risks of the marketplace for the longest period of time. (5-11, 7-1)
A registered representative is discussing the investment merits of ABC stock with a customer. The registered representative may say:
"Let's buy ABC stock because we expect it to go up 4 points in the next two weeks."
"Our mergers department is working on a leveraged buyout for ABC Corporation. Let's buy it now before it is announced to the public."
"One of our analysts just issued a favorable research report for public use on ABC stock which estimates a 10% growth in earnings over the next three years. It appears to be a good situation for you."
"Let's buy ABC stock because we are in a bull market and all stocks go up in a bull market."
C
This is the only statement that would not be a violation since it is a statement of fact, coupled with an opinion or estimate of what should happen in the future. The other statements are violations because they definitely state an event (the stock will go up) will occur, which cannot be known in advance. Spreading rumors is also a violation. (11-5)
Municipal securities dealers would consider all of the following when determining a markup EXCEPT the:
Dollar amount involved in the transaction
Availability of the securities
Expenses incurred in doing the trade
An investor has purchased two municipal bonds. Bond A is bought at a discount and Bond B is bought at a premium. If the investor holds both bonds to maturity, the tax consequences will be:
Ordinary income on A, no capital loss on B
A capital gain on A, a capital loss on B
A capital gain on A, ordinary income on B
A capital loss on A, a capital loss on B
A
The tax consequences will be ordinary income on Bond A since it was purchased at a discount. No capital loss will be allowed on Bond B because it was the investor's choice or preference to buy the municipal bond at a premium. According to IRS rules, the loss on the premium bond cannot be taken as a loss for tax purposes. (21-11)
What is meant by 4.50% less 3/4 for a municipal bond selling in the secondary market?
$1,000 bond at 4.50 yield - $0.75
$1,000 bond at 4.50 yield - $7.50
$5,000 bond at 4.50 yield - $0.75
$5,000 bond at 4.50 yield - $7.75
B
Quotes for serial municipal bonds are usually per $1,000 and on a yield-to-maturity basis. The "less 3/4" represents the concession or discount offered to another dealer (3/4 point = $7.50). (12-31)
In a new municipal issue, what is a group order?
An order placed by 3 or more members
An institution purchasing bonds from a syndicate
All members will benefit from the order
A dealer buying for a group of investors
C
There are four types of orders that can be placed with a syndicate.
A pre-sale order is any order placed before the syndicate actually purchases the issue from the issuer.
A group order is when all members of the syndicate share in the profit.
A designated order is usually placed by a large institution which designates two or more members to receive credit for the sale.
A member order is any order placed by members for their customers. (10-9)
A floor broker goes to a trading post to execute an order. When told of the floor broker's order, the specialist replies "you're stopped at 21." This means:
The floor broker cannot trade the stock until it hits 21
The floor broker is guaranteed a price of 21
The stock stopped trading at 21
The floor broker will enter a limit order at 21
B
When a specialist stops stock, she is guaranteeing a price. Stopping stock may only be done for a public order. (11-16)
All of the following statements are TRUE regarding yield curves EXCEPT:
In an ascending curve, short-term rates are lower than long-term rates
They are fixed and may only be changed by commercial banks
In a descending curve, short-term rates are greater than long-term rates
In a flat yield curve, both short-term and long-term rates are equal
B
Yield curves are ascending (upward sloping from the shorter to longer maturities) when money is "easy." When this occurs, short-term rates are lower than long-term rates. A descending yield curve, which is indicative of a tight money situation, will show short-term rates higher than long-term rates. A flat yield curve will indicate that short-term and long-term rates are approximately the same. (5-14)
An investor purchases a two-year ABC call. Which of the following accurately describes the exercise of the option?
European style, next business day settlement
European style, three business days settlement
American style, next business day settlement
American style, three business days settlement
D
Long-term anticipation securities (LEAPs) may be exercised on any day prior to expiration (American style). Exercise settlement is in the underlying stock, in three business days. (16-3)
Which of the following terms relates to the graph of optimal portfolios resulting from a comparison of risk and return?
CAPM
Efficient frontier
Duration
Alpha
B
According to modern portfolio theory, a graph of optimal portfolios, known as the efficient frontier, may be created. (22-42)
The market price of XYZ Company's stock is $60. The price-earnings ratio is 10 and earnings per share is $6.00. If the stock were to split 2 for 1, which of the following are true?
The price-earnings ratio will be reduced to 5.
The price-earnings ratio will remain at 10.
The earnings per share will be reduced to $3.00 per share.
The earnings per share will remain at $6.00 per share.
I and III
I and IV
II and III
II and IV
C
A stock split will increase the number of shares outstanding while decreasing the market price of the stock. The split will also have the effect of reducing earnings per share since the number of shares outstanding will increase. The 2-for-1 split will reduce the market price to $30 ($60 x 1/2) and the earnings per share to $3.00 ($6.00 EPS x 1/2). However, the price-earnings ratio (market price/EPS), which was 10 before the split, will remain the same since both the market price and the earnings per share were reduced by the same percentage ($30/$3.00 EPS = 10). (22-28, 4-8)
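The split arithmetic in the explanation above can be checked with a short sketch (function name is ours):

```python
def after_split(price, eps, new_shares, old_shares):
    """Adjust market price and EPS for a stock split.
    A 2-for-1 split halves both, leaving the P/E ratio unchanged."""
    factor = old_shares / new_shares
    new_price = price * factor
    new_eps = eps * factor
    return new_price, new_eps, new_price / new_eps

print(after_split(60, 6.00, 2, 1))  # (30.0, 3.0, 10.0)
```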
The transfer of bonds from one party to another may be accomplished by an endorsement on the back of the bond certificate or through a:
Letter of credit
Letter of notification
Power of attorney
Bond power
D
A bond power may be attached to the bond certificate and used to transfer ownership in lieu of completing (endorsing) the assignment form on the back of the bond certificate. (12-13)
A specialist has an order on its book from a public customer to buy stock at $34.70 and another order from a public customer to sell stock at $34.90. The specialist may:
Buy stock for its own account at $34.65
Buy stock for its own account at $34.75
Sell stock from its own account at $34.90
Sell stock from its own account at $34.95
B
A specialist is not permitted to compete with public orders when trading for its own account. The specialist may buy stock at a higher price or sell stock at a lower price. In doing so, the specialist has narrowed the spread (the difference between the bid and ask). The specialist, buying stock at $34.75, is permitted since this price is higher than the price of the public order ($34.70). The other choices would result in the specialist buying lower or selling at a price equal to or higher than the public customer's order. (11-16)
Which of the following is subject to the Penny Stock Rule?
Unsolicited orders for a non-Nasdaq stock trading at $3.00 per share
Solicited orders for a non-Nasdaq stock trading at $3.00 per share
Unsolicited orders for a Nasdaq stock trading at $3.00 per share
Solicited orders for a Nasdaq stock trading at $3.00 per share
B
Listed stocks and Nasdaq securities are not penny stocks under SEC rules. Unsolicited orders are exempt from the penny stock regulations. (12-20)
Which two of the following conditions are generally TRUE when the yield curve inverts?
Interest rates are relatively low.
Interest rates are relatively high.
Interest rates are expected to fall.
Interest rates are expected to rise.
I and III only
I and IV only
II and III only
II and IV only
C
The yield curve often inverts when interest rates are relatively high but are expected to fall in the near future. In such an environment, investors prefer to lock in relatively high long-term rates. The increased demand for long-term debt drives these prices up (and their yields down), as compared to short-term debt, causing the yield curve to invert. (5-14)
Why is the maturity of commercial paper 270 days or less?
Because it coincides with the historical 9-month business cycle
It is an attractive alternative to 6-month Treasury bills
Because short-term corporate debt of 270 days or less is exempt from registration
All of the above
C
Commercial paper has a maximum maturity of 270 days so that it will be exempt from the registration requirements of the Securities Act of 1933. (9-19, 7-21)
A T-bond put option is quoted at 3-28. The purchase of one option at this price would require payment of:
$328.00
$387.50
$3,280.00
$3,875.00
D
A purchase of a put option at 3-28 is $3,875. Three (3) points equals $3,000 and 28/32 equals $875. Each 1/32 of a point equals $31.25. Therefore, 28 x $31.25 equals $875. (15-47)
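The points-and-32nds arithmetic in the explanation above can be sketched as follows (the function name is illustrative):

```python
def t_bond_option_premium(points, thirty_seconds):
    """Dollar premium of a T-bond option quoted in points and 32nds.
    One point equals $1,000, so each 1/32 of a point equals $31.25."""
    return points * 1000 + thirty_seconds * 31.25

print(t_bond_option_premium(3, 28))  # 3875.0
```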
An investor has purchased 1,000 shares of XYZ stock. Which of the following option transactions would provide the most effective means of reducing the cost of the stock?
Buying 10 XYZ puts
Selling 10 XYZ puts
Buying 10 XYZ calls
Selling 10 XYZ calls
D
The investor would take in additional income by selling a call option. If the investor sold puts, she would be obligated to purchase XYZ stock if the price fell. The most effective means of reducing the price of a stock purchase is to write a covered call. (15-2)
Volume and holding period restrictions do not apply to the resale of private placements (Reg D offerings) when:
Purchasers' representatives assist investors
Both parties are accredited investors
The transaction is initiated by a registered principal
The purchaser is a qualified institutional investor
D
Under Rule 144A of the Securities Act of 1933, the owner of securities obtained through a private placement may resell those securities to a qualified institutional buyer without the volume and holding period restrictions of Rule 144. (9-21)
A corporation has $7,000,000 in income after paying preferred dividends of $500,000. The company has 1,000,000 shares of common stock outstanding. The market price of the stock is $56. What is the price-earnings ratio?
6.5 times
7.5 times
8 times
8.6 times
C
The price-earnings ratio is the market price ($56) of the stock divided by the earnings per share ($7) which equals 8 times. The earnings per share of $7.00 is found by dividing the $7,000,000 of available income to the common stockholders by the 1,000,000 shares of common stock outstanding. (22-28)
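The two-step calculation above (EPS first, then P/E) can be sketched in Python (function name is ours):

```python
def price_earnings_ratio(market_price, earnings_after_preferred, shares_outstanding):
    """P/E ratio: market price divided by earnings per share, where EPS is
    income available to common stockholders divided by shares outstanding."""
    eps = earnings_after_preferred / shares_outstanding
    return market_price / eps

print(price_earnings_ratio(56, 7_000_000, 1_000_000))  # 8.0
```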
The current market price of XYZ Corporation stock is $52. A customer enters an order to sell 100 shares of XYZ Corporation at $50 stop but will not accept less than $49. He has entered a:
Stop order
Limit order
Market order
Stop-limit order
D
A sell order with a stop at one price (to sell at $50) and also with a limit price ($49), is called a stop-limit order. A round lot sale at or below $50 would activate the order, but then the stock would have to be sold at the limit price of $49 or better for the order to be executed. Sell stop-limit orders are entered below the market. (11-23)
A member of a municipal new issue syndicate is entering an order for an accumulation account being used for a unit investment trust that the firm underwrites. This order must be entered as a(n):
Pre-sale order
Related portfolio order
Contingency order
AON order
B
MSRB rules require a syndicate member to disclose to the syndicate an order for a unit investment trust or an accumulation account to be used for a unit investment trust. The disclosure is accomplished by entering the order as a related portfolio order. (10-11)
According to MSRB rules, which of the following is TRUE regarding a secondary market joint account?
It is a violation of the rules if it contains less than three members.
Its members are not permitted to disseminate more than one quote relating to the account's securities.
It needs to submit an underwriting fee to the MSRB.
It would be considered to have a control relationship with the issuer.
B
Members of a secondary market joint account must publish the same offering (quote). (12-32)
Mr. Thomas calls his registered representative with an order to buy up to 2,000 shares of XYZ at $35 per share right now and do not leave the unexecuted portion on the specialist book. Mr. Thomas has entered
a(n):
Order that cannot be accepted
Immediate-or-cancel order
Limit order
Day order
B
An order that dictates to fill as much of the order as you can right now and cancel the rest is called an immediate-or-cancel order. Limit orders are placed as either day or GTC orders and the unexecuted portions are placed on the specialist book. Mr. Thomas entered this kind of order when he said he wanted to buy 2,000 shares of XYZ. (11-25)
According to MSRB rules, which of the following is FALSE regarding written complaints?
A report must be sent immediately to the MSRB
The complaint must be properly recorded and retained by the firm for six years
The firm must send a copy of the MSRB investor brochure
A principal must take appropriate action
A
A written complaint must be properly recorded, retained by the firm, and reviewed by a principal, and the firm must send the customer a copy of the MSRB investor brochure. There is no requirement to send an immediate report to the MSRB.
An option contract for RFQ is for 108 shares. This would most likely be a result of which of the following circumstances?
Never, since an option contract always represents 100 shares
If there had been a stock split
If there had been a stock dividend
If there had been a cash dividend
C
The number of shares would be adjusted for a stock dividend or an odd stock split. 108 shares would most likely represent an 8% stock dividend. (16-10)
An investor has been making payments into a variable annuity for the last 20 years. The investor decides to annuitize and selects a straight-life payout. Which TWO of the following statements are TRUE?
The investment risk is assumed by the insurance company.
The investment risk is assumed by the customer.
The amount of the payment to the customer is guaranteed by the insurance company.
The amount of the payment to the customer is not guaranteed.
I and III
I and IV
II and III
II and IV
D
Unlike a fixed annuity, the customer assumes the investment risk in a variable annuity. The amount of the payment depends on the performance of the separate account. The payment could increase, decrease, or remain the same, since the amount of the payment is not guaranteed. (19-2)
Which of the following best describes painting the tape?
A market maker's failure to honor a firm quote
Individuals entering into transactions in which ownership does not actually change, in order to give the impression of trading volume in a security
Employees of a broker-dealer purchasing shares of a hot new issue at the public offering price for their own account
A registered representative with discretion over a client's account conducting excessive trading to generate commissions
B
Painting the tape is a technique whereby individuals acting in concert repeatedly sell a security to one another without actually changing ownership of the securities. This is intended to give an impression of increased trading volume. Regulators consider this a form of market manipulation. (11-5)
If the FOMC enters into a repurchase agreement, what is the immediate effect on the amount of money in the banking system?
Has no effect on the amount
Decreases the amount
Increases the amount
May increase or decrease the amount
C
In a repurchase agreement (Repo), the Federal Open Market Committee would first buy the government securities. This action adds money to the banking system. A short time later the dealer would repurchase the securities from the Fed. (22-11)
Because of its multiplier effect on the economy, the Federal Reserve Board is reluctant to change:
The reserve requirement
Margin requirements
The discount rate
Its open market policy
A
Changing bank reserve requirements has a multiplier effect. This means that a small change in the reserve requirement can have a large effect on the money supply and the economy. This makes the results of changing the reserve requirement difficult to control, and the FRB is hesitant to use this tool. (22-9)
Wireless Communications is offering 2,000,000 common shares (par value $.10) at $15. Which two of the following describe the financial impact on the company?
An increase in paid-in capital
A reduction in the long-term debt ratio
A reduction in liquidity
An increase in fixed assets by $30,000,000
I and II
I and IV
II and III
III and IV
A
The company will receive cash from the sale of the stock, so liquidity will increase. The common stock account and the paid-in capital account, which are part of stockholders' equity, will also increase. The long-term debt ratio will fall as the equity capital rises and since the company is raising cash, current assets will increase. Finally, fixed assets will be unchanged. (22-26, 22-30)
The 5% markup policy would apply to a:
Municipal bond trade
Transaction on the NYSE
Proceeds transaction
Purchase of mutual fund shares
C
The 5% markup policy applies to secondary-market transactions such as a proceeds transaction. It does not apply to municipal securities, exchange transactions, or securities sold by prospectus, such as mutual fund shares.
An investor purchases 200 shares of STC at $35 and subsequently purchases 2 STC Jan 35 puts at 2.
At what market price must STC trade for the investor to have a profit?
32
34
36
38
D
If an investor is long stock and long a put, he will have a profit if the market price exceeds the cost of his stock plus the premium for the option. The stock must trade above 37 (35 cost + 2 premium). (15-4)
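The breakeven arithmetic above (stock cost plus put premium) can be sketched as follows (function name is illustrative):

```python
def long_stock_long_put_breakeven(stock_cost, put_premium):
    """Breakeven for long stock hedged with a long put:
    stock cost plus the premium paid for the put."""
    return stock_cost + put_premium

be = long_stock_long_put_breakeven(35, 2)
print(be)  # 37; the investor profits only above this price
```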
The Board of Directors of a corporation is responsible for establishing all of the following EXCEPT the:
Declaration date
Payable date
Ex-date
Record date
C
The ex-dividend date is standardized in the securities industry and is normally two business days prior to the record date. (4-7)
Mrs. Jones is interested in selling 500 shares of her REIT. The sale would be handled in a manner similar to the:
Redemption of an open-end fund
Sale of a fund listed on the NYSE
Liquidation of a real estate limited partnership
Redemption of EE bonds
B
There is a secondary market for REITs (Real Estate Investment Trusts); the vast majority trade on the NYSE with prices determined by supply and demand. Closed-end funds are funds that are often bought and sold on the NYSE that trade in a similar manner. (18-30)
An investor who sells a July 50 put and buys a July 60 put on the same stock is establishing:
A bull spread
A bear spread
A long straddle
A short straddle
B
A bear spread always involves buying the higher exercise price and selling the lower exercise price. This applies to both call spreads and put spreads.
A bull spread always involves buying the lower exercise price and selling the higher exercise price. This applies to both call spreads and put spreads. (15-29)
Supplemental documentation would be required when opening all of the following types of accounts EXCEPT:
Guardian
Partnership
Uniform Transfers to Minors
Account for an estate
C
A copy of the court appointment of the guardian is necessary for choice (A). To open a partnership account, a copy of the partnership articles should be obtained. In the case of an account for an estate, documentation should be obtained that shows the executor or administrator is properly authorized. Many new account forms contain UTMA/UGMA as one of the standard ownership choices, making additional documentation unnecessary. (2-8)
A type of new municipal issue sale where the underwriter is appointed by the issuer is a(n):
Eastern Account
Western Account
Negotiated Issue
Competitive Issue
C
For a negotiated issue, the underwriter is appointed by the issuer. Eastern and Western accounts are types of syndicates formed by underwriters. (10-2, 9-4)
Which two of the following are normally TRUE of money market mutual funds?
They are load funds.
They are no-load funds.
Dividends are computed daily and credited monthly.
Dividends are computed weekly and credited monthly.
I and III
I and IV
II and III
II and IV
C
Money market funds are normally no-load, open-end investment companies. Their portfolio consists of short-term fixed income securities such as Treasury bills, commercial paper, and bankers' acceptances. Dividends on money market fund shares are usually computed daily and credited monthly. Investors may elect to reinvest the dividends each month, thereby buying more shares. (18-8)
A municipal bond with an 8% coupon and eight years to maturity is purchased for 106. If sold six years later, what would be the cost basis?
100
101.50
104.50
106
B
When a bond is purchased at a premium (above par value), the premium must be amortized (reduced) over its life. The premium in this example is six points which must be amortized over its eight-year life. It must be amortized 3/4 point each year (6 points divided by 8 years to maturity). After six years, it would be reduced by 4 1/2 points (3/4 x 6). Its cost basis would therefore be 101 1/2 (106 original cost - 4 1/2 points amortized premium). (21-12)
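The straight-line amortization in the explanation above can be sketched in Python (function name is ours; prices are in points of par):

```python
def amortized_cost_basis(purchase_price, par, years_to_maturity, years_held):
    """Straight-line amortization of a municipal bond premium.
    The premium is written down evenly over the bond's remaining life."""
    premium = purchase_price - par
    return purchase_price - premium * years_held / years_to_maturity

print(amortized_cost_basis(106, 100, 8, 6))  # 101.5
```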
A broker-dealer has a financial advisory relationship with an issuer of municipal securities. Under MSRB rules, which TWO of the following statements are TRUE?
The broker-dealer is permitted to act as a syndicate member on securities issued by this municipality.
The broker-dealer is not permitted to act as a syndicate member on securities issued by this municipality.
The financial advisory relationship between the broker-dealer and the municipality must be in writing.
The financial advisory relationship between the broker-dealer and municipality is not required to be in writing.
I and III
I and IV
II and III
II and IV
C
Under MSRB rules, a financial advisory relationship with an issuer must be evidenced in writing. A broker-dealer that has such a relationship is generally prohibited from acting as a syndicate member for that issuer's securities.
Gross Domestic Product (GDP) has declined for two consecutive quarters in the U.S. Which of the following industries would most likely be negatively affected by this downturn in the economy?
Cosmetics
Transportation
Food
Medical
B
Two consecutive quarters of declining GDP figures would be considered recessionary by most economists. Transportation stocks (e.g., railroads, trucking, airlines) are cyclical and the performance of these companies would be directly affected by this event. (22-2)
What is the maximum allowable percentage that can be sold above the original size of the offering through a green shoe option?
10%
15%
20%
25%
B
The overallotment provision of an underwriting agreement may contain a green shoe clause, which allows the syndicate to increase the number of shares sold by 15% over the original number of shares in the offering. (9-7)
Fred's Auto Centers is looking to raise $10 million to expand its business. The company has entered into an agreement to raise the capital through Winco Securities, a local investment banking firm. Winco Securities has made no guarantee that it will be able to raise the full amount of the offering. Which of the following statements regarding this scenario is/are true?
This is an example of a firm commitment underwriting.
This is a best-efforts underwriting.
Winco is acting as an agent for Fred's Auto Centers.
Winco is acting as principal in this underwriting.
II only
I and III only
I and IV only
II and III only
D
The underwriting is being done best-efforts, since no guarantee to raise the $10 million has been made by Winco Securities. Winco is acting as an agent in the transaction because any unsold shares will be retained by Fred's Auto Centers. Winco will be compensated only for the shares it sells and assumes no liability in the deal. (9-2)
What department or section of the brokerage firm would be responsible for tendering stock?
P&S Department
Margin Department
Cashier's Department
Reorganization Department
D
The Reorganization ("Reorg") department handles the exchange of one security for another (i.e., tender offers or converting rights into stock). The P&S Department normally handles the function of computing and comparing trades. The Margin Department handles the enforcement of Regulation T. The Cashier's Department is concerned with the handling and protection of securities. (11-18)
Concerning mutual funds, what is meant by net investment income?
Interest only
Dividends only
Interest + dividends - expenses
Dividends + capital gains - expenses
C
Net investment income of a mutual fund is derived from the total interest and dividends earned by the fund's portfolio minus the expenses of the fund. (18-29)
Which of the following statements are TRUE regarding a limited partnership?
There may only be one general partner.
There must be more than one limited partner.
The partnership does not pay income taxes.
It is a form of ownership that passes its profits and losses through to its participants.
III and IV
I, II, and III
I, II, and IV
II, III, and IV
A
Limited partnerships provide a form of ownership that does not pay income taxes and passes its profits and losses through to its participants. There are no rules, however, regulating the number of limited and general partners that a limited partnership must contain, as long as there is at least one of each. (20-1)
What is the basic balance sheet equation?
Total Assets + Total Liabilities = Stockholders' Equity
Total Liabilities = Total Assets + Stockholders' Equity
Total Assets = Total Liabilities - Stockholders' Equity
Total Assets = Total Liabilities + Stockholders' Equity
D
The balance sheet equation is Total Assets = Total Liabilities + Stockholders' Equity. (22-20)
ABC Brokerage, a broker-dealer, sells 200 shares of stock to a customer from its own inventory. In this transaction, ABC acted as
a(n):
Agent
Broker
Principal
Underwriter
C
When a broker-dealer sells securities to a customer from its own account (inventory) or buys securities from a customer for its own account, it is acting as a principal or dealer. If the firm matched up a buyer and a seller without involving its own account, it would be acting as an agent or broker. (11-2)
A 6% bond is selling at a 6.25% basis. The bond will mature in 25 years and has 3 call dates. Which of the following would give the investor the best return?
If the bond is called after 10 years at 103
If the bond is called after 15 years at 102
If the bond is called after 20 years at 101
If the bond is held to maturity
C
The bond is selling at a discount. The first call in 10 years at 103 would give the investor the best return. The investor receives the highest call price in the shortest number of years. (5-17, 5-9, 5-10)
Interest rates had been very high. During the past three years, rates have decreased dramatically, reaching a historically normal level. The present yield curve would most likely be:
Ascending
Positive
Inverted
Negative
I and II
I and III
II and III
III and IV
A
If rates have declined for the past three years and reached a normal level, the present yield curve would most likely be ascending which is also referred to as positive or upward sloping. (5-14)
Which of the following could not be accomplished in a cash account?
Covered call or put writing
The sale of preferred stock
Selling short
The purchase of options
C
Selling short can only take place in a margin account. (2-10)
Which of the following is another way of expressing the earnings multiple?
Debt-to-equity ratio
Dividend payout ratio
Price-earnings ratio
Operating profit ratio
C
The earnings multiple is called the price-earnings ratio. (22-28)
Duties of the specialist on the NYSE include which of the following?
Maintaining a fair and orderly market in selected securities
Appointing floor brokers
Resolving trade imbalances
Arbitrating disputes between member firms
I and II only
I and III only
I, III, and IV only
I, II, III, and IV
B
The responsibilities of the specialist include resolving trade imbalances, which may result from a temporary lack of supply or demand in a particular security. The specialist's role also includes maintaining liquidity and a fair and orderly market. Floor brokers are not appointed by the specialist. Arbitration disputes between member firms are handled under the Arbitration Code. (11-16)
What type of risk do zero-coupon bonds eliminate?
Credit risk
Purchasing power risk
Reinvestment risk
Market risk
C
Zero-coupon bonds are issued at a discount and do not pay semiannual interest. Therefore, there are no interest payments to reinvest, eliminating reinvestment risk. When investing in fixed-income investments, one of the uncertainties is whether interest rates will allow an investor to realize the total return that was calculated at the time of the investment (yield to maturity). Zero-coupon bonds do not have reinvestment risk, but they do have extreme interest-rate risk because the bonds' duration will equal the years to maturity. (5-4, 5
Where could a broker-dealer find bidding details for a new municipal bond issue?
Bond Buyer New Issue Worksheets
OTCBB
Munifacts
A
The Bond Buyer New Issue Worksheets provide details of forthcoming new municipal issues that dealers use to prepare their bids.
An investor has sold a stock short. If the present market value is $2.00 per share, the minimum maintenance requirement would be:
50%
$2.50 per share
$2.00 per share
30%
B
When selling short securities that have a market value less than $5 per share, a minimum maintenance requirement of $2.50 per share or 100% of the market value, whichever is greater, applies. Since $2.50 a share is greater than $2.00 per share, this is the correct answer. (13-16)
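The maintenance rule above ($2.50 per share or 100% of market value, whichever is greater, for stocks under $5) can be sketched as follows (function name is illustrative):

```python
def short_maintenance_per_share(market_value):
    """Minimum maintenance per share on a short position in a stock
    trading under $5: the greater of $2.50 or 100% of market value."""
    return max(2.50, market_value)

print(short_maintenance_per_share(2.00))  # 2.5
```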
Which of the following represents the percentage of new municipal issues brought to market during a particular week that has already been sold?
The Bond Buyer Index
The Blue List
The Visible Supply
The Placement Ratio
D
The placement ratio represents the percentage of new municipal bond issues of $5,000,000 or more brought to market during a particular week that has already been sold.
Which bond has the most interest-rate risk?
A 3-month Treasury bill
A zero-coupon 30-year Treasury STRIPS
A 6%-coupon 30-year Treasury bond
A 3%-coupon 5-year Treasury note
B
The bond with the most interest-rate risk or price volatility is the one with the longest maturity and lowest coupon. (5-11)
To compute equity in both a short and long margin account, the formula would be:
The long market value plus the credit balance minus the short market value minus the debit balance
The long market value minus the credit balance minus the short market value minus the debit balance
The long market value plus the debit balance minus the short market value minus the credit balance
The long market value plus the credit balance plus the debit balance minus the short market value
A
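As a quick check, the combined-equity formula can be sketched in Python (the helper name is mine):

```python
def account_equity(long_mv, credit, short_mv, debit):
    """Combined equity: long market value + credit balance
    - short market value - debit balance."""
    return long_mv + credit - short_mv - debit

# Long-only account: $20,000 market value against a $9,000 debit balance
print(account_equity(20_000, 0, 0, 9_000))  # 11000
```

A short-only account simply leaves the long market value and debit balance at zero.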
The long market value plus the credit balance minus the short market value minus the debit balance equals the equity in both a long and short margin account. (13-19)
An investor sells uncovered calls and, just prior to their expiration, sells short the underlying stock. The intent is to keep the price from rising above the exercise price. Such an action is called:
Pegging
Supporting
Capping
Frontrunning
C
Writers of uncovered calls will benefit if they can prevent the price of stock from rising above the exercise price. They could accomplish this by capping the stock (entering sell orders to prevent the price from rising above a certain level). Capping is considered a manipulative activity and is a violation of securities law. (11-5)
Where are the priority provisions for allocating bonds listed for a new municipal bond offering?
Official statement
Underwriters agreement
Offering circular
Settlement letter
B
The priority provisions of how bonds will be allocated to members of the municipal bond syndicate will be indicated in the underwriters agreement (syndicate letter). The usual priority of orders is pre-sale, group net, designated, and member orders at the takedown. (10
An investor purchases a $100m face value municipal bond with a 5-year maturity at 105. After two years, the bond is sold at 95. For tax purposes, the investor has a(n):
$2,000 loss
$4,000 loss
$8,000 loss
$10,000 loss
C
When a municipal bond is purchased at a premium, the bond's premium must be amortized to find an adjusted cost basis. If the bond is sold above the adjusted cost basis, the result is a capital gain. If the bond is sold below the adjusted cost basis, the result is a capital loss. If the bond is held to maturity, there is neither a loss nor a gain for tax purposes. This is because the adjusted basis would equal the par value after the premium is amortized.
This bond is purchased at $105,000 with a 5-year maturity. The premium of $5,000 ($105,000 - $100,000 = $5,000) must be amortized over a 5-year period ($5,000 divided by 5 years equals $1,000 per year). Therefore, each year the original cost of the bond is reduced by $1,000.
If the bond is sold after 2 years, the adjusted cost basis is $103,000 ($105,000 - $2,000 = $103,000). Since the bond is sold at $95,000, there is a capital loss of $8,000 ($103,000 - $95,000). (21-12)
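The straight-line amortization arithmetic above can be sketched in Python (the helper name is mine):

```python
def adjusted_basis(purchase_price, par, years_to_maturity, years_held):
    """Straight-line amortization of a bond premium toward par."""
    annual_amortization = (purchase_price - par) / years_to_maturity
    return purchase_price - annual_amortization * years_held

basis = adjusted_basis(105_000, 100_000, 5, 2)  # 103000.0
loss = basis - 95_000                           # 8000.0 capital loss
print(basis, loss)
```

Held to maturity (years_held equal to years_to_maturity), the adjusted basis lands exactly at par, which is why there is then no gain or loss.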
Roundville Bank is considering an investment in Roundville County bonds. The bonds contain a provision which permits banks to deduct 80% of the interest cost being paid to depositors on the funds used to purchase the bonds. These securities are known as:
Alternative minimum tax bonds
Bank qualified bonds
Private activity bonds
Moral obligation bonds
B
Bank qualified municipal bonds allow banks to deduct 80% of the interest cost paid to depositors on the funds used to purchase the bonds. This is done to encourage banks to invest in municipal securities. To qualify, a municipality may only issue up to $10,000,000 annually. (8-5)
Which of the following would give the best indication of current interest rates on revenue bonds?
Visible supply
Placement ratio
List of 20 bonds
List of bonds with 30 year maturities
D
The Bond Buyer computes the Revenue Bond Index which is the average yield of 25 revenue bonds with 30-year maturities. (12-27)
An investor is interested in purchasing an interest in a real estate limited partnership. To exhibit suitability, the investor could provide:
Past tax returns
Notarized document attesting that the investor is an expert in managing real estate
Executed copies of subscription agreements from other programs in which he is a limited partner
A completed subscription agreement
A.
When investing in a DPP the customer must verify that he meets all suitability standards. This can be accomplished by furnishing documents such as past tax returns and a statement of net worth. (20-3)
An insider of XYZ Corp. buys company stock in the open market at $63/share. Ten months later, the insider wishes to sell the stock at the current market price of $68/share. Which two of the following statements are TRUE regarding this transaction?
The sale is subject to the six-month holding period under Rule 144.
This sale is not subject to the six-month holding period under Rule 144.
The sale is subject to the volume limitations under Rule 144.
The sale is not subject to the volume limitations under Rule 144.
I and III
I and IV
II and III
II and IV
C
Rule 144 requires that restricted (unregistered) stock be held for six months before it can be resold. Control stock (registered stock purchased by insiders) is not subject to a holding period requirement under Rule 144. Both restricted and control stock are subject to the volume limitations under the Rule. (9-21)
An investor purchasing $1,000,000 par value of Treasury notes at a price of 101-03 would pay:
$1,010,300
$1,010,937.50
$10,101,300
$10,109,375
B
Treasury notes are quoted as a percentage of par in 32nds of a percent. A quote of 101-03 equals 101 3/32% or 101.09375% (3/32 = .09375). 101.09375% x $1,000,000 = $1,010,937.50. (7-2)
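The 32nds arithmetic generalizes to a small parser (the helper name and the dash-separated quote format are mine):

```python
def treasury_price(quote, face):
    """Convert a Treasury quote such as '101-03' (handle and 32nds)
    into a dollar price for the given face amount."""
    handle, thirty_seconds = quote.split("-")
    pct = int(handle) + int(thirty_seconds) / 32
    return face * pct / 100

print(treasury_price("101-03", 1_000_000))  # 1010937.5
```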
XYZ Corporation has 4,000,000 shares of common stock authorized and 2,500,000 shares issued of which 100,000 are treasury stock. The corporation is issuing an additional 1,000,000 shares through a standby underwriting. If only 600,000 shares are subscribed to in the corporation's offering, the number of outstanding shares will:
Remain the same since the entire issue was not fully subscribed
Increase by 600,000 to 3,000,000 shares
Increase by 600,000 to 3,100,000 shares
Increase by 1,000,000 to 3,400,000 shares
D
Since 100,000 shares of the 2,500,000 shares issued is treasury stock (repurchased by corporation), there are 2,400,000 shares outstanding prior to the new issue. On a standby underwriting, the underwriting syndicate agrees to purchase any shares which the corporation does not sell. Since the corporation only sold 600,000 shares, the underwriters would purchase the remaining 400,000 shares. After the new issue, there would be 3,400,000 shares outstanding (2,400,000 + 1,000,000). (9-2, 4-6)
A registered representative writes a letter to see if his clients have any interest in trading options. The letter is generic and describes the advantages and disadvantages of options trading. This letter:
Must be approved prior to use by a ROP
Need not be approved prior to use so long as it does not contain recommendations
Must be accompanied by a risk disclosure document
Need not be accompanied by a risk disclosure document
I and III only
I and IV only
II and III only
II and IV only
B
All advertising, sales literature, and educational material must be approved by a ROP prior to being sent to a customer. Since there are no specific recommendations, the OCC disclosure document does not need to precede or accompany the letter. However, the customer must receive the risk disclosure document at or before the account is approved for options trading. (16-12)
A customer contends that his registered representative made unauthorized trades in his account, and will take this matter to an arbitration panel. Regarding the makeup of this panel, which of the following statements is TRUE?
A majority of the arbitration panel must come from outside the securities industry.
A majority of the arbitration panel will come from within the securities industry.
All arbitrators must come from inside the securities industry or must be attorneys.
All arbitrators must come from outside the securities industry.
A
Under the Code of Arbitration, if a public customer takes a member firm to arbitration to resolve a dispute, the majority of the panel must come from outside of the securities industry unless the customer requests a panel with a majority of industry arbitrators. Neither the broker-dealer nor the customer may actually pick the arbitrators and arbitrators do not have to be attorneys. (1-14)
Mr. Jones bought an 8% debenture at a 7.20 basis. If the bonds are currently trading 15 basis points higher:
Mr. Jones' yield-to-maturity has increased to 7.35%
The bond's coupon has increased to 8.15%
The bond's market price has decreased
Mr. Jones' investment has not been affected
C
When the investor bought the bond, he established a yield-to-maturity of 7.20%. This will remain the same over the life of his investment. The coupon rate was established when the bonds were issued and will never change. However, when yields in the market increase, the market price of outstanding bonds will decrease. (5-7)
Mr. Blue's margin account has a market value of $20,000 and a debit balance of $9,000.
If Mr. Blue purchases $2,000 of options, he would have to deposit:
0
$1,000
$2,000
$3,000
B
The margin requirement when purchasing options is 100% of the purchase price (premium). Since the purchase price of the options is $2,000, Mr. Blue may use the $1,000 SMA and would be required to deposit an additional $1,000. The SMA is found by subtracting the required equity, $10,000 ($20,000 x 50%) from the current equity in the account ($11,000). (13-20, 16-5)
If ABC Corporation pays a $0.25 dividend to its shareholders, all of the following would result EXCEPT:
Retained earnings remain the same
Working capital is decreased
Current assets are decreased
Current liabilities are decreased
B
When a corporation pays a dividend, cash (a current asset) is reduced and the dividend is no longer a current liability (liabilities are reduced). Since current assets (CA) and current liabilities (CL) are reduced by the same amount, working capital (CA - CL) remains the same. The total assets and total liabilities are also reduced by the same amount and therefore retained earnings (which is part of stockholders' equity) will not be affected. Retained earnings are reduced when the corporation declares the dividend. (22-29)
A registered representative receives an order from the President of XYZ Corporation to sell unregistered XYZ shares. The client purchased the shares in a private placement 90 days ago. This order:
Will require the filing of Form 144 with the SEC
May be executed without any restrictions
Must be approved by a principal prior to execution
Is a violation of Rule 144 if executed
D
According to Rule 144, an affiliated person (e.g., the president of a company) must hold unregistered (restricted) stock for at least six months before it may be sold. Since the President of XYZ Corporation only owned the stock for 90 days, the order to sell would violate Rule 144 if executed. (9-20)
A customer's account does not require approval to trade penny stocks if the:
Trade is recommended
Trade is not recommended
Account is established
Account is new
I and III
I and IV
II and III
II and IV
C
The approval of an account to trade penny stocks is not required if the account has been in existence for more than one year or if all transactions in penny stocks are non-recommended. (12-21)
To calculate the total interest cost to the issuer for a competitive bid, the syndicate would need:
The total par value of the offering
The maturity schedule and coupon rates
Whether the bid includes any discounts or premiums
The dated date for the bonds
I and II only
III and IV only
I, II, and III only
I, II, III, and IV
D
When computing a competitive bid, the broker-dealer must calculate the total interest cost to the issuer. To calculate total interest cost, the coupon (interest rate), par value, maturity schedule, amount of any discount or premium, and dated date are required. (10-7)
Kyle, a client at TLC brokerage firm, anticipates a decline in the earnings of LPOP. LPOP is a thinly traded issue. Which of the following statements BEST describes what the RR should disclose to Kyle?
The stock may be difficult to sell short because the shares may not be available to borrow.
All securities may be sold short provided the client has a margin account.
As long as the order ticket is marked sell long, the stock could be sold short.
Exchange-traded put options are available on all securities and would be a less risky method to profit.
A
A client may sell short or buy a put to profit when a decline in the value of a security is anticipated. In order to sell short, the broker-dealer is required to borrow the security. Although short sales may only be executed in a margin account, if an issue is thinly traded it may be difficult or impossible to borrow the security. A put option may be an attractive alternative to selling short; however, put options are unlikely to be available on a thinly traded security. (11-4, 12-15, 13-14, 14-15)
In an oil and gas drilling program, a sharing arrangement where the sponsor pays a small amount of all of the program's costs in return for a larger amount of the revenues is known as:
Functional allocation
Overriding royalty
Reversionary working interest
Disproportionate
D
In a disproportionate sharing arrangement, the sponsor (general partner) shares in the costs of the program and receives a portion of the profits. It is disproportionate because the percentage share of profits is much larger than the percentage share of costs. (20-17)
A self-employed individual has total income of $120,000. If the individual wants to open a Keogh plan:
It must be opened by the time he files his tax return
It must be opened by the end of the tax year
A maximum deductible contribution of $24,000 is permitted
A maximum deductible contribution of $51,000 is permitted
I and III
I and IV
II and III
II and IV
C
A Keogh plan must be opened by the end of the tax year (December 31st). However, contributions are permitted until the filing deadline for the tax return (April 15th). A self-employed individual may deduct 20% of self-employed income or $51,000, whichever is less, to a Keogh. 20% of $120,000 is $24,000 and would be the maximum allowable deductible contribution. (17-10)
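Using the figures stated above (20% of self-employed income, capped at $51,000), the limit can be sketched as:

```python
def keogh_max_deduction(self_employed_income, cap=51_000):
    """Lesser of 20% of self-employed income or the dollar cap
    (figures as stated in the explanation above)."""
    return min(self_employed_income * 20 / 100, cap)

print(keogh_max_deduction(120_000))  # 24000.0
```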
Which TWO of the following persons would be permitted to purchase an equity IPO?
An employee of a FINRA member whose sister is a director of the issuer
A portfolio manager of a mutual fund purchasing for his personal account
Employees of the issuer if the issuer is a FINRA member
An attorney hired by the issuer to assist in the IPO
I and II
I and III
II and III
II and IV
B
Issuer-directed securities provide an exemption for certain individuals under the New Issue Rule. Under this provision, issuers may direct securities to the parent company of an issuer, the subsidiary of an issuer, and employees and directors of an issuer. The issuer-directed provision also permits immediate family members to participate in the offering, provided they are employees or directors of the issuer. Registered representatives are also allowed to purchase shares of an equity IPO if the issuer is that person's employing broker-dealer or is the parent or subsidiary of the broker-dealer.
An attorney hired to assist in the IPO has a restricted status because he is not employed by the broker-dealer. A portfolio manager of a fund may not purchase for his personal account. A purchase could be made on behalf of the fund. (9-7)
An XYZ Corporation convertible bond is selling in the market at $1,248.75. It is convertible at $30. XYZ common stock's market price is 37.50. The bond has been called at 103. Which of the following is the least attractive alternative for a holder of the bond?
Sell the bond
Convert to common and sell the common
Allow the bond to be called
Convert common stock to a bond
C
The holder could sell the bond and receive $1,248.75. If he converted, he would receive 33 1/3 shares ($1,000 par divided by the $30 conversion price) with a total value of $1,250.00 (33 1/3 times $37.50). The least attractive alternative is to allow the bond to be called and receive $1,030. (6-6)
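The three alternatives can be compared with a short sketch (the names are mine; exact arithmetic on 33 1/3 shares at $37.50 gives $1,250.00):

```python
def conversion_value(par, conversion_price, stock_price):
    """Value of the shares received if the bond is converted."""
    shares = par / conversion_price           # 1,000 / 30 = 33 1/3 shares
    return shares * stock_price

sell_bond = 1_248.75                          # current market price
convert_and_sell = round(conversion_value(1_000, 30, 37.50), 2)
take_the_call = 1_030.00                      # called at 103
print(min(sell_bond, convert_and_sell, take_the_call))  # 1030.0
```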
The Trust Indenture Act of 1939 regulates:
A purchase of $5,000,000 of Treasury bonds
A private placement of $3,000,000 of corporate notes
A $20,000,000 sale of corporate bonds sold interstate
A sale by a brokerage firm throughout the country of $25,000,000 of corporate debentures
I and II only
I and III only
III and IV only
I, II, III, and IV
C
The Trust Indenture Act of 1939 regulates the public issuance of corporate securities that are sold interstate. It does not cover U.S. government securities or private placements. A $20,000,000 sale of corporate bonds sold interstate and a sale by a brokerage firm throughout the country (also interstate) of $25,000,000 of corporate debentures would be covered under the Trust Indenture Act of 1939. (6-1)
A Treasury bond is quoted 105.04 - 105.24. The purchase price that a customer would expect to pay would be:
$1,051.25
$1,052.40
$1,054.00
$1,057.50
D
U.S. Treasury notes and bonds are quoted in 32nds of a point. When purchasing the bond, the customer would pay the offering price of 105.24. To convert 105.24 into a dollar price:
step 1: 105.24 is equal to 105 24/32
step 2: convert 24/32 into a decimal, which is .75
step 3: convert 105.75% into a dollar price
(105.75% x $1,000 = $1,057.50)
The customer would pay $1,057.50. (7-2)
A bond is convertible into stock at $50 per share. The market price of the stock is 65. The market price of the bond is 120. To profit from this arbitrage opportunity, an investor should:
Buy 5 bonds
Buy 100 shares of stock
Sell 5 bonds short
Sell 100 shares of stock short
I and III
I and IV
II and III
II and IV
B
Since the bond is convertible into 20 shares of stock ($1,000 par divided by 50) and the bond is priced at 120, the parity for the stock is $60 per share ($1,200 bond price divided by 20 shares). An arbitrage situation exists because the stock is selling at a 5-point premium to parity (65 market price - 60 parity price).
An investor would profit from this situation by purchasing bonds at 120 and shorting the stock at 65. Each bond may be converted into 20 shares of stock at a cost of $60 per share. These shares may then be used to cover the short sale, establishing a 5-point profit (65 short sale price - 60 cost). (6-6)
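The parity arithmetic can be sketched as (the helper name is mine):

```python
def parity_price(bond_price_pct, par, conversion_price):
    """Stock price at which the converted shares exactly
    equal the bond's market value."""
    shares = par / conversion_price               # 1,000 / 50 = 20 shares
    return (bond_price_pct * par / 100) / shares  # 1,200 / 20 = 60

parity = parity_price(120, 1_000, 50)
print(parity, 65 - parity)  # parity of 60.0 and the 5-point premium
```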
Keystone Chocolate Co. plans to sell shares of a new issue only in the state of Pennsylvania. In order to qualify for a registration exemption under Rule 147, what percentage of the corporation's assets must be located in Pennsylvania, and what percentage of its revenues must be derived from Pennsylvania sources, at the time of the offering?
70%
80%
90%
100%
B
Keystone is eligible to offer shares in Pennsylvania (PA) under the intrastate exemption (Rule 147) if 80% of its assets are located in PA, 80% of its revenues are derived from PA sources, and 80% of the proceeds from the sale are used in PA. In addition, to qualify for the exemption, 100% of the purchasers of the offering must be residents of PA. (9-19)
Which of the following orders would you place for a customer who wants to hold her auction rate security if the interest rate is set at 3.4% or higher:
Hold order
Limit order
Bid order
Sell order
C
A current holder of an auction rate security may indicate the desire to continue to hold the security only if the rate is set at or above a specified rate. If the clearing rate sets below the interest or dividend rate that the holder or prospective holder specifies in her bid, the holder will be required to sell the securities subject to her bid, and the prospective buyer will not acquire the securities. Auction dealers refer to bids by prospective holders as "buy" orders and bids by holders as "roll-at-rate" orders. (8-19)
A municipal dealer purchased $100,000 face value of 6.00% bonds at a 6.00 basis. If the dealer reoffered the bonds, which of the following could be considered reasonable?
101
108
5.80 basis
4.00 basis
I and II only
I and III only
III and IV only
I, III, and IV only
An individual with $10,000 to invest would not usually be able to purchase:
Money-market fund shares
Dealer-placed commercial paper
Municipal bonds
Treasury STRIPS
B
The minimum requirement for an investment in dealer-placed commercial paper is normally $100,000. The minimum requirement for investment in Treasury STRIPS is usually $1,000 and the minimum denomination for municipal bonds is $1,000. (7-21)
Which of the following option orders may be accepted by an order book official?
Discretionary
Limit
Spread
Not held
B
An order book official on the floor of an options exchange is only permitted to accept limit orders. (16-3)
Approval by a principal is required when sending a customer all of the following EXCEPT a(n):
Abstract from an official statement
Form letter
Research report
Red herring
A dealer whose inventory cost for a stock is 28 sells the stock to a customer. The markup should be based on:
The inventory cost of 28
The highest bid on the Nasdaq system
The lowest offer on the Nasdaq system
A price that is fair and reasonable
C
When selling stock to a customer, a markup should be based upon the lowest offer on the Nasdaq system, not the price the dealer paid to purchase the stock (dealer's inventory cost). (12-3)
When an investor sells an interest in a limited partnership, his or her cost basis for tax purposes is the:
Original investment
Adjusted basis
Accredited value
Original investment plus accretion
B
An investor's basis will be reduced by any claimed losses and any cash distributions. This reduced (adjusted) basis is the cost basis at the time of sale. (20-6)
The strike price of a T-bond option contract is expressed as a percentage of the:
Total premium
Face amount of the underlying bonds
Underlying market value
Discount yield
B
Both the strike price and premium for a T-bond option are expressed as a percentage of the face value of the underlying bonds. (15-47)
Which of the following persons control positions in secondary market municipal bonds for a broker-dealer?
Underwriter
Trader
Agent
Principal
B
A trader is responsible for positioning (carrying inventory) secondary market municipal bonds. An underwriter is involved in new issues. (12-1, 10-1)
Which of the following communications must be filed with FINRA ten days prior to use?
Options advertising
Mutual fund advertising
CMO advertising
Unit Investment Trust advertising
I and II only
I and III only
II, III, and IV only
I, II, III, and IV
B
Advertising for most products, including options and CMOs, must be filed with FINRA 10 days prior to use. However, investment company ads, including ads for mutual funds and unit investment trusts, must be filed within 10 days following initial use. (2-1, 7-19, 16-12)
A customer's margin account has a long market value of $30,000 and a debit balance of $12,000. FRB initial margin requirement is 50%.
What is the purchasing power in the account?
0
$3,000
$6,000
$18,000
C
The account has $18,000 equity which is $3,000 more than the FRB initial requirement of $15,000 ($30,000 market value x 50% requirement). This excess equity is journaled to SMA and creates buying (purchasing) power of $6,000 (2 x SMA). (13-9)
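The SMA and buying-power arithmetic can be sketched as (the helper name is mine; it assumes the 50% FRB initial requirement):

```python
def sma_and_buying_power(market_value, debit, initial_req=0.50):
    """Excess equity over the initial requirement journals to SMA;
    at a 50% requirement, buying power is 2x SMA."""
    equity = market_value - debit
    sma = max(equity - initial_req * market_value, 0)
    return sma, sma / initial_req

print(sma_and_buying_power(30_000, 12_000))  # (3000.0, 6000.0)
```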
Which of the following is not used to determine the winning bid in a competitive bond offering?
Reoffering Yields
Bond years
NIC
TIC
A
Reoffering yields are based on the price paid by purchasers and are not a part of the underwriter's bid. Bond years are calculated by multiplying the number of $1,000 par value bonds by the years to maturity. When the bond years of each maturity are multiplied by the respective coupon rate and then totaled, the result is the net interest cost for the issue. NIC and TIC are used to determine the cost of the bid to the issuer. (10-7)
Mrs. Ima Holder purchased 10 RFQ July 60 calls. RFQ declares a 50% stock dividend. After the dividend has been distributed, Mrs. Holder would now own:
10 contracts for 100 shares each
10 contracts for 150 shares each
15 contracts for 100 shares each
15 contracts for 150 shares each
B
When the underlying stock has a stock split or stock dividend, the option contract must be adjusted. For a stock dividend or an odd split, the number of contracts is kept constant while the number of shares per contract is increased. Therefore, Mrs. Holder would still have 10 contracts but the number of shares per contract would be increased by 50% to 150. (16-9)
A GNMA pass-through is quoted 98.10 to 98.18. This quote represents a spread per $1,000 face value of:
$0.08
$0.80
$2.50
$8.00
C
GNMA pass-through certificates (like T-notes and T-bonds) are quoted in 32nds. The spread of .08 represents 8/32 or 1/4 (.25) and has a value of $2.50 per $1,000. (7-11, 7-2)
The major disadvantage to a limited partner in a DPP is:
Lack of control
Lack of liquidity
Flow through of income and expense
Limited liability
B
An investor has limited control (management) in equity investments and no control (management) in bond or DPP investments. The major disadvantage of a DPP is the lack of liquidity meaning that the investor cannot easily sell his portion of ownership. (20-1)
A customer's margin account has a current market value of $10,000, debit balance of $8,000, and SMA of $1,000. The customer could meet a maintenance call with:
$100 cash
$500 SMA
$500 cash
$1,000 SMA
C
A long margin account must maintain equity equal to 25% of the market value. The account is $500 below the minimum ($2,500 required minus $2,000 equity). Using SMA would increase the debit balance and therefore reduce equity, so the call must be met with a $500 cash deposit.
An investor purchased a T-bond 96 call at a premium of 0.24. If the underlying security is $100,000 face value of T-bonds, what was the investor's total cost for the option contract?
$240
$750
$7,500
$24,000
B
The premium on a T-bond option is expressed in percentage of face value using points (whole percent) and 32nds of a point (1/32%). A premium of 0.24 represents 24/32 of a percent or 3/4 of a percent (0.75%). To determine the total cost, multiply the face value ($100,000) by 0.75%. The total cost is therefore $750. (15-47)
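The points-and-32nds premium arithmetic can be sketched as (the helper name is mine):

```python
def tbond_option_cost(premium_quote, face):
    """Premium quoted in points and 32nds of a point,
    e.g. '0.24' = 24/32 of 1% of face value."""
    points, thirty_seconds = premium_quote.split(".")
    pct = int(points) + int(thirty_seconds) / 32
    return face * pct / 100

print(tbond_option_cost("0.24", 100_000))  # 750.0
```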
ElasticSearch Interview Questions And Answers
What is the use of the attributes enabled, index, and store?
What is an Analyzer in ElasticSearch?
While indexing data in Elasticsearch, data is transformed internally by the Analyzer defined for the index.
An analyzer is built from three building blocks applied in sequence: character filters, a tokenizer, and token filters.
What is a Character Filter in an Elasticsearch Analyzer?
A character filter receives the original text as a stream of characters and can transform it by adding, removing, or changing characters in the stream.
An analyzer may have zero or more character filters, which are applied in order.
What are Token Filters in an Elasticsearch Analyzer?
A token filter receives the token stream from the tokenizer and may add, remove, or change tokens, for example lowercasing terms, removing stopwords, or injecting synonyms.
What is a Tokenizer in ElasticSearch?
A tokenizer receives the stream of characters and breaks it up into individual tokens (usually individual words), recording the position of each token.
What are the advantages of Elasticsearch?
- Elasticsearch is implemented on Java, which makes it compatible on almost every platform.
- Elasticsearch is Near Real Time (NRT), in other words after one second the added document is searchable in this engine.
- Elasticsearch cluster is distributed, which makes it easy to scale and integrate in any big organizations.
- Creating full backups of data are easy by using the concept of gateway, which is present in Elasticsearch.
- Elasticsearch REST uses JSON objects as responses, which makes it possible to invoke the Elasticsearch server with a large number of different programming languages.
- Elasticsearch supports almost every document type except those that do not support text rendering.
- Handling multi-tenancy is very easy in Elasticsearch when compared to Apache Solr.
What is the Elasticsearch REST API and what is it used for?
Elasticsearch exposes all of its features through a REST API over HTTP using JSON, which is used to index documents, run searches and aggregations, and manage the cluster.
What are the Disadvantages of Elasticsearch?
Elasticsearch handles request and response data only in JSON, while Apache Solr also supports CSV, XML, and JSON formats.
Elasticsearch can suffer from split-brain situations, although only in rare cases.
Does ElasticSearch have a schema?
Yes. Elasticsearch can infer a mapping (schema) dynamically from the documents it indexes, and a mapping can also be defined explicitly for each index.
What is a cluster in ElasticSearch?
A cluster is a collection of one or more nodes that together hold all of the data and provide federated indexing and search capabilities across all nodes.
What is a node in ElasticSearch?
A node is a single server that is part of the cluster. It stores data and participates in the cluster's indexing and search capabilities.
What is Ingest Node in Elasticsearch?
Ingest nodes can apply an ingest pipeline to a document in order to transform and enrich it before indexing. With a heavy ingest load, it makes sense to use dedicated ingest nodes: set node.ingest=true on them and mark node.master and node.data as false.
What is Data Node in Elasticsearch?
Data nodes hold the shards and replicas that contain the indexed documents. Data nodes perform data-related operations such as CRUD, search, and aggregations. Set node.data=true (the default) to make a node a data node.
Data node operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more data nodes if they are overloaded. The main benefit of having dedicated data nodes is the separation of the master and data roles.
What is Master Node and Master Eligible Node in Elasticsearch?
The master node controls cluster-wide operations such as creating or deleting an index, tracking which nodes are part of the cluster, and deciding which shards to allocate to which nodes. A stable master node is important for cluster health. A node is master-eligible when node.master=true (the default).
To avoid split brain, the setting discovery.zen.minimum_master_nodes (default 1) should be set to (master_eligible_nodes / 2) + 1.
What is Tribe Node and Coordinating Node in Elasticsearch?
A tribe node is a special type of node that connects to multiple clusters and performs search and other operations across all of the connected clusters. Tribe nodes are configured through tribe.* settings.
A coordinating node behaves like a smart load balancer: if a node neither holds master duties, nor holds data, nor pre-processes documents, it is left as a coordinating node that can only route requests, handle the search reduce phase, and distribute bulk indexing.
What is an index in ElasticSearch?
Index is like a ‘database’ in a relational database. It has a mapping which defines multiple types. An index is a logical namespace which maps to one or more primary shards and can have zero or more replica shards.
MySQL => Databases
ElasticSearch => Indices
What is inverted index in Elasticsearch?
Inverted Index is backbone of Elasticsearch which make full-text search fast. Inverted index consists of a list of all unique words that occurs in documents and for each word, maintain a list of documents number and positions in which it appears.
For Example: There are two documents and having content as:
1: FacingIssuesOnIT is for ELK.
2: If ELK check FacingIssuesOnIT.
To build the inverted index, each document is split into words (also called terms or tokens) to create a sorted index.
Now, when we run a full-text search for a term, documents are ranked based on the existence and occurrence counts of the matching terms.
Books usually have an inverted index on their last pages: based on a word, we can find the pages on which the word appears.
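A toy version of the index described above can be sketched in Python (the function name is mine):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the (doc_id, position) pairs where it occurs."""
    index = defaultdict(list)
    for doc_id, text in docs.items():
        for position, term in enumerate(text.lower().split()):
            index[term].append((doc_id, position))
    return dict(index)

docs = {
    1: "FacingIssuesOnIT is for ELK",
    2: "If ELK check FacingIssuesOnIT",
}
print(build_inverted_index(docs)["elk"])  # [(1, 3), (2, 1)]
```

A real analyzer would also strip punctuation and apply token filters, but the postings-list shape is the same.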
What is a replica in ElasticSearch?
Each primary shard in Elasticsearch can have one or more copies, called replicas. They serve the purpose of high availability and fault tolerance.
What is a document in ElasticSearch?
Document is similar to a row in relational databases. The difference is that each document in an index can have a different structure (fields), but should have same data type for common fields.
MySQL => Databases => Tables => Columns/Rows
ElasticSearch => Indices => Types => Documents with Properties
What are the basic operations you can perform on a document?
The following operations can be performed on documents:
INDEXING A DOCUMENT USING ELASTICSEARCH.
FETCHING DOCUMENTS USING ELASTICSEARCH.
UPDATING DOCUMENTS USING ELASTICSEARCH.
DELETING DOCUMENTS USING ELASTICSEARCH.
What is a type in ElasticSearch?
A type is a logical category/partition of an index whose semantics are completely up to the user.
What are common areas of use for Elasticsearch?
It's useful in applications that need to run analysis and statistics and to find anomalies in data based on patterns.
It's useful where alerts need to be sent when a particular condition is matched, such as in stock markets or for exceptions in logs.
It's useful for log analysis and troubleshooting, because of full-text search across billions of records in milliseconds.
It integrates with tools like Filebeat, Logstash and Kibana to store high volumes of data for analysis and to visualize it in the form of charts and dashboards.
|
https://tekslate.com/elasticsearch-interview-questions-answers/
|
CC-MAIN-2017-51
|
refinedweb
| 1,037
| 58.28
|
Setup
import numpy as np
import tensorflow as tf
from tensorflow import keras
Introduction
Freezing layers: understanding the trainable attribute
Do not confuse the layer.trainable attribute with the argument training in layer.__call__() (which controls whether the layer should run its forward pass in inference mode or training mode). For more information, see the Keras FAQ.
Recursive setting of the trainable attribute
Fine-tuning
Transfer learning & fine-tuning with a custom training loop
Using random data augmentation
Build a model
Now let's build a model that follows the blueprint we've explained earlier.
Note that:
- We add a Rescaling layer to scale input values (initially in the [0, 255] range) to the [-1, 1] range.
- We add a Dropout layer before the classification layer, for regularization.
- We make sure to pass scaled inputs to the base model:

  # scale inputs from (0, 255) to a range of (-1., +1.); the rescaling layer
  # outputs: `(inputs * scale) + offset`
  scale_layer = keras.layers.Rescaling(scale=1 / 127.5, offset=-1)
  x = scale_layer(x)
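The scaling arithmetic can be sanity-checked without TensorFlow; this is a plain-Python sketch of the Rescaling formula, outputs = inputs * scale + offset:

```python
# With scale = 1/127.5 and offset = -1, pixel values in [0, 255]
# land in [-1, 1], which is what the pretrained base model expects.
scale, offset = 1 / 127.5, -1

def rescale(pixel):
    # Same formula the Rescaling layer applies element-wise.
    return pixel * scale + offset

print(rescale(0), rescale(127.5), rescale(255))  # approximately -1, 0, 1
```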
Downloading data from
83689472/83683744 [==============================] - 2s 0us/step
83697664/83683744 [==============================] - 2s 0us/step
Trainable params: 2,049
Non-trainable params: 20,861,480
_________________________________________________________________
Epoch 1/20
151/291 [==============>...............] - ETA: 3s - loss: 0.1979 - binary_accuracy: 0.9096
Corrupt JPEG data: 65 extraneous bytes before marker 0xd9
268/291 [==========================>...] - ETA: 1s - loss: 0.1663 - binary_accuracy: 0.9269
Corrupt JPEG data: 239 extraneous bytes before marker 0xd9
282/291 [============================>.] - ETA: 0s - loss: 0.1628 - binary_accuracy: 0.9284
Corrupt JPEG data: 1153 extraneous bytes before marker 0xd9
Corrupt JPEG data: 228 extraneous bytes before marker 0xd9
291/291 [==============================] - ETA: 0s - loss: 0.1620 - binary_accuracy: 0.9286
Corrupt JPEG data: 2226 extraneous bytes before marker 0xd9
291/291 [==============================] - 29s 63ms/step - loss: 0.1620 - binary_accuracy: 0.9286 - val_loss: 0.0814 - val_binary_accuracy: 0.9686
Epoch 2/20
291/291 [==============================] - 8s 29ms/step - loss: 0.1178 - binary_accuracy: 0.9511 - val_loss: 0.0785 - val_binary_accuracy: 0.9695
Epoch 3/20
291/291 [==============================] - 9s 30ms/step - loss: 0.1121 - binary_accuracy: 0.9536 - val_loss: 0.0748 - val_binary_accuracy: 0.9712
Epoch 4/20
291/291 [==============================] - 9s 29ms/step - loss: 0.1082 - binary_accuracy: 0.9554 - val_loss: 0.0754 - val_binary_accuracy: 0.9703
Epoch 5/20
291/291 [==============================] - 8s 29ms/step - loss: 0.1034 - binary_accuracy: 0.9570 - val_loss: 0.0721 - val_binary_accuracy: 0.9725
Epoch 6/20
291/291 [==============================] - 8s 29ms/step - loss: 0.0975 - binary_accuracy: 0.9602 - val_loss: 0.0748 - val_binary_accuracy: 0.9699
Epoch 7/20
291/291 [==============================] - 9s 29ms/step - loss: 0.0989 - binary_accuracy: 0.9595 - val_loss: 0.0732 - val_binary_accuracy: 0.9716
Epoch 8/20
291/291 [==============================] - 8s 29ms/step - loss: 0.1027 - binary_accuracy: 0.9566 - val_loss: 0.0787 - val_binary_accuracy: 0.9678
Epoch 9/20
291/291 [==============================] - 8s 29ms/step - loss: 0.0959 - binary_accuracy: 0.9614 - val_loss: 0.0734 - val_binary_accuracy: 0.9729
Epoch 10/20
291/291 [==============================] - 8s 29ms/step - loss: 0.0995 - binary_accuracy: 0.9588 - val_loss: 0.0717 - val_binary_accuracy: 0.9721
Epoch 11/20
291/291 [==============================] - 8s 29ms/step - loss: 0.0957 - binary_accuracy: 0.9612 - val_loss: 0.0731 - val_binary_accuracy: 0.9725
Epoch 12/20
291/291 [==============================] - 8s 29ms/step - loss: 0.0936 - binary_accuracy: 0.9622 - val_loss: 0.0751 - val_binary_accuracy: 0.9716
Epoch 13/20
291/291 [==============================] - 8s 29ms/step - loss: 0.0965 - binary_accuracy: 0.9610 - val_loss: 0.0821 - val_binary_accuracy: 0.9695
Epoch 14/20
291/291 [==============================] - 8s 29ms/step - loss: 0.0939 - binary_accuracy: 0.9618 - val_loss: 0.0742 - val_binary_accuracy: 0.9712
Epoch 15/20
291/291 [==============================] - 8s 29ms/step - loss: 0.0974 - binary_accuracy: 0.9585 - val_loss: 0.0771 - val_binary_accuracy: 0.9712
Epoch 16/20
291/291 [==============================] - 8s 29ms/step - loss: 0.0947 - binary_accuracy: 0.9621 - val_loss: 0.0823 - val_binary_accuracy: 0.9699
Epoch 17/20
291/291 [==============================] - 8s 29ms/step - loss: 0.0947 - binary_accuracy: 0.9625 - val_loss: 0.0718 - val_binary_accuracy: 0.9708
Epoch 18/20
291/291 [==============================] - 8s 29ms/step - loss: 0.0928 - binary_accuracy: 0.9616 - val_loss: 0.0738 - val_binary_accuracy: 0.9716
Epoch 19/20
291/291 [==============================] - 8s 29ms/step - loss: 0.0922 - binary_accuracy: 0.9644 - val_loss: 0.0743 - val_binary_accuracy: 0.9716
Epoch 20/20
291/291 [==============================] - 8s 29ms/step - loss: 0.0885 - binary_accuracy: 0.9635 - val_loss: 0.0745 - val_binary_accuracy: 0.9695
<keras.callbacks.History at 0x7f849a3b2950>
Do a round of fine-tuning of the entire model:

Trainable params: 20,809,001
Non-trainable params: 54,528
_________________________________________________________________
Epoch 1/10
291/291 [==============================] - 43s 131ms/step - loss: 0.0802 - binary_accuracy: 0.9692 - val_loss: 0.0580 - val_binary_accuracy: 0.9764
Epoch 2/10
291/291 [==============================] - 37s 128ms/step - loss: 0.0542 - binary_accuracy: 0.9792 - val_loss: 0.0529 - val_binary_accuracy: 0.9764
Epoch 3/10
291/291 [==============================] - 37s 128ms/step - loss: 0.0400 - binary_accuracy: 0.9832 - val_loss: 0.0510 - val_binary_accuracy: 0.9798
Epoch 4/10
291/291 [==============================] - 37s 128ms/step - loss: 0.0313 - binary_accuracy: 0.9879 - val_loss: 0.0505 - val_binary_accuracy: 0.9819
Epoch 5/10
291/291 [==============================] - 37s 128ms/step - loss: 0.0272 - binary_accuracy: 0.9904 - val_loss: 0.0485 - val_binary_accuracy: 0.9807
Epoch 6/10
291/291 [==============================] - 37s 128ms/step - loss: 0.0284 - binary_accuracy: 0.9901 - val_loss: 0.0497 - val_binary_accuracy: 0.9824
Epoch 7/10
291/291 [==============================] - 37s 127ms/step - loss: 0.0198 - binary_accuracy: 0.9937 - val_loss: 0.0530 - val_binary_accuracy: 0.9802
Epoch 8/10
291/291 [==============================] - 37s 127ms/step - loss: 0.0173 - binary_accuracy: 0.9930 - val_loss: 0.0572 - val_binary_accuracy: 0.9819
Epoch 9/10
291/291 [==============================] - 37s 127ms/step - loss: 0.0113 - binary_accuracy: 0.9958 - val_loss: 0.0555 - val_binary_accuracy: 0.9837
Epoch 10/10
291/291 [==============================] - 37s 127ms/step - loss: 0.0091 - binary_accuracy: 0.9966 - val_loss: 0.0596 - val_binary_accuracy: 0.9832
<keras.callbacks.History at 0x7f83982d4cd0>
After 10 epochs, fine-tuning gains us a nice improvement here.
|
https://www.tensorflow.org/guide/keras/transfer_learning/?hl=nb-NO
|
CC-MAIN-2021-39
|
refinedweb
| 933
| 58.45
|
Dear valuable contributors,
Our company has SAP BusinessObjects BI Platform 4.1 Support Pack 7 Version: 14.1.7.1853.
I have downloaded .Net SDK for platform 4.1 and installed on a dev machine with VS 2010.
I have downloaded the project ViewReport_CS from wssdk_net_samples_12.zip and modified it. The aim is to connect to our CMS and retrieve a list of documents through the web service.
However, when I compile the project, I get an error about the missing namespace BusinessObjects.DSWS.
I came across this thread that mentions that this namespace is obtained by installing universe designer, which I am going to do.
Can anyone please advise.
Thank you.
|
https://answers.sap.com/questions/155357/where-is-namespace-businessobjectsdsws.html
|
CC-MAIN-2019-13
|
refinedweb
| 111
| 70.29
|
How to Define Templates and Send Email with Joystick
May 13th, 2022
What You Will Learn in This Tutorial
How to set up an SMTP service, prepare an email template using Joystick components, and send an email using the email.send() function in @joystick.js/node.

Getting Started

First, create a new Joystick app with joystick create app, then move into its folder and start it up:
Terminal
cd app && joystick start
After this, your app should be running and we're ready to get started.
Configuring SMTP
Before we focus on code, in order to actually send our email, we'll need access to an SMTP provider. There are quite a few options out there. If you have a favorite, feel free to use that, but for this tutorial we're going to recommend Postmark, which offers, in my opinion, the best SMTP product on the market.
If you don't already have an account, head over to their sign up page and create one. Once logged in, Postmark will automatically create a "server" (a server in Postmark is the project related to the app you're sending email for) called "My First Server."
Once logged in, you should see something like this:
From here, you will want to click the "API Tokens" tab just to the right of the highlighted "Message Streams" tab.
If you hover the populated input next to "Server API Tokens," you will be given an option to click and copy the value in the box. Go ahead and do this and then open up the
/settings.development.json file at the root of the Joystick app we created above.
/settings.development.json
{
  "config": {
    "databases": [
      {
        "provider": "mongodb",
        "users": true,
        "options": {}
      }
    ],
    "i18n": {
      "defaultLanguage": "en-US"
    },
    "middleware": {},
    "email": {
      "from": "<Default Email To Send From>",
      "smtp": {
        "host": "smtp.postmarkapp.com",
        "port": 587,
        "username": "<Paste Your Server API Token Here>",
        "password": "<Paste Your Server API Token Here>"
      }
    }
  },
  "global": {},
  "public": {},
  "private": {}
}
In this file, under the
config object, locate the
username and
password fields and paste in the value you just copied (as we'll see, when sending email, this is how Postmark authenticates your account and knows to send the email from your Postmark account).
Next, for the
host field we want to enter
smtp.postmarkapp.com and for the
port we want to enter the number
587 (this is the secure email port). Finally, for the
from field, you want to enter the default email address you're going to be sending from (e.g.,
support@myapp.com). For this tutorial, it's wise to use the same email address that you created your Postmark account with as they will enable only that address for sending email by default. Email sent
from any other address will be rejected until Postmark approves your account (they have an approval process that's fairly quick and helps to eliminate spammers from harming the sender reputation for legitimate accounts).
Once this is set, back on the Postmark site, we want to head to the Sender Signatures page to make sure that the email you just entered for
from above is set up.
You should check the email address you signed up with as Postmark should have sent you a "Verify Email Address" email. If you click the link in this email, it will enable sending from this address.
If it's on the list, just check that email address and click the verify link. If the address you entered is not on the list, head to the "Add a New Signature" page and add it so Postmark doesn't block your messages.
Once this is done—and your address is verified—sending should work as expected. If it's not working, Postmark will tell you in the "Activity" tab of your server.
That's all we need to do for config. Now, let's jump into wiring up our email template.
Creating an email template
Just like pages and other components in Joystick, email templates are authored using Joystick components. This means that you can use the same familiar API you use to build your application's UI to write your emails (at the end of the day, you're still just writing HTML and CSS for your emails so there's no learning curve).
In your project, now, we want to create a special folder /email at the root of the app and, inside of it, a file for our template:

/email/invoice.js
import ui from '@joystick.js/ui';

const Invoice = ui.component({
  render: () => {
    return `
      <div></div>
    `;
  },
});

export default Invoice;
For our example, we'll be building an email template that represents an invoice for a customer, taking in an address and some line items as props. Because the content isn't terribly important here, let's go ahead and populate our skeleton template above with our content and walk through what it's doing:
import ui from '@joystick.js/ui';

const Invoice = ui.component({
  render: ({ props, each }) => {
    return `
      <div class="invoice">
        <h4>Invoice</h4>
        <address>
          ${props.name}<br />
          ${props.address}<br />
          ${props.suite}<br />
          ${props.city}, ${props.state} ${props.zipCode}
        </address>
        <table>
          <thead>
            <tr>
              <th class="text-left">Item</th>
              <th>Price</th>
              <th>Quantity</th>
              <th>Total</th>
            </tr>
          </thead>
          <tbody>
            ${each(props.items, (item) => {
              return `
                <tr>
                  <td>${item.description}</td>
                  <td class="text-center">$${item.price}</td>
                  <td class="text-center">x${item.quantity}</td>
                  <td class="text-center">$${item.price * item.quantity}</td>
                </tr>
              `;
            })}
          </tbody>
          <tfoot>
            <tr>
              <td colspan="2"></td>
              <td colspan="1" class="text-center"><strong>Total</strong></td>
              <td colspan="1" class="text-center">
                $${props.items.reduce((total, item) => {
                  total += (item.price * item.quantity);
                  return total;
                }, 0)}
              </td>
            </tr>
          </tfoot>
        </table>
      </div>
    `;
  },
});

export default Invoice;
Updating our
render() function to include our full HTML here, we've got three core components:
- An
<h4></h4>tag describing our template as an "Invoice."
- An
<address></address>tag rendering the address of the person we're sending the invoice to.
- A
<table></table>to render out line items.
For our
render() function signature, we've added a single argument that's being destructured (in JavaScript, this means to "pluck off" properties from an object, assigning those properties to variables in the current scope of the same name) to give us two variables:
props and
each.
The first,
props, will contain the props or properties that we pass to our template when we send our email. The second,
each is a function (known as a render function in Joystick) which helps us to loop over an array and return some HTML for each item in the array. Here, for each of our line items in
props.items we want to output a table row outputting the content of that item (and doing some multiplication on its
price and
quantity fields).
The only other thing to call attention to here is in the
<tfoot></tfoot> part of our table. Here, we're adding up all of the line items using a plain JavaScript
Array.reduce() function to "reduce down" the array of
items into a single value, in this case, an integer representing the total of all items in the
props.items array.
Confused by
.reduce() and want to go in-depth? Give this tutorial a read.
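As a standalone sketch of that reduce step (runnable with Node; the line items mirror the props used later in this tutorial):

```javascript
// Sum price * quantity over the invoice's line items, the same
// pattern used inside the template's <tfoot> above.
const items = [
  { description: 'Basketball', price: 10.00, quantity: 2 },
  { description: 'Football', price: 7.00, quantity: 5 },
  { description: 'Baseball', price: 4.95, quantity: 20 },
];

const total = items.reduce((sum, item) => sum + (item.price * item.quantity), 0);

console.log(total.toFixed(2)); // "154.00" (20 + 35 + 99)
```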
That's it for our HTML. Now, real quick before we move on to sending, let's add in some CSS to pretty things up a bit:
import ui from '@joystick.js/ui';

const Invoice = ui.component({
  css: `
    .invoice {
      padding: 20px;
    }

    h4 {
      margin: 0;
      font-size: 20px;
    }

    address {
      margin: 20px 0;
    }

    .text-left {
      text-align: left;
    }

    .text-center {
      text-align: center;
    }

    table {
      width: 100%;
      border: 1px solid #eee;
    }

    table tr th,
    table tr td {
      border-bottom: 1px solid #eee;
      padding: 10px;
    }

    table tfoot tr td {
      border-bottom: none;
    }
  `,
  render: ({ props, each }) => {
    return `
      <div class="invoice">
        ...
      </div>
    `;
  },
});

export default Invoice;
Not much happening here: just cleaning up spacing and adding some borders to our table so it looks more presentable and easy to read in our email.
What's neat about this is that when we send our email, Joystick will automatically take the CSS we've just added and inline it in our HTML (this means adding
style attributes to the appropriate elements in our HTML) to make it more friendly for HTML email clients.
With that, next, let's move on to testing and sending our email.
Sending an email
Before we wire up our send, real quick, let's take a look at how we can test and preview our HTML email. Because our email is just a Joystick component, just like any other page or component in our app, we can render it using the
res.render() function Joystick gives us in our router.
/index.server.js
import node from "@joystick.js/node";
import api from "./api";

node.app({
  api,
  routes: {
    "/": (req, res) => {
      res.render("ui/pages/index/index.js", {
        layout: "ui/layouts/app/index.js",
      });
    },
    "/email/invoice": (req, res) => {
      res.render(`email/invoice.js`, {
        props: {
          name: 'Bert',
          address: '1234 Sesame St.',
          suite: '#123',
          city: 'Sesame',
          state: 'ST',
          zipCode: '12345',
          items: [
            { description: 'Basketball', price: 10.00, quantity: 2 },
            { description: 'Football', price: 7.00, quantity: 5 },
            { description: 'Baseball', price: 4.95, quantity: 20 }
          ],
        },
      });
    },
    "*": (req, res) => {
      res.render("ui/pages/error/index.js", {
        layout: "ui/layouts/app/index.js",
        props: {
          statusCode: 404,
        },
      });
    },
  },
});
In our
/index.server.js file created for us when we ran
joystick create app earlier, here, we're adding a route /email/invoice and, inside of it, calling
res.render('email/invoice.js'). This tells Joystick that we want to render the component at the specified path. Additionally, because we know our component will expect some props, via the options object passed as the second argument to
res.render() we're specifying a
props value which is passed an object of
props we want to hand down to our component.
Here, we're passing all of the expected values for our template, specifically, the address of the recipient and the items they ordered. Now, if we open up the /email/invoice route in a browser, we should see our template rendered to the screen:
While this won't give us a perfect representation of how our email will look in an email client (email clients are notoriously difficult and inconsistent for rendering/styling), it's a great way to debug our template without having to send a bunch of emails.
Now that we can verify our template is working, next, let's actually send it off. To do it, we're going to use the email.send() function exported by
@joystick.js/node:
/index.server.js
import node, { email } from "@joystick.js/node";
import api from "./api";

node.app({
  api,
  routes: {
    "/": (req, res) => {
      res.render("ui/pages/index/index.js", {
        layout: "ui/layouts/app/index.js",
      });
    },
    "/email/send": (req, res) => {
      email.send({
        to: 'ryan.glover@cheatcode.co',
        from: 'business@cheatcode.co',
        subject: 'Invoice',
        template: 'invoice',
        props: {
          name: 'Bert',
          address: '1234 Sesame St.',
          suite: '#123',
          city: 'Sesame',
          state: 'ST',
          zipCode: '12345',
          items: [
            { description: 'Basketball', price: 10.00, quantity: 2 },
            { description: 'Football', price: 7.00, quantity: 5 },
            { description: 'Baseball', price: 4.95, quantity: 20 }
          ],
        },
      });

      res.send('Sent');
    },
    "/email/invoice": (req, res) => { ... },
    "*": (req, res) => {
      res.render("ui/pages/error/index.js", {
        layout: "ui/layouts/app/index.js",
        props: {
          statusCode: 404,
        },
      });
    },
  },
});
Up top, we've imported the named export email from @joystick.js/node, and down in our routes, we've added an additional route
/email/send (this just makes it easy to send—in reality you'd want to call
email.send() in response to real user behavior in something like a setter endpoint) and inside, we're calling to
email.send(). This function will send our email using the SMTP connection we set up earlier (via Postmark if you're following along or whatever provider you specified).
Here, we pass a few different values:
- to, which is the email address we want to send our test email to.
- from, which is the email we want to send from (if you omit this, Joystick will use the from you specified in your config.email.from field in /settings.development.json).
- subject, which is the subject line the recipient will see in their inbox.
- template, which is the name of the file under the /email folder that we want to render as the email's body.
- props, which are the props we want to pass to our template before rendering/sending.
That's it! To make sure our route responds in a browser when we call it, we call to
res.send() passing a string "Sent" to notify us that the code has been called properly.
Assuming that our SMTP configuration is correct, if we visit the /email/send route in our browser, after a few seconds we should receive our email at the specified recipient.
Wrapping Up
In this tutorial, we learned how to create an email template using Joystick components. We learned how to wire up the component itself, accepting props, and how to style the template using CSS. Next, we learned how to test our email template in the browser to make sure it looked right and, finally, how to send it off using the email.send() function in @joystick.js/node.
|
https://cheatcode.co/tutorials/how-to-define-templates-and-send-email-with-joystick
|
CC-MAIN-2022-27
|
refinedweb
| 2,152
| 64.1
|
grantpt - grant access to the slave pseudo-terminal device
#include <stdlib.h>

int grantpt(int fildes);
The grantpt() function changes the mode and ownership of the slave pseudo-terminal device associated with its master pseudo-terminal counterpart. The fildes argument is a file descriptor that refers to a master pseudo-terminal device. The user ID of the slave is set to the real UID of the calling process and the group ID is set to an unspecified group ID. The permission mode of the slave pseudo-terminal is set to readable and writable by the owner, and writable by the group.
The behaviour of the grantpt() function is unspecified if the application has installed a signal handler to catch SIGCHLD signals.
Upon successful completion, grantpt() returns 0. Otherwise, it returns -1 and sets errno to indicate the error.
None.
None.
None.
open(), ptsname(), unlockpt(), <stdlib.h>.
|
http://pubs.opengroup.org/onlinepubs/7990989775/xsh/grantpt.html
|
CC-MAIN-2014-15
|
refinedweb
| 142
| 62.88
|
I'm being taught to use fflush( stdin ) to clean the buffer, use gets to read strings, etc.
But it seems these are not the proper methods to read input from the keyboard.
I've made these functions in order to read input, based on these useful recommendations.
So I would like to know if there is any way to improve them, if you see any kind of problem with them, or if something already in the C library makes a custom function unnecessary.
Mostly because I will be using them in all my future C programs.
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int getCharacter( char * );
int getNumber ( int* );
void getString( char* string );
int main( void )
{
int number;
char character;
char string[ BUFSIZ ];
printf( "Character: " );
getCharacter( &character );
printf( "Number: " );
getNumber( &number );
printf( "String: " );
getString( string );
printf( "Character -> %c\n", character );
printf( "Number -> %d\n", number );
printf( "String -> %s\n", string );
system( "pause" );
}
int getCharacter( char* c )
{
    char buffer[ BUFSIZ ];
    if ( fgets( buffer, sizeof( buffer ), stdin ) == NULL )
        return 0;  /* EOF or read error */
    return sscanf( buffer, "%c", c );
}

int getNumber( int* num )
{
    char buffer[ BUFSIZ ];
    if ( fgets( buffer, sizeof( buffer ), stdin ) == NULL )
        return 0;  /* EOF or read error */
    return sscanf( buffer, "%d", num );
}

void getString( char* string )
{
    char buffer[ BUFSIZ ];
    char *p;
    if ( fgets( buffer, sizeof( buffer ), stdin ) == NULL )
    {
        string[ 0 ] = '\0';  /* leave a valid empty string on failure */
        return;
    }
    /* strip the trailing newline that fgets keeps */
    if ( ( p = strchr( buffer, '\n' ) ) != NULL )
        *p = '\0';
    strcpy( string, buffer );
}
|
http://cboard.cprogramming.com/c-programming/126951-best-way-read-characters-strings-numbers-printable-thread.html
|
CC-MAIN-2015-40
|
refinedweb
| 222
| 64.75
|
#include <stdlib.h>

unsigned long int strtoul(const char *str, char **endptr, int base);
strtoul converts a string to an unsigned long integer.
str: the string to be converted
endptr: address of a pointer that is set to the first character not consumed by the conversion
base: base between 2 and 36
If unprefixed numbers are found, they are interpreted in base 10.
If base 0 is specified, the string's prefix determines the base used
(leading "0x" for hexadecimal, leading "0" for octal, otherwise decimal).
/*
 * strtoul example code
 */
#include <stdio.h>
#include <stdlib.h>

int main ( void )
{
    char string[] = "20405";
    char *endptr;
    unsigned long int number;

    number = strtoul(string, &endptr, 10);

    printf("string is %s\n", string);
    printf("unsigned long int is %lu\n", number);
    return 0;
}
user@host:~$ ./strtoul
string is 20405
unsigned long int is 20405
|
https://code-reference.com/c/stdlib.h/strtoul?s%5B%5D=string
|
CC-MAIN-2019-47
|
refinedweb
| 158
| 59.03
|
Dropdowns on comboboxes not updating properly - SOGC, Jun 29, 2010 5:07 PM
I have been experiencing this problem for some time now - the comboboxes in my Flex 4 application aren't changing to match the comboboxes' dataproviders. In other words, when a dataprovider changes for a combobox, the combobox's dropdown may still display data from the last dataprovider. This problem is intermittent and inconsistent. It seems to also be a problem for itemrenderer comboboxes inside datagrids, when the datagrids are sorted, or when the datagrids' dataproviders are changed.
I've tried doing various invalidate methods / validateNow() on the comboboxes, and it doesn't update the dropdowns.
Any help will be greatly appreciated
1. Re: Dropdowns on comboboxes not updating properly - Flex harUI
Jun 29, 2010 5:21 PM (in response to SOGC)
This is a known issue in the 3.5 SDK. I thought it was fixed for 4.0 before
we shipped.
2. Re: Dropdowns on comboboxes not updating properly - SOGC, Jun 29, 2010 5:30 PM (in response to SOGC)
Ah - I am using the 3.5 SDK.
I'm afraid to switch to 10.0, as there will be people out there with Flash Player 9.0 who will be forced to update their player before viewing the site.
Is my concern valid, or do most people have version 10?
3. Re: Dropdowns on comboboxes not updating properly - Flex harUI
Jun 29, 2010 6:09 PM (in response to SOGC)
10.0 has significant penetration. But there will be a 3.6 in a month or two
if you have time to play safe.
4. Re: Dropdowns on comboboxes not updating properly - arunbiji, Jul 2, 2010 1:55 AM (in response to SOGC)
The issue can be fixed by updating the dropdown also along with the combobox. PFB a sample code snippet,
cmbSample.dataProvider = acData;
cmbSample.dropdown.dataProvider = acData;
where,
cmbSample --> is the combo box
acData --> is the arraycollection object with data
Here,
cmbSample.dataProvider = acData --> updates the combobox with new data
and
cmbSample.dropdown.dataProvider = acData --> updates the drop down of the combo box with new data
Hope this will solve your issue.
5. Re: Dropdowns on comboboxes not updating properly - SOGC, Jul 2, 2010 10:14 AM (in response to arunbiji)
Thanks, arunbiji - I just updated the compiler to 4.0, and many problematic items that I've noticed in the past have been resolved
6. Re: Dropdowns on comboboxes not updating properly - SpaceCase, Jul 6, 2010 1:27 PM (in response to arunbiji)
Thank you for posting this solution arunbiji, however, this workaround is at best problematic.
It seems that when setting the dataProvider for both the ComboBox and the dropdown, the dropdown's width is not set correctly.
I've tried every combination of invalidateDisplayList(), invalidateSize(), invalidateProperties(), and invalidateList() on both the ComboBox and the dropdown to no avail.
I'm currently working around this by setting the dropdown's percentWidth to 100, but this is mostly disappointing as the dropdown will be scaled to sizes smaller than the corresponding ComboBox to match the text width. I've also tried using comboBox.dropdown.width = comboBox.width, but this again provides unexpected results, sometimes setting the width of the dropdown to something greater than that of the ComboBox!
I would love to move to SDK 4, unfortunately, flash player 10 saturation among my organization's clientele is not where it needs to be, and due to the size of their user bases, updating is a prohibitive process. Also, the move could potentially require a significant amount of rebuild, which may not be in the budget for my department.
Has anyone had success with setting the dataProvider for both the comboBox and the dropDown AND having the dropDown scale correctly? If so, can you please describe the workaround you've used?
Question to the adobe gurus: will updating from SDK 3.5 to 4.0 resolve the issue without moving to spark components (ie - can I release a version using SDK 4's halo components for those clients who do upgrade to FP 10?), or will the app have to be updated to use the spark component set in order to function as desired? Would updating to one of the nightly 3.x builds help us resolve this issue?
Many thanks.
7. Re: Dropdowns on comboboxes not updating properly - KevinFauth, Jul 9, 2010 11:52 AM (in response to SpaceCase)
I found this problematic as well, but if you bind your ComboBox's dataProvider to an ArrayCollection, and then update that ArrayCollection with your new values, invalidateSize of the ComboBox, then everything works fine. The problem comes with setting a new dataProvider on the controls.
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:
<mx:Script>
<![CDATA[
import mx.collections.ArrayCollection;
[Bindable]
private var myAC:ArrayCollection = new ArrayCollection( [ 1, 2, 3, 4, 5 ] );
private var newAC:ArrayCollection = new ArrayCollection( [ "item 11", "item 12", "item 13", "item 14", "item 15" ] );
private function changeDataProvider():void
{
myAC.removeAll();
for each ( var item:* in newAC )
{
myAC.addItem( item );
}
myAC.refresh(); /* don't necessarily need to do this here, but it's good practice when updating an ArrayCollection */
myComboBox.invalidateSize();
}
// this will duplicate the clipping/resizing issue brought up.
// to see how this works, call this instead of "changeDataProvider" in the click handler of the button
private function changeDataProviderBad():void
{
myComboBox.dataProvider = newAC;
myComboBox.dropdown.dataProvider = newAC;
}
]]>
</mx:Script>
<mx:Button
<mx:ComboBox
</mx:Application>
Hope that helps!
- Kevin
8. Re: Dropdowns on comboboxes not updating properly - sss999sss, Jun 1, 2011 4:38 AM (in response to arunbiji)
Thanks arunbiji, its working. Great............
|
https://forums.adobe.com/message/2952677
|
CC-MAIN-2017-09
|
refinedweb
| 926
| 55.64
|
README.md
The FX C Library
This directory contains the sources of the FxLibc Library. See
CMakeLists.txt
to see what release version you have.
The FxLibc is the standard system C library implementation for all Casio fx calculators.
Dependencies
FxLibc requires a GCC cross-compiler toolchain in the PATH to build for any calculator.
You cannot build with your system compiler! The tutorial on Planète Casio
builds an
sh-elf toolchain that supports all models using multilib.
For Vhex and gint targets, the headers of the kernel are also required.
Building and installing FxLibc
FxLibc supports several targets:
- Vhex on SH targets (vhex-sh)
- CASIOWIN for fx-9860G-like calculators (casiowin-fx)
- CASIOWIN for fx-CG-series calculators (casiowin-cg)
- gint for all targets (gint)
Each target supports different features depending on what the kernel/OS provides.
Configuration and support
Configure with CMake; specify the target with -DFXLIBC_TARGET. For SH platforms, set the toolchain to cmake/toolchain-sh.cmake.
The FxLibc supports shared libraries when building with Vhex (TODO); set -DSHARED=1 to enable this behavior.
You can either install FxLibc in the compiler's include folder, or install it in another location of your choice. In the second case, you will need a -I option when using the library.
To install into the compiler's folder, set PREFIX like this:
% PREFIX=$(sh-elf-gcc -print-file-name=.)
To use another location, set PREFIX manually (recommended):
% PREFIX="$HOME/.sh-prefix/"
Example for a static Vhex build:
% cmake -B build-vhex-sh -DFXLIBC_TARGET=vhex-sh -DCMAKE_TOOLCHAIN_FILE=cmake/toolchain-sh.cmake -DCMAKE_INSTALL_PREFIX="$PREFIX"
Building
Build in the directory specified with cmake -B.
% make -C build
To install, run the install target.
% make -C build install
Contributing
Bug reports, feature suggestions and especially code contributions are most welcome.
If you are interested in doing a port, or want to contribute to this project, please try to respect these constraints:
- Document your code.
- One function per file (named like the function).
- Use the same formatting conventions as the rest of the code.
- Only use hardware-related code (DMA, SPU, etc.) in target-specific files when the target explicitly supports it.
Using FxLibc
Include headers normally (#include <stdio.h>); on SH platforms where sh-elf-gcc is used, link with -lc (by default the -nostdlib flag is used for a number of reasons).
If you're installing in a custom folder, you also need -I "$PREFIX/include" and -L "$PREFIX/lib". If you're installing in the GCC install folder, you don't need -I but you still need the -L, as the default location for libraries is at the root instead of in a lib subfolder.
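As a concrete illustration of those flags, here is a hedged sketch of the link line for the custom-folder case. The file names and paths are hypothetical placeholders, not part of FxLibc itself; adjust PREFIX to wherever you installed.

```shell
# Hypothetical link line for a custom-PREFIX install of FxLibc.
# sh-elf-gcc and main.c are placeholders; adjust PREFIX to your setup.
PREFIX="$HOME/.sh-prefix"
LINK_CMD="sh-elf-gcc -nostdlib -I $PREFIX/include main.c -L $PREFIX/lib -lc -o main.elf"
echo "$LINK_CMD"
```

When installing into the GCC folder instead, drop the -I but keep the -L, as noted above.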
Licences
This work is licensed under a CC0 1.0 Universal License; see the LICENSE file for a copy of the license.
FxLibc also includes third-party code that is distributed under its own license. Currently, this includes:
- A stripped-down version of the TinyMT random number generator (GitHub repository) by Mutsuo Saito and Makoto Matsumoto. See 3rdparty/tinymt32/LICENSE.txt.
- A stripped-down version of the Grisu2b floating-point representation algorithm with α=-59 and γ=-56, by Florian Loitsch. See 3rdparty/grisu2b_59_56/README for details, and the original code here.
Special thanks to
- Lephenixnoir - For all <3
- Kristaba - For the idea with the shared libraries workaround!
https://gitea.planet-casio.com/Vhex-Kernel-Core/fxlibc/src/commit/dbfefe51727544e0e791a1cd9785716af181a169
A cron job scheduler for Go
ticktock is a cron job scheduler that allows you to define and run periodic jobs written in Golang. ticktock also optionally provides automatic job retry if the job has failed with an error. ticktock supports delayed and repeating jobs.
Note: Work in progress, don't use it on prod yet.
// Schedule a job to email reminders once in every 3 mins 10 secs.
ticktock.Schedule("email-reminders", job, &t.When{Each: "3m10s"})
ticktock.Start()
go get and import the ticktock package.
import "github.com/rakyll/ticktock"
The jobs you would like to schedule need to implement the ticktock.Job interface by providing a Run method. The following example is a sample job that prints the given message.
type PrintJob struct {
    Msg string
}

func (j *PrintJob) Run() error {
    fmt.Println(j.Msg)
    return nil
}
Once you've defined a Job, you need to schedule an instance of the defined job and start the scheduler. Each registered job should have a unique name, otherwise an error will be returned.
// Prints "Hello world" once every second
err := ticktock.Schedule(
    "print-hello",
    &PrintJob{Msg: "Hello world"},
    &t.When{Every: t.Every(1).Seconds()})
If the scheduler has already been started, the job will automatically be scheduled to run. Otherwise, it will wait for the scheduler to be started. The scheduler can be started with the following line.
// Typically, you schedule all jobs here and start the scheduler
ticktock.Start()
Not all scheduled jobs need to run periodically. You can also schedule a job to run only once at a given time. Here, "Hello world" will be printed once on the next Sunday at 12:00.
ticktock.Schedule(
    "print-hello-once",
    &PrintJob{Msg: "Hello world"},
    &t.When{Day: t.Sun, At: "12:00"})
The scheduler provides automatic retry on job failures. To configure a retry count, schedule the job with additional options providing a retry count. In the following case, we schedule the print job to be retried 2 times if it fails. (In this sample, the job will never be retried, because Run always returns nil.)
// Prints "Hello hi" once every week, on Saturday at 10:00
ticktock.ScheduleWithOpts(
    "print-hi",
    &PrintJob{Msg: "Hello hi"},
    &t.Opts{RetryCount: 2,
        When: &t.When{Every: &t.Every(1).Weeks(), Day: t.Sat, At: "10:00"}})
Use the unique name to cancel a job. If the job is currently running, the scheduler will wait for it to complete and cancel the future runs.
// print-hi job will not run again
ticktock.Cancel("print-hi")
This section provides some valid interval samples.
// Every 2 minutes
t.When{Each: "2m"}

// Every 100 milliseconds
t.When{Every: t.Every(100).Milliseconds()}

// Every hour at :30
t.When{Every: t.Every(1).Hours(), At: "**:30"}

// Every day at the next beginning of an hour :00
t.When{Every: t.Every(1).Days(), At: ":00"}

// Every 2 weeks on Saturdays at 10:00
t.When{Every: &t.Every(2).Weeks(), On: t.Sat, At: "10:00"}

// Saturday at 15:00, not repeated
t.When{Day: t.Sat, At: "15:00"}

// Every week on Sun at 11:00, last run was explicitly given.
// If your process shuts down at 10:00 on Sunday, it allows the scheduler
// to schedule the job to run in an hour on an immediate restart.
t.When{LastRun: lastRun, Every: &t.Every(1).Weeks(), On: t.Sun, At: "10.
https://xscode.com/rakyll/ticktock
Solving the Ising Model using a Mixed Integer Linear Program Solver (Gurobi)
I came across an interesting thing, that finding the minimizer of the Ising model is encodable as a mixed integer linear program.
The Ising model is a simple model of a magnet. A lattice of spins that can either be up or down. They want to align with an external magnetic field, but also with their neighbors (or anti align, depending on the sign of the interaction). At low temperatures they can spontaneously align into a permanent magnet. At high temperatures, they are randomized. It is a really great model that contains the essence of many physics topics.
Linear Programs minimize linear functions subject to linear equality and inequality constraints. It just so happens this is a very solvable problem (polynomial time).
MILPs also allow you to add the constraint that variables take on integer values. This takes you into NP territory. Through fiendish tricks, you can encode very difficult problems. MILP solvers use LP solvers as subroutines, giving them clues where to search, letting them stop early if the LP solver returns integer solutions, or bounding branches of the search tree.
How this all works is very interesting (and very, very roughly explained), but barely matters practically since other people have made fiendishly impressive implementations of this that I can’t compete with. So far as I can tell, Gurobi is one of the best available implementations (Hans Mittelmann has some VERY useful benchmarks here), and they have a gimped trial license available (2000 variable limit. Bummer.). Shout out to CLP and CBC, the COIN-OR open source versions of this that still work pretty damn well.
Interesting Connection: Quantum Annealing (like the D-Wave machine) is largely based around mapping discrete optimization problems to an Ising model. We are traveling that road in the opposite direction.
So how do we encode the Ising model?
Each spin is a binary variable $ s_i \in \{0,1\}$
We also introduce a variable for every edge, which we will constrain to actually be the product of the spins: $ e_{ij} \in \{0,1\}$. This is the big trick.
We can compute the And/Multiplication (they coincide for 0/1 binary variables) of the spins using a couple of linear constraints. One can check that this works for all 4 cases of the two spins.
$ e_{ij} \ge s_i +s_j - 1$
$ e_{ij} \le s_j $
$ e_{ij} \le s_i $
The xor is usually what we care about for the Ising model: we want aligned vs unaligned spins to have different energy. It will have value 1 if they are anti-aligned and 0 if they are aligned. This is a linear function of the spins and the And.
$ s_i \oplus s_j = s_i + s_j - 2 e_{ij}$
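The four cases are easy to sanity-check by brute force. This standalone script (my own, not from the post) enumerates every $(s_i, s_j)$ pair and confirms that the only $e$ satisfying the three inequalities is the product, and that the xor expression then matches $s_i \oplus s_j$:

```python
from itertools import product

# Enumerate all binary assignments and check the linearization.
for si, sj in product([0, 1], repeat=2):
    feasible = [e for e in (0, 1)
                if e >= si + sj - 1 and e <= si and e <= sj]
    assert feasible == [si * sj], (si, sj, feasible)   # AND is forced
    e = si * sj
    assert si + sj - 2 * e == si ^ sj                  # xor expression matches
print("all 4 cases check out")
```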
Then the standard Hamiltonian is
$ H=\sum B_i s_i + \sum J_{ij} (s_i + s_j - 2 e_{ij})$
Well, modulo some constant offset. You may prefer making spins $ \pm 1$, but that leads to basically the same Hamiltonian.
The Gurobi python package actually let’s us directly ask for AND constraints, which means I don’t actually have to code much of this.
We are allowed to use spatially varying external field B and coupling parameter J. The Hamiltonian is indeed linear in the variables as promised.
After already figuring this out, I found this chapter where they basically do what I’ve done here (and more probably). There is nothing new under the sun. The spatially varying fields B and J are very natural in the field of spin glasses.
For a while I thought this is all we could do, find the lowest energy solution, but there’s more! Gurobi is one of the few solvers that support iteration over the lowest optimal solutions, which means we can start to talk about a low energy expansion.
Here we’ve got the basic functionality. Getting 10,000 takes about a minute. This is somewhat discouraging when I can see that we haven’t even got to very interesting ones yet, just single spin and double spin excitations. But I’ve got some ideas on how to fix that. Next time baby-cakes.
(A hint: recursion with memoization leads to some brother of a cluster expansion.)
from gurobipy import *
import matplotlib.pyplot as plt
import numpy as np

# Create a new model
m = Model("mip1")
m.Params.PoolSearchMode = 2
m.Params.PoolSolutions = 10000

# Create variables
N = 10
spins = m.addVars(N, N, vtype=GRB.BINARY, name='spins')
links = m.addVars(N - 1, N - 1, 2, vtype=GRB.BINARY, name='links')
xor = {}
B = np.ones((N, N))  # or np.random.randn(N, N) for a spin glass
J = 1.  # antialigned

# Build the Hamiltonian: AND constraints for the links, xor expressions
# for the bond energies. (Periodic boundary terms omitted here.)
H = 0.
for i in range(N - 1):
    for j in range(N - 1):
        m.addGenConstrAnd(links[i, j, 0], [spins[i, j], spins[i + 1, j]], "andconstr")
        m.addGenConstrAnd(links[i, j, 1], [spins[i, j], spins[i, j + 1]], "andconstr")
        xor[i, j, 0] = spins[i, j] + spins[i + 1, j] - 2 * links[i, j, 0]
        xor[i, j, 1] = spins[i, j] + spins[i, j + 1] - 2 * links[i, j, 1]
        H += J * xor[i, j, 0] + J * xor[i, j, 1]

# External field term
for i in range(N):
    for j in range(N):
        H += B[i, j] * spins[i, j]

# Set objective and solve
m.setObjective(H, GRB.MINIMIZE)
m.optimize()

print('Obj:', m.objVal)
print('Solcount:', m.SolCount)
for i in range(m.SolCount):
    m.Params.SolutionNumber = i  # select a pool solution
    print("sol val:", m.Xn)
    print("sol energy:", m.PoolObjVal)

# Plot the last selected solution
ising = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        ising[i, j] = spins[i, j].Xn
plt.matshow(ising)
plt.show()
Here’s the ground antiferromagnetic state. Cute.
https://www.philipzucker.com/solving-the-ising-model-using-a-mixed-integer-linear-program-solver-gurobi/
import "golang.org/x/crypto/blake2s"
Package blake2s implements the BLAKE2s hash algorithm defined by RFC 7693 and the extendable output function (XOF) BLAKE2Xs.
For a detailed specification of BLAKE2s, see RFC 7693; BLAKE2Xs is described in the BLAKE2X paper.
If you aren't sure which function you need, use BLAKE2s (Sum256 or New256). If you need a secret-key MAC (message authentication code), use the New256 function with a non-nil key.
BLAKE2X is a construction to compute hash values larger than 32 bytes. It can produce hash values between 0 and 65535 bytes.
blake2s.go blake2s_amd64.go blake2s_generic.go blake2x.go register.go
const ( // The blocksize of BLAKE2s in bytes. BlockSize = 64 // The hash size of BLAKE2s-256 in bytes. Size = 32 // The hash size of BLAKE2s-128 in bytes. Size128 = 16 )
OutputLengthUnknown can be used as the size argument to NewXOF to indicate that the length of the output is not known in advance.
New128 returns a new hash.Hash computing the BLAKE2s-128 checksum given a non-empty key. Note that a 128-bit digest is too small to be secure as a cryptographic hash and should only be used as a MAC, thus the key argument is not optional.
New256 returns a new hash.Hash computing the BLAKE2s-256 checksum. A non-nil key turns the hash into a MAC. The key must be between zero and 32 bytes long.
Sum256 returns the BLAKE2s-256 checksum of the data.
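The Go package isn't required to experiment with the algorithm itself: Python's hashlib ships the same RFC 7693 BLAKE2s, which makes the hash-vs-keyed-MAC distinction easy to see. This illustrates the algorithm, not the Go API.

```python
import hashlib

data = b"some message"

# Plain BLAKE2s-256 hash (analogous to Go's blake2s.Sum256)
digest = hashlib.blake2s(data).hexdigest()
print(len(digest))  # 64 hex chars = 32 bytes

# Keyed mode turns BLAKE2s into a MAC (analogous to New256 with a key);
# the key may be up to 32 bytes, matching the Go documentation above.
mac = hashlib.blake2s(data, key=b"secret key").hexdigest()
assert mac != digest  # keying changes the output
```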
type XOF interface {
    // Write absorbs more data into the hash's state. It panics if called
    // after Read.
    io.Writer

    // Read reads more output from the hash. It returns io.EOF if the limit
    // has been reached.
    io.Reader

    // Clone returns a copy of the XOF in its current state.
    Clone() XOF

    // Reset resets the XOF to its initial state.
    Reset()
}
XOF defines the interface to hash functions that support arbitrary-length output.
NewXOF creates a new variable-output-length hash. The hash either produces a known number of bytes (1 <= size < 65535), or an unknown number of bytes (size == OutputLengthUnknown). In the latter case, an absolute limit of 128GiB applies.
A non-nil key turns the hash into a MAC. The key must be between zero and 32 bytes long.
Package blake2s imports 5 packages and is imported by 11 packages. Updated 2017-11-30.
https://godoc.org/golang.org/x/crypto/blake2s
I've posted two proposals:
Proposes a mechanism for easily using adapters in TALES expressions.
I'm not at all clear on how the proposed mechanism is superior to the implementation of path segment prefixes that exists in Zope 2. This differs from the namespace proposal that you cite, in that the object named on the left of the colon is not a namespace or adapter. It is simply a registered name that is associated with code that interprets the text to the right of the colon. This allows constuctions such as "a/sequence/index:2" (a.sequence[2]), "a/bag/var:x/call:" (a.bag[x]()), and "an/object/adapt:foo.bar" (foo.bar(an.object)).
proposes a mechanism for qualifying names defined in TAL and used in TALES expressions.
This would of course conflict with prefixes as currently defined, and seems weakly motivated. I can imagine getting a similar effect with a more consistent syntax by allowing tal:define="x/y foo" when x supports an ITALESNamespace interface. The path "x/y/z" then does exactly what the user would expect. A namespace "x" can be created without any new TAL syntax by providing a built-in constructor, as in tal:define="x namespace", that creates an empty dictionary-like object with support with ITALESNamespace.
Cheers,
Evan @ 4-am
_______________________________________________
Zope-Dev maillist - [EMAIL PROTECTED]
** No cross posts or HTML encoding! **
https://www.mail-archive.com/zope-dev@zope.org/msg16501.html
A programmer’s cleaning guide for messy sensor data
If you have never used Pandas before and know the basics of Python, this tutorial is for you.
In this tutorial, I'll explain how to use Pandas and Python to work with messy data. If you have never used Pandas before and know the basics of Python, this tutorial is for you. Weather data is a good real-world example of a messy dataset. It comes with mixed content, irregular dropouts, and time zones, all of which are common pain points for data scientists. I'll go through examples of how to deal with mixed content and irregular dropouts. To work with odd time zones, refer to Mario Corchero’s excellent post, How to work with dates and time with Python.
Let’s start from scratch and turn a messy file into a useful dataset. Entire source code is available on GitHub.
Reading a CSV file
You can open a CSV file in Pandas with the following:
- pandas.read_csv(): Opens a CSV file as a DataFrame, like a table.
- DataFrame.head(): Displays the first 5 entries.
DataFrame is like a table in Pandas; it has a set number of columns and indices. CSV files are great for DataFrames because they come in columns and rows of data.
import pandas as pd
# Open a comma-separated values (CSV) file as a DataFrame
weather_observations = \
pd.read_csv('observations/Canberra_observations.csv')
# Print the first 5 entries
weather_observations.head()
Looks like our data is actually tab-separated by \t. There are interesting items in there that look to be time.
pandas.read_csv() provides versatile keyword arguments for different situations. Here you have a column for Date and another for Time. You can introduce a few keyword arguments to add some intelligence:
- sep: The separator between columns
- parse_dates: Treat one or more columns like dates
- dayfirst: Use DD.MM.YYYY format, not month first
- infer_datetime_format: Tell Pandas to guess the date format
- na_values: Add values to treat as empty
Use these keyword arguments to pre-format the data and let Pandas do some heavy lifting.
# Supply pandas with some hints about the file to read
weather_observations = \
pd.read_csv('observations/Canberra_observations.csv',
sep='\t',
parse_dates={'Datetime': ['Date', 'Time']},
dayfirst=True,
infer_datetime_format=True,
na_values=['-']
)
Pandas nicely converts two columns, Date and Time, to a single column, Datetime, and renders it in a standard format.
There is a NaN value here, not to be confused with the “not a number” floating point. It’s just Pandas' way of saying it’s empty.
Sorting data in order
Let’s look at ways Pandas can address data order.
- DataFrame.sort_values(): Rearrange in order.
- DataFrame.drop_duplicates(): Delete duplicated items.
- DataFrame.set_index(): Specify a column to use as index.
Because the time seems to be going backward, let’s sort it:
# Sorting is ascending by default, or chronological order
sorted_dataframe = weather_observations.sort_values('Datetime')
sorted_dataframe.head()
Why are there two midnights? It turns out our dataset (raw data) contains midnight at both the end and the beginning of each day. You can discard one as a duplicate since the next day also comes with another midnight.
The logical order here is to sort the data, discard the duplicates, and then set the index:
# Sorting is ascending by default, or chronological order
sorted_dataframe = weather_observations.sort_values('Datetime')
# Remove duplicated items with the same date and time
no_duplicates = sorted_dataframe.drop_duplicates('Datetime', keep='last')
# Use `Datetime` as our DataFrame index
indexed_weather_observations = \
no_duplicates.set_index('Datetime')
indexed_weather_observations.head()
Now you have a DataFrame with time as its index, which will come in handy later. First, let’s transform wind directions.
Transforming column values
To prepare wind data for weather modelling, you need the wind values in a numerical format. By convention, north wind (↓) is 0 degrees, going clockwise ⟳. East wind (←) is 90 degrees, and so on. You will leverage Pandas to transform:
- Series.apply(): Transforms each entry with a function.
To work out the exact value of each wind direction, I wrote a dictionary by hand since there are only 16 values. This is tidy and easy to understand.
# Translate wind direction to degrees
wind_directions = {
'N': 0. , 'NNE': 22.5, 'NE': 45. , 'ENE': 67.5 ,
'E': 90. , 'ESE': 112.5, 'SE': 135. , 'SSE': 157.5 ,
'S': 180. , 'SSW': 202.5, 'SW': 225. , 'WSW': 247.5 ,
'W': 270. , 'WNW': 292.5, 'NW': 315. , 'NNW': 337.5 }
You can access a DataFrame column, called a Series in Pandas, by an index accessor like you would with a Python dictionary. After the transformation, the Series is replaced by new values.
# Replace wind directions column with a new number column
# `get()` accesses values from the dictionary safely
indexed_weather_observations['Wind dir'] = \
indexed_weather_observations['Wind dir'].apply(wind_directions.get)
# Display some entries
indexed_weather_observations.head()
Each one of the valid wind directions is now a number. It doesn’t matter if the value is a string or another kind of number; you can use Series.apply() to transform it.
Setting index frequency
Digging deeper, you find more flaws in the dataset:
# One section where the data has weird timestamps ...
indexed_weather_observations[1800:1805]
00:33:00? 01:11:00? These are odd timestamps. There is a function to ensure a consistent frequency:
- DataFrame.asfreq(): Forces a specific frequency on the index, discarding and filling the rest.
# Force the index to be every 30 minutes
regular_observations = \
indexed_weather_observations.asfreq('30min')
# Same section at different indices since setting
# its frequency :)
regular_observations[1633:1638]
Pandas discards any indices that don’t match the frequency and adds an empty row if one doesn’t exist. Now you have a consistent index frequency. Let’s plot it to see how it looks with matplotlib, a popular plotting library:
import matplotlib.pyplot as plt
# Make the graphs a bit prettier
pd.set_option('display.mpl_style', 'default')
plt.rcParams['figure.figsize'] = (18, 5)
# Plot the first 500 entries with selected columns
regular_observations[['Wind spd', 'Wind gust', 'Tmp', 'Feels like']][:500].plot()
Looking closer, there seem to be gaps around Jan 6th, 7th, and more. You need to fill these with something meaningful.
Interpolate and fill empty rows
To fill gaps, you can linearly interpolate the values, or draw a line from the two end points of the gap and fill each timestamp accordingly.
- Series.interpolate(): Fill in empty values based on index.
Here you also use the inplace keyword argument to tell Pandas to perform the operation and replace itself.
# Interpolate data to fill empty values
for column in regular_observations.columns:
regular_observations[column].interpolate('time', inplace=True, limit_direction='both')
# Display some interpolated entries
regular_observations[1633:1638]
NaN values have been replaced. Let’s plot it again:
# Plot it again - gap free!
regular_observations[['Wind spd', 'Wind gust', 'Tmp', 'Feels like']][:500].plot()
Congratulations! The data is now ready to be used for weather processing. You can download the example code on GitHub and play with it.
Conclusion
I've shown how to clean up messy data with Python and Pandas in several ways, such as:
- reading a CSV file with proper structures,
- sorting your dataset,
- transforming columns by applying a function
- regulating data frequency
- interpolating and filling missing data
- plotting your dataset
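Put together, the whole pipeline fits in a few lines. Here's a self-contained sketch on a tiny made-up dataset (the column names and values are invented for illustration, not taken from the weather file):

```python
import pandas as pd

# Invented toy data: one duplicate timestamp and one missing value.
raw = pd.DataFrame({
    'Datetime': pd.to_datetime([
        '2017-01-01 01:00', '2017-01-01 00:00',
        '2017-01-01 00:00', '2017-01-01 02:00']),
    'Tmp': [20.0, 18.0, 18.5, None],
})

tidy = (raw.sort_values('Datetime')                   # chronological order
           .drop_duplicates('Datetime', keep='last')  # one row per timestamp
           .set_index('Datetime')                     # time-based index
           .asfreq('30min'))                          # regular 30-minute grid
tidy['Tmp'] = tidy['Tmp'].interpolate('time', limit_direction='both')
print(tidy)  # five rows, gap-free
```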
Pandas offers many more powerful functions, which you can find in the documentation, and its excellent 10-minute introduction. You might find a few gems in there. If you have questions or thoughts, feel free to reach me on Twitter at @Xavier_Ho.
Happy data cleaning!
More resources
- SciPy Interpolate: More than just linear interpolation to fill your datasets.
- XArray and Pandas: Working with datasets bigger than your system memory? Start here.
- Visualising Data with Python: Talk video by Clare Sloggett at PyCon AU 2017.
3 Comments
Very nice article!! I'm starting to see why people like Python. I haven't warmed up to it yet but I'm getting there.
Hi, I am an editor of InfoQ China site. Could we translate your article in Chinese and publish it to the InfoQ China site? We will provide a link back to this original article. Thank you.
Hi Carol, yes you can. Please let me know when the link is up so I can share it on Twitter!
https://opensource.com/article/17/9/messy-sensor-data
FRUITFUL FUNCTIONS
Eg:
def my_func():
x = 10
print("Value inside function:",x)
x = 20
my_func()
print("Value outside function:",x)
Output:
Value inside function: 10
Value outside function: 20
Local Scope and Local Variables
A local variable is a variable that is only accessible from within a given function. Such variables are said to have local scope .
A global variable is a variable that is defined outside of any function definition. Such variables are said to have global scope .
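A minimal illustration of both scopes (the variable names here are my own, not from the text):

```python
x = "global"          # global scope: defined outside any function

def show_scopes():
    y = "local"       # local scope: exists only while show_scopes runs
    return x, y       # the global x is readable from inside the function

print(show_scopes())  # ('global', 'local')
# Accessing y out here would raise NameError: it is local to show_scopes.
```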
Variable max is defined outside func1 and func2 and is therefore "global" to each.
https://www.brainkart.com/article/Fruitful-Functions---Python_35972/
Data Points - Run EF Core on Both .NET Framework and .NET Core
By Julie Lerman | October 2016 | Get the Code: C# VB
The technology formerly known as Entity Framework 7 (EF7) was renamed to Entity Framework Core (EF Core) in early 2016. EF Core 1.0.0 introduces some great new capabilities, though overall it does have a smaller feature set than EF6. But this doesn’t mean EF runs only on .NET Core. You can use EF Core in APIs and applications that require the full .NET Framework, as well as those that target only the cross-platform .NET Core. In this column, I’ll walk you through two projects that explore these options. My goal is to alleviate any worries the “Core” moniker might imply: that EF Core only runs on .NET Core. At the same time, I’ll explain the steps involved in creating each solution.
EF Core in Full .NET Projects
I’ll begin with a project that targets the full .NET Framework. Keep in mind that in Visual Studio 2015, the tooling requires that you have Visual Studio 2015 Update 3, as well as the latest Microsoft ASP.NET and Web Tools. At the time of writing this column (August 2016), the best guide for those installations is the documentation at bit.ly/2bte6Gu.
To keep my data access separate from whatever app will be using it, I’ll create it in its own class library. Figure 1 shows that there’s a template that specifically targets a .NET Core class library; but I’m selecting Class Library, the standard option that’s always been available for targeting .NET. The resulting project (Figure 2) is also “normal”—you can see that there are no project.json files or any .NET Core project assets. Everything looks just the way it always has.
Figure 1 Creating a Class Library for a Full .NET API
Figure 2 A Plain Old (and Familiar) .NET Class Library
So far, none of this is tied to EF in any way. I could choose EF6 or EF Core at this point, but I’ll add EF Core into the project. As always, I can use either the NuGet Package Manager to find and select EF Core or the Package Manager Console window. I’ll use the console. Remember that the “entityframework” package is for EF6. To get EF Core, you need to install one of the Microsoft.EntityFrameworkCore packages. I’ll use the SqlServer package, which will bring in what EF needs to communicate with SqlServer:
Because that package depends on the main Microsoft.EntityFrameworkCore package, as well as the Microsoft.EntityFrameworkCore.Relational package, NuGet will install those for me at the same time. And because the EF Core package depends on other packages, they’ll be installed, too. In all, this process adds the three EF Core packages, as well as 23 others from the newer, more composable .NET on which EF Core relies. Rather than fewer large packages, I get more small packages—but only those my software needs. These will all play well with the standard .NET libraries already in the project.
Next, I’ll add in a simple domain class (Samurai.cs) and a DbContext (SamuraiContext.cs) to let EF Core persist my data into a database, as shown in Figure 3. EF Core doesn’t have the magical connection string inference that EF6 has, so I have to let it know what provider I’m using and what connection string. For simplicity, I’ll stick that right in the DbContext’s new virtual method: OnConfiguring. I’ve also created a constructor overload to allow me to pass in the provider and other details as needed. I’ll take advantage of this shortly.
public class Samurai
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class SamuraiContext : DbContext
{
    public DbSet<Samurai> Samurais { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        if (optionsBuilder.IsConfigured == false)
        {
            optionsBuilder.UseSqlServer(
                @"Data Source=(localdb)\mssqllocaldb;Initial Catalog=EFCoreFullNet;
                  Integrated Security=True;");
        }
        base.OnConfiguring(optionsBuilder);
    }

    public SamuraiContext(DbContextOptions<SamuraiContext> options)
        : base(options) { }
}
Because I’m using the full .NET, which also means I’m targeting full-blown Windows, I have Windows PowerShell available. And this means I get to use the same migrations commands I’ve always used: add-migration, update-database and so forth. There are some new commands, as well, and you can check out my January 2016 column (msdn.com/magazine/mt614250) to learn all about the EF Core migrations commands. Also, remember I mentioned that packages are smaller and composable? Well, if I want to use migrations, I need to add in the package that contains these commands. As I’m writing this, the tools are still in preview mode so I need to use the -pre parameter. I’ll add that package, then I can add a new migration:
This works as it always has: it creates a new Migrations folder and the migration file, as shown in Figure 4. EF Core did change the way it stores model snapshots, which you can read about in the aforementioned January 2016 column.
Figure 4 EF Core Migrations in My Full .NET Class Library
With the migration in place, the update-database command successfully creates the new EFCoreFullNet database for me in SQL Server localdb.
Finally, I’ll add a test project to the solution from the same Unit Test Project template I’ve always used in Visual Studio. I’ll then add a reference to my EFCoreFullNet class library. I don’t need my test project to use the database to make sure EF Core is working, so rather than installing the SqlServer package, I’ll run the following NuGet command against the new test project:
The InMemory provider is a blessing for testing with EF Core. It uses in-memory data to represent the database and the EF cache, and EF Core will interact with the cache in much the same way it works with a database, adding, removing and updating data.
Remember that extra constructor I created in the SamuraiContext? The TestEFCoreFullNet tests, shown in Figure 5, take advantage of it. Notice that in the constructor of the test class, I created a DbContextOptions builder for the SamuraiContext and then specified it should use the InMemory provider. Then, in the method when I instantiate SamuraiContext, I pass in those options. The SamuraiContext OnConfiguring method is designed to check to see if the options are already configured. If so, it will use them (in this case, the InMemory provider); otherwise, it will move ahead with setting up to work with SqlServer and the connection string I hardcoded into the method.
using EFCoreFullNet;
using Microsoft.EntityFrameworkCore;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Linq;

namespace Tests
{
    [TestClass]
    public class TestEFCoreFullNet
    {
        private DbContextOptions<SamuraiContext> _options;

        public TestEFCoreFullNet()
        {
            var optionsBuilder = new DbContextOptionsBuilder<SamuraiContext>();
            optionsBuilder.UseInMemoryDatabase();
            _options = optionsBuilder.Options;
        }

        [TestMethod]
        public void CanAddAndUpdateSomeData()
        {
            var samurai = new Samurai { Name = "Julie" };
            using (var context = new SamuraiContext(_options))
            {
                context.Add(samurai);
                context.SaveChanges();
            }
            samurai.Name += "San";
            using (var context = new SamuraiContext(_options))
            {
                context.Samurais.Update(samurai);
                context.SaveChanges();
            }
            using (var context = new SamuraiContext(_options))
            {
                Assert.AreEqual("JulieSan", context.Samurais.FirstOrDefault().Name);
            }
        }
    }
}
This test method takes advantage of some specific EF Core features that don’t exist in EF6. I wrote about these and other change-tracking features in EF Core in my August 2016 Data Points column (msdn.com/magazine/mt767693). For example, after creating the new samurai object, I add it to the context using the DbContext.Add method, letting EF determine to which DbSet it needs to be tied. Then I save that to the data store, in this case some type of list in memory that the InMemory provider is managing. Next, I modify the samurai object, create a new instance of DbContext and use the new EF Core Update command to make sure SaveChanges will update the stored samurai rather than create a new one. Finally, I query the context for that samurai and use an Assert to ensure that the context does indeed return the updated name.
The particular features I’m using are not the point, however. The point is that I’m doing all of this work with EF Core in a “plain old .NET” project in Windows.
EF Core for CoreCLR: Same Code, Different Dependencies
I could stay in Windows and in Visual Studio 2015 Update 3 to next show you how I can use the same EF Core APIs, the same code and the same tests to target the CoreCLR runtime, but that looks too similar to targeting Windows. Therefore, I’ll go to the other extreme and create the CoreCLR variation on my MacBook, explaining the steps as I go through them.
.NET Core doesn’t rely on Windows or its tooling. Besides Visual Studio 2015, I could use … well, I suppose Emacs was the popular non-Visual Studio editor of old. However, there are some cross-platform IDEs I can pick from, not just for writing the code but also to get features like debugging and Git support. For example, in the August 2016 issue of MSDN Magazine, Alessandro Del Sole walked through building an ASP.NET Core Web site using Visual Studio Code (msdn.com/magazine/mt767698). I could see from his screenshots that he was in Windows, but otherwise, the experience is essentially the same on a Mac.
Another cross-platform option is Rider from JetBrains. Rider is designed specifically for C# and the best way to describe it is “ReSharper in its own IDE.”
I’ve already been using Visual Studio Code in Windows and OS X (not just for C#, but also for Node.js) and that’s what I’ll use to show you EF Core in an app built to target CoreCLR. In fact, because I’m building this solution in OS X, targeting CoreCLR is my only option. The array of available APIs for my library is more limited. However, EF Core is the same set of APIs as when I used it in the full .NET library in the first project.
As you’ll see, most of the effort will be in setting up the projects and the dependencies that are specific to targeting CoreCLR. But I can use the same SamuraiContext class to define my EF Core data model and the same CanAddAndUpdateSomeData test method from the previous project to do the same work. The code is the same even though I’m now targeting the more limited runtime and working in an environment that can’t use anything but .NET Core.
Creating a Library Similar to the .NET Class Library
I’ve created a folder to contain both my Library and the Test projects, with subfolders for each project. Inside the Library subfolder, I can call dotnet new to create the Library project. Figure 6 shows that command, along with a confirmation that the project was created. Listing the contents of the folder shows that only two files were created—most important, project.json, which contains the list of required NuGet packages and other relevant project details. The Library.cs file is just an empty class file that I’ll delete.
Figure 6 Creating a New CoreCLR Library with the dotnet Command
Next, I’ll open this new library project in Visual Studio Code. I can just type “code” at the prompt. Visual Studio Code opens with this as the target folder, automatically recognizes the packages listed in the json file and offers to run dotnet restore to fix up the unresolved dependencies. I happily accept the offer.
The project.json file looks like the code in Figure 7.
Pretty simple. Libraries don’t need all of the ASP.NET Core stuff I’m used to using, just the NETStandard Library. The .NET Standard Library encapsulates what’s common across the various places .NET can now run. From the .NET Standard documentation (bit.ly/2b1JoHJ), “The .NET Standard Library is a formal specification of .NET APIs that are intended to be available on all .NET runtimes.” So this library I’m building can be used with .NET Core and ASP.NET Core and even .NET applications starting with .NET 4.5. You can see a compatibility grid on the documentation page.
My next step is to add EF Core to the project. Keep in mind that because I’m on a Mac, SqlServer isn’t an option. I’ll use the PostgreSQL provider for EF Core instead, which goes in the currently empty dependencies section of project.json:
As before, I plan to do migrations. Normally, I’d also add a dependency to the Microsoft.EntityFrameworkCore.Tools package that contains the commands, just as I did for the full .NET version of this library. But because of a current limitation with the Preview 2 tooling, I’ll postpone this until a later step in the process. However, I do need to be able to access the commands from this library’s folder, so I add the package into a special “tools” section of project.json, as the preceding code shows.
Restoring the packages pulls in not only these two packages, but their dependencies, as well. If you check in the project.lock.json file that’s created, you can see all of the packages, including Microsoft.EntityFrameworkCore and Microsoft.EntityFrameworkCore.Relational—the same packages you saw added into the earlier .NET solution.
Now I’ll just copy in my Samurai.cs and SamuraiContext.cs files. I have to change the OnConfiguring method to use PostgreSQL and its connection string instead of SQL Server. This is what that bit of code now looks like:
It should be time to run the migrations, but here you’ll run into a known limitation of the current Preview 2 version of the EF Core tools outside of Visual Studio, which is that an executable project is required to find critical assets. So, again, it’s a bit of a pain on first encounter, but not too much in the way of extra effort. Read more about that at bit.ly/2btm4OW.
Creating the Test Project
I’ll go ahead and add in my test project, which I can then use as my executable project for the migrations. Back at the command line, I go to the Oct2016DataPointsMac/Test subfolder I created earlier and run:
In Visual Studio Code, you’ll see the new project.json listed in the Test folder. Because this project will be responsible for making sure the EF command lines can run, you have to add a reference to the EF Core Tools packages into the dependencies. Additionally, the test project needs a reference to the Library, so I’ve also added that into the project.json dependencies. Here’s the dependencies section after these additions:
Now I can access the EF Core commands from my Library folder. Notice that in Figure 8 the command points to the project in the Test folder with the --startup-project parameter. I’ll use that with each of the migrations commands.
Figure 8 Leveraging an Executable Project to Enable a Library to Use the EF Migrations Commands
Running Migrations against the EF Model from the .NET Library
Remember, as I laid out in my column on EF Core migrations, the dotnet ef migrations commands look different from the PowerShell commands, but they lead to the same logic in the migrations API.
First I’ll create the migration with:
This gives the same result as in the .NET solution: a new Migrations folder with the migration and the migration snapshot added.
Now I can create the database with:
I then verified that the PostgreSQL database, tables and relationships got created. There are a number of tools you can use in OS X to do this. On my Mac, I use the JetBrains cross-platform DataGrip as my database IDE.
Running the Tests in CoreCLR
Finally, I copy the TestEFCoreFullNet class from the earlier solution into my Test folder. Again, I have to make infrastructure changes to use xUnit instead of MS Test: a few namespace changes, removing the TestClass attribute, replacing TestMethod attributes with Fact and replacing Assert.AreEqual with Assert.Equal. Oh and, of course, I rename the class to TestEFCoreCoreClr.
Project.json also needs to know about the InMemory provider, so I add:
to the dependencies section, as well, then run “dotnet restore” yet again.
My xUnit test project uses the xUnit command-line test runner. So I’m back to my terminal window to run the tests with the command dotnet test. Figure 9 shows the output of running the test, which passed with flying colors—except the command-line test runner doesn’t provide the satisfying green output for passing tests.
Figure 9 xUnit Test Output of the Passing Test
.NET or CoreCLR: Same APIs, Same Code
So now you can see that the code and assemblies related to EF Core are the same whether you target your software to run solely on Windows with the full .NET Framework at your disposal or on CoreCLR on any of the supported environments (Linux, OS X, Windows). I could’ve done both demonstrations in Visual Studio 2015 on my Windows machine. But I find that focusing the CoreCLR work in an environment that’s completely unavailable to the full .NET Framework is an eye-opening way of demonstrating that the EF APIs and my EF-related code are one and the same in both places. The big differences, and all of the extra work, are only related to the target platforms (.NET vs CoreCLR). You can watch me creating a full ASP.NET Core Web API using EF Core 1.0.0 on my MacBook in the video, “First Look at EF Core 1.0” (bit.ly/PS_EFCoreLook). For an abbreviated and entertaining demo of the same, check out the video of my DotNetFringe session at bit.ly/2ci7q0T. juliel.me/PS-Videos.
Thanks to the following Microsoft technical expert for reviewing this article: Jeff Fritz
Jeffrey T. Fritz is a senior program manager on Microsoft’s ASP.NET team working on Web Forms and ASP.NET Core. As a long-time web developer with experience in large and small applications across a variety of verticals, he knows how to build for performance and practicality.
Data Points - Run EF Core on Both .NET Framework and .NET Core t...
Oct 3, 2016
https://msdn.microsoft.com/magazine/mt742867
I reject Lauren Weinstein's notion that an organization must register its name in every tld. The domain name system works just fine whether delta.com goes to Delta faucets or Delta airlines, and consumers are not confused by this. To quote the old chestnut coined by domain name attorney John Berryhill, "nobody turns on the tap and expects an airline schedule".
The trademark lobby has spent untold tens if not hundreds of millions of dollars blocking the development of new top level domains, for there is a fundamental clash between trademark space and domain space.
Trademarks are granted for a specific set of goods and services within a specific geographical area, whereas domain names are globally unique. This is the real and actual cause of the friction that impedes the development of new tlds. They may win the battle, but ultimately they will lose the war, and ICANN's concessions to them put the very organization itself at risk. Justice delayed is justice denied.
There have always been alternatives to the ICANN root servers that define the top level of the domain name hierarchy. ICANN itself recognizes its own subset of alternative roots, and alternative roots exist outside the ICANN system, and these predate ICANN by a few years.
Recently other initiatives have appeared on the scene: Lauren Weinstein has begun an effort () visible at to replace ICANN.
The Pirate Party is now experimenting with a peer-to-peer name resolution system to replace the central authority single-point-of-failure model that is icann. Note for example works for those that know how to use it and can not be taken down by anyone but the site owner.
And out in left field there is which few people understand. Keep in mind "Zoom" is the brand of cable modems that are near ubiquity. Standards sometimes emerge from an installed base, not a standards making process.
We live in interesting.times!
It's nice to see my distinguished colleagues Messrs. McCarthy and van Couvering chime in, being entertaining and informative as usual but of course it must be understood they are part of the ICANN ecosystem (as is Alexa Raad) and have more or less a vested interest in the positive outcome of the new top level debate which continues to move forward albeit at a glacial pace. There's a saying in the newtld industry I am apparently credited with "new top level domains are two years away from whenever you ask" and in light of the recent rather stern letter from ICANN's overlords at the department of (dot)Commerce I've upgraded this from two, to three.
There is quite a long history behind this movement, going back to 1996 when the National Science Foundation instructed its contractor, Network Solutions to begin charging for domain names; up to that point they had been free.
The software that does the overwhelming majority of the work in translating domain names to the IP addresses computers can actually use (domains are for human convenience, nothing more) was written at the Digital Western Research Labs in the 1980s by Paul Vixie. Brian Reid was the director of the lab (until it was closed by Compaq when they bought them over a decade later) and funded this development and said of the "domain name mess" in 1988 "I feel like a dork paying for my domain names but don't know what to do about it".
One of the many problems ICANN has is that "when all you have is a hammer, everything looks like a nail" - and with .COM being the unquestionable market leader, dollar signs flash before everybody's eyes and the perception exists that any fool with $2M to spend (the $185K is just for starters) can be the next Network Solutions/Verisign, who own .com and .net.
But that way lies folly; of course, in general it's not possible to generalize. There does not exist one set of rules that works perfectly well for all cases, and all tlds are not created equal. .com is not the same as .int, which is not the same as .arpa, the latter being the tld that would show up most often on a network traffic analyzer, yet few people have ever heard of it.
The idea of cloning .com is a very old one, and it was a letter from Jon Postel in 1996 to Rick Adams, then of uunet, inviting him to help clone .com that was the call to arms for the new tld movement. Jon's vision was to create hundreds of new top level domains, at least 300, and to create 150 that year. This was at a time when the 250 "country code top level domains" such as .uk, .de, .se and others, already existed. In fact ICANN was created with a mandate from the US government that created and oversees it to do three things: 1) devolve the Network Solutions .com monopoly (done) 2) do something about the collision between domain and trademark space (hello UDRP, done) and 3) create new top level domains.
It is noteworthy to point out the first two were completed within six months of ICANN's creation, yet here we sit twelve years later with the fantasy in some people's minds as to whether we need new top level domains. Sorry, but we've seen this movie already and like the push button phone, and the motor car, progress moves ever onward, and no, the telephone system will not melt and no, the horses will not really be frightened despite rabid assertions to the contrary.
So, assuming the domain name system will be with us for a while (and do keep in mind the web is a subset of the Internet, one of many many protocols and any discussion of Internet names must consider them all, not just the web) it is utterly inevitable that new top level domains will (albeit eventually) see the light of day.
I disagree vehemently with Lauren Weinstein's assertion that this increase in the namespace will make the net difficult to use. Certainly the rise in deployment from the first 7 names to the almost 300 top level domains we have today during the 1995-2005 timeframe didn't seem to cause too much fuss. I'd argue that "junk" sites in .com are the larger problem - "parked" domains that exist to simply capture traffic for the almighty ad click without providing any real value or original content.
And as I'm fond of reminding the US government when the occasion arises: "look on the bright side, half of .com *isn't* porn".
Just because there are 10,000 newspapers doesn't mean you have to read them all. Similarly so, no matter how many top level domains exist, it's not like you have a menu in front of you with 100 million names and have to read them all. Search engines do this, and any real (not perceived) problem that occurs due to a rabid plethora of names will be addressed, and properly so, at the application layer that is the search engine user interface. Just as the search engine was a new concept in the 90s there are perhaps new inventions as yet not thought of that will help users navigate an increasingly complex namespace.
(end pt. 1, hello 5000 char limit)
Where I do really agree with you is that domain names may play less of a role in the future. At least those hard to remember or with no chance to stand out.
Consolidation of many sites in one click is already on the way. Brief.ly does it much better than bit.ly or Facebook. Have a look on this article's compilation:
Also available as:
http://✯.ws/~dA
http://➸.ws/~dA
...
Choose any address you like. Still, the content will remain the king.
Out of all TLDs listed on the tree, two actual.ly do not exist: .adult and .movie (not yet at least). We have yet to see .xxx approved.
Then .museum is misspelled. Shame, a bit of editing would put more confidence that writers spent a while on the subject.
Your article seemed to imply that with a myriad other options to Top Level Domains, DNS will lose its significance. As a correction, the DNS and the Top Level Domains are not one and the same thing. The DNS is the routing mechanism. Top Level Domains are merely the human recognizable label. TLDs may lose their significance as many other labels or navigational aids are added to the arsenal of the user (think Quick Response codes, mobile apps, tiny URLs, FB pages etc) but the distributed, scalable and de-centralized architecture of the DNS is the genius plan that makes it possible. Yes, some of the many proposed TLDs will fail, as they struggle to find a meaningful reason why users should register them, build content and traffic behind them, and educate the end-user that they even exist. For example, many auto fill applications will not recognize the email addresses associated with some new TLDs, and that will cause frustration no doubt. But there may be a few who prevail, but those that do, must consider not only the value they may bring, but also improve upon what the other "substitutes" provide. And even though many of the substitutes for TLDs are technical substitutes, in other words, another way of labeling/accessing content that is easier or better (think mobile apps which navigate with a click) there are other considerations such as security and privacy. Can the market be defined by operators who make a distinction on who better preserves the privacy of the registrant? Or the security of the user? I believe it can.
Which brings me to my second, related point.
Your article is right about the perils of giving governments too much control, as in many societies that has led to censorship and deprivation of citizens from either accessing content or the right of self expression. If there is a positive side, it is this: so long as users have choice (to say register addresses other than those controlled by their state) there is still hope, that the service providers (TLD operators, registrars etc) outside the jurisdiction of the country in question will have the good sense not to cower under pressure. Of course, that has not always proven to be the case. But the more choices there are, the better the chances.
"And the naming systems of Facebook and other social networks are becoming more important."
Thanks for the laugh. Remember the AOL Keywords?
Anyway, as someone already wrote, companies have no problem paying hundreds of thousand$ for one commercial but refuse to pay for a top domain name they will have for "eternity."
The price of domain names and TLDs should maybe vary according to their length. .museum should be cheaper than .tel, and yahoo.com should be cheaper than msn.com. If both initial and annual charges vary, there would be less need for ICANN, as many of the current disagreements would find their solutions in who is willing to pay the most.
The meat of this article is really divorced from the title of it.
The relationship between "Routing it right" and control over TLD is really about how the world can be assured traffic to their servers are not subject to American whims.
The U.S. literally has her hands on the balls of everyone's Internet. They want to make sure theirs can't be squeezed.
Building on what Antony has said, the fact is that Facebook is found at Facebook.com and Twitter at Twitter.com.
If the history of the Internet has taught us anything it's that these companies come and go - look at GeoCities or MySpace or Friendster.
What the DNS gives everyone - including the new Googles and Facebooks is an ability to build their own world online and then entice people to it.
What top-level domains do is pull us away from the dot-com model that we have got so used to that a lot of people can't imagine the Internet as working any other way.
It is very likely that the next revolution will come as a result of the DNS being opened, not despite it.
The author's wish that domain names and the DNS become a "mere technicality," ceding its space to Facebook and similar ventures, misses the real reason why the DNS is important -- it assures some public control over important resources, and procedures that are subject to public scrutiny and have at least a goal of being fair.
I wonder if any of your readers have tried to recover their trademarked name from Twitter or Facebook. There are no clear procedures for doing so; no published guidelines, no appeals procedures. Basically these companies make their own decisions, based on criteria that they (and they alone) know and determine. This dynamic is true for other issues as well: freedom of speech, morality issues, privacy, etc.
A naming and addressing system run in the public interest, with policy proposed and developed by a multi-stakeholder community, is superior to a private company's profit-based decisions. This is what ICANN, for all its imperfections, provides. People should think long and hard before deciding that private companies should make global Internet policy.
Antony Van Couvering
CEO, Minds + Machines
A useful summary except for the last two paragraphs which are in cloud-cuckoo land.
I have no idea where you got Lauren Weinstein from but I've not seen her at any ICANN meetings for the past ten years. And the “domain-industrial complex” line of argument is embarrassingly bad for the Economist.
There's no shortage of people and information out there about new Internet extensions. A little more research would definitely have paid off.
Good article, but seriously lacks the reasons why this image driven society needs exclusive cyber brands; a small TV commercial can cost a million so what is the fuss on $187K for a gTLD, a device to ensure global cyber branding platform.
http://www.economist.com/node/17627815/comments?sort=1
The Samba-Bugzilla – Bug 6280
Linux mknod does not work when syncing fifos and sockets from Solaris
Last modified: 2009-04-26 13:49:49 UTC
When syncing specials, namely pipes and sockets from a Solaris host, rsync -a prints error 22. Remote version is 2.6.3pre1.
sk@noether(~/rsnyc/rsync-3.0.6pre1)> ./rsync -a klein:/tmp/xhyjxhy .
rsync: mknod "/home/sk/rsnyc/rsync-3.0.6pre1/xhyjxhy" failed: Invalid argument (22)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1505) [generator=3.0.6pre1]
Reason: rsync calls mknod ("filename", ..., 0xffffffff), which Linux rejects, as allowed by POSIX.
rsync should clear the third parameter before calling mknod.
Suggested patch in syscall.c, do_mknod:
#ifdef HAVE_MKNOD
	if (S_ISSOCK(mode) || S_ISFIFO(mode)) {
		/* this variable is not ignored by Linux 2.6. */
		dev = 0;
	}
	return mknod(pathname, mode, dev);
#else
Three additional comments:
1. Updating the Solaris side of things to rsync-3.06 does not help.
2. Pushing to instead of pulling from Linux does not help.
3. Reproduce the behaviour with this code:
#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main () {
    if (mknod ("test_fifo_fifi", 0777 | S_IFIFO, (dev_t)-1)) {
        perror ("mknod");
        return 1;
    }
    unlink ("test_fifo_fifi");
    return 0;
}
This works on Solaris, and does not work on Linux. The Linux docs are not completely clear as to if this should work.
Does Solaris allow mknod("test_fifo_fifi", 0777 | S_IFIFO, (dev_t)0) to work? If so, the fix I'm contemplating is setting rdev to 0 down in flist.c's make_file():
#ifdef HAVE_STRUCT_STAT_ST_RDEV
	if (IS_DEVICE(st.st_mode)) {
		tmp_rdev = st.st_rdev;
		st.st_size = 0;
	} else if (IS_SPECIAL(st.st_mode)) {
		tmp_rdev = 0;
		st.st_size = 0;
	}
#endif
I wonder if that would break any OS? Perhaps a safer change would be for that final tmp_rdev assignment to be:
tmp_rdev = st.st_rdev == (dev_t)-1 ? 0 : st.st_rdev;
Hi.
from Solaris mknod(2):
If mode does not indicate a
block special or character special device, dev is ignored.
Similar prose is in Linux mknod(2), but in the case of Solaris, it is actually true :-)
In short, dev=0 works on Linux and Solaris.
I'd prefer the fix being in the receiving end of rsync, that is, the one that does the actual mknod. This would maintain compatibility with legacy rsyncs on the remote host. Is make_file on the receiving end?
Greetings, Sebastian
Created attachment 4082 [details]
Changes to excise rdev use with special files
Here's the patch I came up with. It has the following features:
- Should avoid the error when talking to an older rsync version, regardless of the direction of transfer.
- Makes the transmission of special files a little more efficient (and will be made even more efficient in protocol 31, which is being created for 3.1.0).
- Saves memory by not remembering a useless rdev value for special files.
I looked at the Linux kernel code, and all the file systems I saw validate that the dev value is a valid device number before figuring out what kind of device/special-file is being created. So, the code above ensures that mknod() should always see a valid device number.
Fix checked into git.
I tried the Git version in all combinations of patched/unpatched and Solaris/Linux. It works beautifully, though I don't quite see the reason for a protocol change. Thank you very much for your help!
https://bugzilla.samba.org/show_bug.cgi?id=6280
...I still do not understand how to check a user's input to ensure that it is a numeric character. I want to use the isdigit() function and I read the FAQ regarding the syntax of this function, but either the FAQ is too ambiguous or I'm not grasping the concept. (Probably the latter). I know what the isdigit() function is supposed to do -- and it's exactly what I need for my program -- but writing the blasted thing is my problem. I need more than just:
#include <ctype.h>
int isdigit(int c);
to help me write the function.
Furthermore, I want this thing to work inside a loop that already checks the input to make sure it falls within the range of acceptable numbers. Now I need the isdigit() function to first make sure the input is a number and not a letter, then it can check to see if it is within the range of acceptable numbers.
Any help would be appreciated.
CaptainJack
http://cboard.cprogramming.com/c-programming/50663-i-read-faq-isdigit-but.html
Most Rails developers are familiar with generating RESTful URLs polymorphically by simply passing an object to one of many helper methods that expects a URL, such as link_to and form_for:
# In Routes:
resources :articles

# In View:
form_for @article do |f|
This capability extends beyond just single objects, supporting nested routes and specific action targeting; for example:
# In Routes:
namespace :admin do
  resources :categories do
    resources :articles
  end
end

# In View:
form_for [:admin, @category, @article] do |f|
Problem
One problem I’ve run into with some frequency, however, is using this polymorphic path approach when the class name of the ActiveRecord model does not quite correspond directly with the resource name. This occurs most frequently when you want a little more context in your model naming which may not be necessary in routes. For example, lets look a domain model with customers having many customer locations:
# Models:
class Customer < ActiveRecord::Base
  has_many :locations, :class_name => "CustomerLocation"
end

class CustomerLocation < ActiveRecord::Base
  belongs_to :customer
end

# Routes:
resources :customers do
  resources :locations
end
In the above example, the model name is "CustomerLocation", but the resource name as specified in the routes is just "locations", since the context of customers is already well-established from the nesting. The problem with this is when we try to use our regular polymorphic path solution:
form_for [@customer, @location]
# Tries to generate: customer_customer_location_path(@customer, @location)
Many people when running into this will just do away with using the clean polymorphic path solution entirely and instead provide the URL explicitly:
form_for @location, :url => customer_location_path(@customer, @location)
This of course works but isn't exactly ideal.
Solution
While it might look like it's using the class name to construct the url helper method name, it's in fact using "model_name" instead (which defaults to the class name). But, this can be overridden!
class CustomerLocation < ActiveRecord::Base
  def self.model_name
    ActiveModel::Name.new("Location")
  end
end
After this, the polymorphic path [@customer, @location] works as we would expect.
Another common situation where this technique becomes useful is if you are working extensively with namespaced models. This is tricky because the namespace ends up becoming part of the model name, which almost surely does not map to your resource hierarchy:
# Model:
class Core::Customer < Core::Base
end

# View:
form_for @customer
# Tries to generate: core_customer_path(@customer)
Overriding model_name will allow you to explicitly define "Customer" as the model name, despite it being namespaced within "Core". But we can do better than that - if you have a base model for your namespace (as I believe is always a good practice), just put this in the base model:
class Core::Base < ActiveRecord::Base
  def self.model_name
    ActiveModel::Name.new(name.split("::").last)
  end
end
Although this technique has served me well in many apps, do be aware that the model name is used in some other instances throughout Rails such as error_messages_for, so do use this with care.
Though all of the above examples are for ActiveModel/ActiveRecord 3.0 or higher, the same technique will work in Rails 2.3 by simply using "ActiveSupport::ModelName" in place of "ActiveModel::Name".
Developers seem to rarely use validates_length_of with their models, despite there being an inherent maximum length on every string and text field – the one enforced by the database. Since table migrations in Rails set a fairly high maximum length for string attributes, most people don’t think twice about the possibility of that limit being exceeded. Even beyond this, there may be other reasons why the field limit is set fairly low, or perhaps you’re working with a legacy database.
Without validating maximum field lengths at the model level, ActiveRecord will still ship off the full field value as entered in an INSERT query. From there two things might happen, and this depends on the database itself.
1. It might fail the query with an ActiveRecord::StatementInvalid exception. Since this is an exception most developers don’t routinely handle, this could cause the entire request to fail. This occurs with SQL Server for sure, and possibly other databases.
2. It might accept the query and simply truncate the value to the field limit. This might sound better than #1, but IMO it’s actually worse: now everything appeared to go along fine, but you might end up with missing data. This is the default behavior with MySQL.
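These two behaviors can be sketched in plain Ruby (a toy emulation only; the 10-character limit is an arbitrary assumption for illustration, not a real column size):

```ruby
# Behavior #1: the database rejects an over-long value outright.
def insert_strict(value, limit = 10)
  raise ArgumentError, "value too long for column" if value.length > limit
  value
end

# Behavior #2: the database silently truncates, like MySQL's default mode.
def insert_truncating(value, limit = 10)
  value[0, limit]  # drops the excess with no error raised
end

insert_truncating("this string is too long")
# => "this strin"
```

The silent-truncation case is the one that produces missing data with no exception to catch, which is why model-level validation is worth having in both cases.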
There’s another reason why you should always validate field lengths, and that’s to limit the possibilities for a Denial of Service attack. If an attacker knows you aren’t validating field length and that whatever they’re providing is going straight into an SQL query and being sent to the database, they can craft extremely large requests that might fill up the pipe to the database. Even if the data doesn’t actually end up being inserted in full, they’ve tied up a database connection and – since the regular old “mysql” gem will block for the query result – a Rails process.
Unfortunately, in order to do this properly you essentially have to write validates_length_of lines (or the equivalent ActiveRecord 3 form) for each attribute, with its maximum database length. To make this easy, I wrote a gem called validates_lengths_from_database that will introspect your database string field maximum lengths and automatically define length validations for you.
To install, just include the gem in your Gemfile (works with Rails 2.3 and Rails 3.0):
gem "validates_lengths_from_database"
Then in your model you can activate validations:
class Post < ActiveRecord::Base
  validates_lengths_from_database
end
It also supports filter-style :only and :except options:
class Post < ActiveRecord::Base
  validates_lengths_from_database :only => [:title, :contents]
end

class Post < ActiveRecord::Base
  validates_lengths_from_database :except => [:other_field]
end
Note that this cannot be done at a global level directly against ActiveRecord::Base, since the validates_lengths_from_database method requires the class to have a table name (with the ability to load the schema).
For more information or to view the source, check out the project on GitHub: validates_lengths_from_database
- San Diego is a “big city”. I’ve always longed to live in a larger city where more opportunities of all kinds await.
- San Diego weather is outstanding. Being able to play tennis every single day is a major plus.
- San Diego has a lot of charm, particularly in the many communities outside of downtown such as North Park and Hillcrest.
- My parents are moving (for half of the year) to Palm Springs, a city only a couple hours away.
- Although it pales in size next to San Francisco, San Diego seems to have a decent tech community (and Ruby community).
- The entire area seems far more laid back, and people are friendlier.
- I do actually know a few people out there, which is more than I can see for most west coast cities.
I recently had a frustrating discussion with a developer friend of mine concerning building web applications to support Internet Explorer 6 that highlighted a recurring theme of technology people misunderstanding business and economic decision-making. In it I found myself trying to defend a deliberate decision in an application I develop not to support IE6. His take was that because so many internet users still use IE6, there’s just no reason why we should not build our applications to support the browser.
Now there’s no doubt that Internet Explorer 6 continues to have large penetration in the browser market. But using this fact alone to categorically dismiss deliberately not supporting IE6 misses some key pieces of economic reasoning that must be considered.
Before outlining the reasoning, let’s just accept the false assumption that browser usage numbers alone should dictate your decision regarding browser support. If this is the case, what’s the breaking point? At what percentage of use do you decide not to support – what’s the magic number? Making decisions this way is rather naive – any choice here is mostly arbitrary and bears no relevance to the trade-off of costs and benefits or the other considerations I outline in the rest of this post. The core objective here is to balance the benefit of having more users able to access your application (or more correctly, the marginal revenue derived from the additional customers using only IE6, since it isn’t safe to assume that browser choice and revenue-per-customer are uncorrelated) against the anticipated cost of making your application IE6-compatible. As such, the decision should carefully consider this trade-off. To help make this decision, there are considerations on both the cost and the benefit side:
Considerations on the Benefit Side
One thing that’s commonly done is to cite browser numbers as of today as if that’s the complete picture. As a case in point, look at Google Chrome: numbers for this browser are quite low, but many developers are rushing to provide support for it because of the future expectation of higher numbers. Basing your decision on penetration numbers at a particular point in time would be like performing stock valuation with last fiscal year’s profit only. Many companies that aren’t even profitable are worth magnitudes more than established revenue-sustaining firms. Therefore, when considering browser penetration, we really should be thinking not only about today’s numbers but also about anticipated numbers in the future, since presumably the application in development will last at least a year or two. Put simply, trends matter.

Another important point to consider is your target audience. It doesn’t come as a surprise that browser choice has some correlation with technical ability, so if your application targets hip, young internet users well-versed in other web 2.0 applications, you can bet that the percentage of that group using IE6 is significantly less than the percentage of older folks who barely check their e-mail and use an older machine.
Yet another important consideration, particularly if you are working on an internal project, is what degree of control you have over your users’ web browser. For example, if you are working on an internal application used within your company, and the corporate standard is Firefox, then you aren’t going to have much of a problem with IE6. Some applications, especially at first, are introduced to a smaller set of users with whom you may actually speak in person and be able to influence the browser used with the application. This is the case in one of the applications I work on, and hence we haven’t found it worth it to support IE6 yet.
Considerations on the Cost Side
Implementing support for IE6 is not free. Every web developer on Earth has a profound hate for Internet Explorer 6, with all its quirks, bugs, and non-compliance with established standards. Creating semantic HTML/CSS markup that works well in IE6 is hard enough, but working JavaScript is five times more difficult to create and twenty times more difficult to debug. Even when it’s working, oftentimes you’ll have implemented hacks and other code smells that detract from the quality of the codebase. Doing all this of course costs money, since ultimately you are paying developers to produce the code and debug it for IE6. In many cases this can be a significant additional expense.
Most importantly, the magnitude of this additional expense is extremely dependent on the specifics of the application. A basic brochureware website with no JavaScript would cost significantly less to support than a rich web 2.0 application with significant interactivity.
Although it’s very difficult to quantify this cost (at least, more difficult than quantifying labor cost), there’s also something to be said for reducing code quality and structure for the sake of supporting IE6. Extra DIVs, CSS hacks, less straightforward CSS due to lack of newer CSS selectors, and conditional statements in JavaScript all have a “cost” in a most abstract sense to the elegance of your codebase and may have additional maintenance costs in the future as you try to maintain a less clean codebase.
Another common misunderstanding when thinking about costs and decision-making at the margin is that saving $1,000 in developer pay may not be the “real” economic cost here, since keeping $1,000 cash may not be (and most certainly is not) your next best alternative – a concept economists call “opportunity cost”, which is the real cost that you should consider when making decisions. In a more traditional setting your opportunity cost isn’t much higher than your cash expenditure, so substituting cash isn’t a bad approach, but in a fast-paced startup that’s growing very quickly, oftentimes your opportunity cost can be enormous, as new features and development have tremendous value. Saving $1,000 in labor can be substantial if that $1,000 in capital could otherwise be used towards implementing new features that you value at much, much more than $1,000.
The point here is not whether or not it makes sense for my particular application to support IE6. Certainly it makes sense for many sites, usually content sites and sites not targeted at a younger crowd, to support IE6 in full. My point is that decisions like these should not be shallowly reasoned or subject to categorical dismissals. In decisions like this, it pays to approach the problem analytically like an economist would – think carefully about the costs and benefits. And choosing which browsers your application supports is pretty trivial compared to all the other kinds of decisions in developing software that would benefit greatly from an application of economic reasoning.
I ran into a problem a while back of creating draft copies of ActiveRecord models for the purpose of establishing a draft/live system. I’ve since found a reason to resurrect this and publish it to GitHub and clean some things up. Check out has_draft on GitHub.
has_draft allows for multiple “drafts” of a model which can be useful when developing:
- Draft/Live versions of pages, for example
- A workflow system whereby a live copy may need to be active while a draft copy is awaiting approval.
The semantics of this as well as most of the inspiration comes from version_fu, an excellent plugin for a similar purpose of maintaining several “versions” of a model.
This was built to be able to be tacked on to existing models, so the data schema doesn’t need to change at all for the model this is applied to. As such, drafts are actually stored in a nearly-identical table and there is a has_one relationship to this. This separation allows the base model to really be treated just as before without having to apply conditions in queries to make sure you are really getting the “live” (non-draft) copy: Page.all will still only return the non-draft pages. This separate table is backed by a model created on the fly as a constant on the original model class. For example if a Page has_draft, a Page::Draft class will exist as the model for the page_drafts table.
Basic Example:

## First Migration (if creating base model and drafts at the same time):
class InitialSchema < ActiveRecord::Migration
  [:articles, :article_drafts].each do |table_name|
    create_table table_name, :force => true do |t|
      t.references :article if table_name == :article_drafts
      t.string :title
      t.text :summary
      t.text :body
      t.date :post_date
    end
  end
end

## Model Class
class Article < ActiveRecord::Base
  has_draft
end

## Exposed Class Methods & Scopes:
Article.draft_class       => Article::Draft
Article.with_draft.all    => (Articles that have an associated draft)
Article.without_draft.all => (Articles with no associated draft)

## Usage Examples:
article = Article.create(
  :title     => "My Title",
  :summary   => "Information here.",
  :body      => "Full body",
  :post_date => Date.today
)

article.has_draft? => false
article.instantiate_draft!
article.has_draft? => true
article.draft      => Article::Draft Instance
article.draft.update_attributes(:title => "New Title")
article.replace_with_draft!
article.title      => "New Title"
article.destroy_draft!
article.has_draft? => false
Custom Options:

## First Migration (if creating base model and drafts at the same time):
class InitialSchema < ActiveRecord::Migration
  [:articles, :article_copies].each do |table_name|
    create_table table_name, :force => true do |t|
      t.integer :news_article_id if table_name == :article_copies
      t.string :title
      t.text :summary
      t.text :body
      t.date :post_date
    end
  end
end

## Model Class
class Article < ActiveRecord::Base
  has_draft :class_name => 'Copy', :foreign_key => :news_article_id, :table_name => 'article_copies'
end
Method Callbacks:
There are three callbacks you can specify directly as methods.
class Article < ActiveRecord::Base
  has_draft

  def before_instantiate_draft
    # Do Something
  end

  def before_replace_with_draft
    # Do Something
  end

  def before_destroy_draft
    # Do Something
  end
end
Block of Code Run for Draft Class:
Because you don’t directly define the draft class, you can specify a block of code to be run in its context by passing a block to has_draft.
class Article < ActiveRecord::Base
  belongs_to :user

  has_draft do
    belongs_to :last_updated_user

    def approve!
      self.approved_at = Time.now
      self.save
    end
  end
end
One of the most common needs in any application I build is to have some abstract way of handling messages to end users. Sometimes I’ll want to show a confirmation message or a warning. Other times I’ll want to show a confirmation message but also show ActiveRecord validations. While the error_messages_for helper in Rails works fairly well for showing ActiveRecord validation issues, I wanted a unified approach to handling this and flash messaging with multiple flash types in one package.
I blogged before about an approach I developed to solve this problem, and have since rolled my own message_block plugin. The README file explains things pretty well, so be sure to check it out for more details, but here is the intro:
Introduction:
Implements the common view pattern by which flash messages (confirmations, errors, warnings, etc.) and model validation errors are displayed together in a single block.
This view helper acts as a replacement for error_messages_for by taking error messages from your models and combining them with flash messages (multiple types such as error, confirm, etc.) and outputting them to your view. This plugin comes with an example stylesheet and images.
Usage:
Once you install this, you should now have a set of images at public/images/message_block and a basic stylesheet installed at public/stylesheets/message_block.css. First you’ll want to either reference this in your layout or copy the declarations to your main layout. Then you can use the helper <%= message_block %> as described below:
The first argument specifies a hash of options:
- :on – specifies one or many model names for which to check error messages.
- :model_error_type – specifies the message type to use for validation errors; defaults to ‘error’
- :flash_types – specifies the keys to check in the flash hash. Messages will be grouped in ul lists according to this type. Defaults to: %w(back confirm error info warn)
- :html – Specifies HTML options for the containing div
- :id – Specifies ID of the containing div; defaults to ‘message_block’
- :class – Specifies class name of the containing div; defaults to nothing.
Imagine you have a form for entering a user and a comment:
<%= message_block :on => [:user, :comment] %>
Imagine also you set these flash variables in the controller:
class CommentsController
  def create
    flash.now[:error] = "Error A"
    flash.now[:confirm] = "Confirmation A"   # Note you can use different types
    flash.now[:warn] = ["Warn A", "Warn B"]  # Can set to an array for multiple messages
  end
end
And given that both the user and comment fail ActiveRecord validation, message_block will combine those flash messages with the validation errors and render:
<div id="message_block">
  <ul class="error">
    <li>Error A</li>
    <li>User first name is required.</li>
    <li>Comment contents is required.</li>
  </ul>
  <ul class="confirm">
    <li>Confirmation A</li>
  </ul>
  <ul class="warn">
    <li>Warn A</li>
    <li>Warn B</li>
  </ul>
</div>
Which will by default leave you with this look:
The addition of named_scope in Rails 2.1 has revealed several elegant approaches for modeling complex problem domains in ActiveRecord. One I came across recently while working on an app with a somewhat complex permissions system was a permission-based filtering mechanism. In this case I was dealing with permission for a given user to manage an “office”, while a user could be at one of three permission “levels”, one of which has specific office assignments (or it’s assumed all are manageable if user.can_manage_all_offices is true). Lots of necessary conditional logic there.
Now a normal approach to such a task to “show a list of offices that the user can manage” (for a drop-down for an interface perhaps) might be something like this:
# In controller:
if current_user.can_manage_company?
  @offices = Office.find(:all)
elsif current_user.office_access_level? && current_user.can_manage_all_offices?
  @offices = Office.find(:all)
elsif current_user.office_access_level?
  @offices = Office.find(:all, :conditions => {:id => current_user.manageable_offices.map(&:id)})
else
  @offices = []
end
But this approach starts at the user level and, using a lot of conditional logic baked right into places we don’t want, makes different calls to Office, which isn’t very DRY, and certainly not consistent with fat models skinny controllers. So I considered inverting this approach and instead starting with an office, and asking it what “is manageable by” a given user. Consider this alternative:
# In office.rb
named_scope :manageable_by, lambda { |user|
  case
  when user.can_manage_company? then {}
  when user.office_access_level? && user.can_manage_all_offices? then {}
  when user.office_access_level? then {:conditions => {:id => user.manageable_offices.map(&:id)}}
  else {:conditions => "1 = 0"}
  end
}

# Then in controller:
@offices = Office.manageable_by(current_user)
This seems much more elegant to me. I’m in general finding a lot of opportunities for inverting the way I designed something without named_scope to be more model-centric, so this approach helps further the design principle of “fat models, skinny controllers”. Sure you could do this before by defining your own methods and using with_scope, but named_scope just makes it all the more elegant.
Another advantage with using the named scope stuff is that you can chain scopes together. Let’s say that in another controller I want to do the same thing but instead restrict results to active offices only. I can create an “active” named scope that scopes :conditions => {:active => true}, then in the controller simply do this instead:
@offices = Office.manageable_by(current_user).active
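The reason chaining works can be sketched in plain Ruby (a toy illustration, not ActiveRecord's real named_scope machinery): each scope contributes its own conditions, and chaining merges them into one set that ultimately becomes one query.

```ruby
# Hypothetical, minimal sketch of scope chaining via condition merging.
class ScopeChain
  attr_reader :conditions

  def initialize(conditions = {})
    @conditions = conditions
  end

  # Chaining returns a new chain carrying the merged conditions.
  def scope(extra)
    ScopeChain.new(@conditions.merge(extra))
  end
end

chain = ScopeChain.new
                  .scope(:id => [1, 2, 3])  # like manageable_by
                  .scope(:active => true)   # like active
chain.conditions
# => {:id=>[1, 2, 3], :active=>true}
```

Because each link returns a new scope object rather than executing a query, you can keep composing filters until the collection is actually needed.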
Though I’ve been extremely happy with Leopard (in fact, it’s why I finally bought a Mac), little quirks with Spaces were frustrating. Specifically, Spaces seems to have been designed for users who want to place a separate application in each space. I use Spaces in a task-oriented manner where each space typically has a Firefox window, a tabbed terminal, and TextMate. Command-Tabbing and other workflow situations were just quirky. Also, a huge annoyance was going into the terminal, typing “mate .” and having it open up a new TextMate window in whatever space already had a TextMate window.
I’m very pleased to report that 10.5.3 solves all of these problems, with one new option in the spaces dialog called “When switching to an application, switch to a space with open windows for the application.” In fact, you want to uncheck this option if you are working in a task-oriented manner like me. There were also some bug fixes in 10.5.3 which transcend this new option.
With these frustrations gone I feel very solid working with 3-5 project workspaces at once and can move efficiently within and among them.
Upgrade to 10.5.3 now if you haven’t already done so!

To start out, I used the following *excellent* SliceHost articles to get some of the basic packages installed such as Ruby and MySQL:

- Ubuntu Setup – Part 1
- Ubuntu Setup – Part 2
- MySQL and Rails
- Apache and PHP
- Apache Virtual Hosts
- Virtual Hosts & Permissions

Passenger is then wired into Apache with directives along these lines:

RailsSpawnServer /usr/lib/ruby/gems/1.8/gems/passenger-1.0.1/bin/passenger-spawn-server
RailsRuby /usr/bin/ruby1.8

After you restart Apache, you are ready to deploy any number of Ruby on Rails applications on Apache, without any further configuration.

Restarting a Rails app hosted by Passenger is easy – simply touch a file called tmp/restart.txt within the Rails application root:
$ touch tmp/restart.txt
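If you deploy with Capistrano, a common convention is to override the deploy:restart task to touch this file on each deploy (a hypothetical sketch for a Capistrano 2-style config/deploy.rb, not taken from the original post; adjust paths and roles to your setup):

```ruby
# config/deploy.rb -- restart Passenger by touching tmp/restart.txt
namespace :deploy do
  task :restart, :roles => :app do
    run "touch #{current_path}/tmp/restart.txt"
  end
end
```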
Capistrano
Diagnosing Problems

One of the absolutely wonderful things about Passenger is that diagnosing problems (regarding starting your Rails app) is fairly easy because Passenger includes its own nice error pages:

- Database Error Example
- Framework Initialization Example

Memory Management
I am convinced this is the next big thing: Cobol on Cogs. Definitely the future of web development. As of today I renounce Ruby on Rails and all it stands for!
Today I deployed a major update to my Jazz object model which implements a full REST web interface that is currently accessible through a web browser. Visit for the RDocs - there are plenty of examples at the bottom of the index file, or you can try this immediately:
Some fairly complex relationships can be explored in this manner just using your web browser (displaying XML). Note that I still do not have a whole lot of data in here; all the data is just my personal knowledge of Jazz theory. Eventually I will consult a theory reference and develop the database of jazz more thoroughly.
As a (wannabe) Jazz pianist, I’ve always had a strong interest in Jazz theory, not only from a practical you-need-this-to-play standpoint but also a theoretical one. At its core, music theory is all about a 0–11 number system where each number represents a pitch, with the added complication of letters (which makes Eb theoretically different from D#). Everything in music is based on this basic number system: scales and chords are simply sequences of these numbers. Changing the key of a scale or chord mathematically means adding some integer to all of the index values and taking the result modulo 12. Changing modes within a scale simply involves a position shift of the sequence.
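That arithmetic can be sketched in a few lines of Ruby. This is a deliberately simplified illustration (pitch names are spelled with flats only; a real model must track letter names, which is exactly why Eb and D# differ theoretically):

```ruby
# Twelve pitch classes, spelled with flats for simplicity.
PITCHES = %w[C Db D Eb E F Gb G Ab A Bb B]

# Transposing adds a semitone delta to each index, modulo 12.
def transpose(indices, delta)
  indices.map { |i| (i + delta) % 12 }
end

# Changing mode is a rotation (position shift) of the sequence.
def mode(indices, degree)
  indices.rotate(degree)
end

c_major_triad = [0, 4, 7]  # C E G
transpose(c_major_triad, 3).map { |i| PITCHES[i] }
# => ["Eb", "G", "Bb"]

major_scale = [0, 2, 4, 5, 7, 9, 11]
mode(major_scale, 1)  # Dorian starts on the 2nd degree
# => [2, 4, 5, 7, 9, 11, 0]
```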
Last week I began exploring the idea of developing some sort of computerized object model to capture these musical theory concepts. As a pianist, knowing theory is extremely important and voicing chords can become quite complex, yet there is sanity in the complexity - a harmony of mathematical precision. Today I introduce “Jazz Toolbox”, the future front-end for which will be located at JazzToolbox.com. Right now I have only developed the back-end object model and calculation engine, which is definitely the foundation of the project. I’ve also written up extensive documentation. Here’s an introduction to the project straight out of my README file:
Jazz Toolbox
Jazz Toolbox is an online Jazz utility driven by a full object-relational and REST model of
concepts in Jazz theory, establishing relationships between chords and scales, and much more.
The web interface is a fully Ajaxified Web 2.0 interface towards exploring these relationships and
answering questions about the jazz science. The REST interface exposes the underlying data model
and relationships to other developers wishing to use Jazz Toolbox as a source for their own web
applications.
Architecture Overview
The core of Jazz Toolbox is a full Ruby object model representing concepts of Jazz theory,
distilled to their most basic concepts and architected in a very abstract manner. The system
is data-centric and all “rules” (for example, the tones in a C7 chord) in theory are
self-contained in the database.
All chord/scale/mode/etc. definitions are stored as a mathematical system (sequences of numbers)
which are then used to perform calculations. For example, putting some chord in a different key
is a matter of adding a semitone delta and doing modulo 12.
While there are currently many chord calculators in existence, to my knowledge this project is the
first one that attempts to fully represent the entirety of Jazz theory as a mathematical/computational
system exposed through an elegant object model.
I’ve published the full RDocs which are available here and are very complete.
Examples
Any musician/programmer will appreciate these examples; far more are available in the rdocs:
Chord['Ebmaj7'].notes
# => ['Eb', 'G', 'Bb', 'D']

# Or specify key context with chained methods like this...
Chord['maj7'].in_key_of('Eb').notes

Chord['Bmaj7#11'].notes
# => ['B', 'D#', 'F#', 'A#', 'E#']
# Note E# - correct theoretic value for this chord, not F

Chord['Falt'].notes
Chord['F7b9#9'].notes
# => ['F', 'A', 'Eb', 'Gb', 'G#', 'C#']

Scale['Major'].in_key_of('Eb').modes['Dorian'].notes
# => ['F', 'G', 'Ab', 'Bb', 'C', 'D', 'Eb']

Scale['Melodic Minor']['Lydian Dominant'].notes
# => ['F', 'G', 'A', 'B', 'C', 'D', 'Eb']

Scale['Major']['Dorian'].chords.symbols
# => ['min7', 'min6']

Chord['Amin7'].modes.names
# => ['A Dorian']

Scale['Major'][4].chords.symbols
# => ['maj7#11']
This is definitely still a work in progress, though it has the potential to become very powerful. I have high ambitions for the front-end web interface as well, as it could become a valuable tool for Jazz musicians and others trying to navigate the complex roads of jazz…
Occasionally I’ll decide to create some structured data in a migration by issuing create statements against my model classes. In a project I’m currently working on, I had to do this several levels deep. Rather than using temporary variables to create the sub-items and so on, all in a lexically flat structure, I came up with this simple one-liner:
def self.with(o); yield o; end
Doing this now allows me to issue those creates with nested data as a series of nested blocks, which looks much better. For example:
with ChordQuality.create(:name => 'Major', :code => 'MAJ') do |q|
  with q.chords.create(:name => 'Major Triad') do |c|
    c.chord_symbols.create(:name => 'maj')
    c.chord_symbols.create(:name => 'M', :case_sensitive => true)
  end
  with q.chords.create(:name => 'Major 7') do |c|
    c.chord_symbols.create(:name => 'maj7')
    c.chord_symbols.create(:name => 'M7', :case_sensitive => true)
    with c.children.create(:name => 'Major 7 #11') do |cc|
      cc.chord_symbols.create(:name => 'maj7#11')
      cc.chord_symbols.create(:name => 'M7#11')
      cc.chord_symbols.create(:name => 'lyd')
      cc.chord_symbols.create(:name => 'lydian')
    end
  end
  with q.chords.create(:name => 'Major 6') do |c|
    c.chord_symbols.create(:name => 'maj6')
    c.chord_symbols.create(:name => 'M6', :case_sensitive => true)
  end
end
Oh, the beauty of Ruby code blocks!
Since working with Ruby and other dynamic languages, I’ve thought a lot about the differences between statically-typed and dynamically-typed languages. Static typing guarantees some level of confidence of workability — but this comes at a huge cost in flexibility, and more importantly it overlaps with software testing, not making efficient use of the concept of testing.
Software testing can be used to make up for some of the lost confidence resulting from dynamic typing, yet since they must be done regardless, the outcome is suboptimal with static languages - there is an overlap between the confidence generated from static typing and the confidence generated from the test suite. With dynamic languages there is much less overlap: the test suite makes up for the confidence lost from dynamic typing, yet is also there to ensure the software works as expected.
- Jay Fields touches upon this in an interesting post here: Static typing considered harmful.
- Bruce Eckel also examines this more in depth using Python as an example here, where he emphasizes “Strong test, not strong typing.”
- Robert Martin is a Java programmer who asks Are Dynamic Languages Going to Replace Static Languages?
ActiveRecord::Base exposes a great way of keeping access control DRY. A very common paradigm in web application development is showing “all” of something to administrators but only “active” items to regular users/visitors. In non-ORM PHP you might do something like this in each SQL statement:
$sql = "SELECT * FROM entries WHERE type = 1 ";
$sql .= (!$is_admin ? "AND active = 1" : "");
This isn’t a very well-architected approach, since this type of tacking onto the SQL string would have to be done in any statement that involved this table, for all of your “actions” like show, edit, and index. It tightly couples this particular piece of add-on logic to your base code. In Rails we can scope out this functionality to a “scope_access” method in our controller like so:
class EntriesController < ApplicationController
  around_filter :scope_access

  # (Action Methods)

  protected

  def scope_access
    unless current_user && current_user.is_admin?
      Entry.send(:with_scope, :find => {:conditions => 'active = 1'}) do
        yield
      end
    else
      yield
    end
  end
end
We are declaring an around_filter which uses with_scope unless the current user is an admin; otherwise it yields without the scope. This is an excellent example of how Rails allows you to architect your code to be very elegant. Note that with_scope is now a protected method on ActiveRecord::Base, hence our need to use ClassName.send.
The with_scope method is actually useful for many other purposes including running many but similar find statements. Aside from finders, this can actually scope attributes on create and more. I’d highly recommend you check it out in the Rails documentation and start making extensive use of it.
Antagonistic attitudes towards Ruby and in particular Ruby on Rails abound. Many web developers feel uneasy about the rise of a strange new framework that is attracting disciples daily. I’ve casually chatted with a few people who have negative attitudes about Ruby and the most striking aspect about everyone I’ve spoken with is that after prodding a bit at their understanding and experience level with Ruby I find they only have a surface-level understanding of Rails and no experience whatsoever. I have not once found someone who was well-versed in both Ruby on Rails (as in.. has actually built an application using it) and in any of the other number of web technologies out there, who was not at the very least appreciative about Rails and optimistic about its future.
What I’m trying to get at here is that to appreciate Ruby you have to know Ruby, and know it well. A surface-level examination of Rails by myself over one year ago left me in the same ignorant state that many Ruby antagonists are in right now, complaining about Ruby’s ability to scale and the restrictive nature of the Rails framework. Even reading a few chapters out of the Agile book might leave one with this characterization. Yet the more I began to understand Rails, and even more importantly the dynamic Ruby programming language, the more positive my attitude towards Rails became.
It’s interesting that I find this particular understanding imbalance even more widespread and frustrating in economics: to appreciate the free market, you have to understand the free market. So few people truly understand economics, and as a result their arguments are rife with economic fallacies and mischaracterizations of fact. Economists on average believe that the free market works significantly better than non-economists do. This isn’t a conspiracy: it’s because they actually understand economics.
There will always be people in any field that attack something they don’t understand for whatever reason. Those making arguments against Rails at least owe it to their opponents to actually understand what they are talking about. I find such is rarely the case.
http://reader.feedshow.com/show_items-feed=203be3724a0e4e3a0edf8038c319b1b1
The bar code was first patented 57 years ago today, a fact unknown to me as I was putting the finishing touches on what I expected to be a fairly obscure post. So when articles started sprouting like mushrooms on the topic it seemed like a good time to get this finished. Maybe you've already had your fill of barcode stories today, but if you think you can handle one more then read on - after this one, a cup of tea and a lie down is definitely in order.
The PDF Library can create a number of Barcode symbols, both 1D and the more information-dense 2D formats. Your business may already be using barcodes as part of its workflow, but there are a few tricks you may not have thought of.
The code we're going to look at here is QR-code, which is ISO/IEC 18004:2006 - a public domain 2D symbology originally developed by Denso-wave in Japan. It's very popular there owing to its ability to store Japanese characters, its very high information density, and the fact that many Japanese mobile phones have the ability to read QR-codes. It's common to see them on posters, in magazines and even on business cards, so a potential customer can photograph the code to import the information directly into their phone.
Since 2006 China Mobile has standardized on QR-code as well, which means a potential userbase of 500m people. Although adoption in the west is a little slower, I expect we'll be seeing more of QR-code.
Encoding Contact Details
Our first example shows how to work with contact details. NTT Docomo have defined a structured QR-Code for storing contact details - the full specification is (ed: link no longer exists in 2016). Called a "mecard" it's very similar to the industry standard vCard. Here's a quick demonstration we ran up to test this:
import java.io.*;
import org.faceless.pdf2.*;

public class TestQRCode {
    public static void main(String[] args) throws Exception {
        StringBuffer sb = new StringBuffer();
        sb.append("MECARD:");
        sb.append("N:Bremford,Mike;");
        sb.append("T:+442073497053;");
        sb.append("EMAIL:mike@bfo.com;");
        sb.append("ADR:,,132-134 Lots Road,London,,SW10 0RJ,UK;");
        sb.append("URL:;");

        PDF pdf = new PDF();
        PDFPage page = pdf.newPage("A4");
        BarCode code = BarCode.newQRCode(sb.toString(), 1, 2, 0);
        PDFCanvas canvas = code.getCanvas();
        float w = canvas.getWidth();
        float h = canvas.getHeight();
        page.drawCanvas(canvas, 100, 600, 100 + w, 600 + h);
        pdf.render(new FileOutputStream("out.pdf"));
    }
}
We've chosen a unit size of 1 for the barcode, which creates quite a large image (you can see the PDF it creates here). This is because we're scanning the codes using the free QR app (ed: link no longer exists in 2016) iPhone application, so we're dependent on the fairly rubbish iPhone camera. Fire up this application and focus the camera on the code, and you'll be prompted to add my contact details to your phonebook.
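Going the other way - turning a scanned mecard string back into structured data - is simple string handling. Here's a hedged sketch in Python (parse_mecard is a made-up helper, not part of the BFO library, and it ignores the escaping of ';' and ':' that a full parser would need):

```python
def parse_mecard(s):
    """Split a MECARD string into a dict of field name -> list of values.

    Illustrative only: does not handle escaped ';' or ':' characters.
    """
    prefix = "MECARD:"
    if not s.startswith(prefix):
        raise ValueError("not a mecard string")
    fields = {}
    for part in s[len(prefix):].split(";"):
        if ":" in part:
            key, value = part.split(":", 1)
            fields.setdefault(key, []).append(value)
    return fields

card = "MECARD:N:Bremford,Mike;T:+442073497053;EMAIL:mike@bfo.com;"
print(parse_mecard(card)["EMAIL"])  # ['mike@bfo.com']
```

Fields are collected into lists because the mecard format allows a field such as TEL to repeat.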
Web Addresses

A far simpler thing to encode is a website address - simply encode the URL to create a barcode that will jump to that page when scanned. To add a URL as a bookmark, format your text like so:

MEBKM:TITLE:BFO Homepage;URL:;;

If your barcodes are going to be scanned by China Mobile customers, they use their own variation on the format, which looks like this:
\u0001\u0010BM:SUB:BFO Homepage;URL:;;
Form Contents
Adobe Acrobat allows the creation of "barcode fields" which reflect the contents of the document form. You can do a similar thing here, to make your paper-based workflow much more streamlined. Acrobat stores the field names and values as tab-delimited text, but the approach you take is up to you.
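As a rough illustration of that idea (the field names and exact layout here are assumptions for the sketch, not Acrobat's documented encoding), tab-delimited name/value pairs are trivial to produce:

```python
# Hypothetical form data; the layout (one "name<TAB>value" pair per line)
# is an assumption for illustration, not Acrobat's exact encoding.
fields = {"customer": "Mike Bremford", "order_id": "10045"}
payload = "\n".join(f"{name}\t{value}" for name, value in fields.items())
print(payload)
```

The resulting string can then be handed to a barcode generator exactly as the MECARD string was above.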
http://bfo.com/blog/2009/10/07/qr_code_and_other_barcodes/
In this tutorial, I will show you how to use optical character recognition to extract text from an image using a Raspberry Pi camera and a Raspberry Pi. The Pi camera will capture an image and, using OpenCV and Tesseract, we will extract text from the image.
For step-by-step instructions covering how to connect your Pi camera to a Raspberry Pi, check out Raspberry Pi Security Camera with Face Recognition. To learn how to get OpenCV set up with your Raspberry Pi, read How to Set Up OpenCV on Raspberry Pi for Face Detection.
What is Optical Character Recognition?
Optical character recognition (OCR) refers to the process of electronically extracting text from images (printed or handwritten) or documents in PDF form. This process is also known as text recognition.
What is Tesseract?
Tesseract is a tool originally developed by Hewlett-Packard between 1985 and 1994, with some changes made in 1996 to port it to Windows, and some C++izing in 1998. Tesseract was open-sourced by HP in 2005, and Google has been developing it further since 2006.
Tesseract recognizes and reads the text present in images. It can read all common image types — png, jpeg, gif, tiff, bmp, etc. It is also widely used to process scanned documents.
Tesseract has unicode (UTF-8) support and can recognize more than 100 languages out of the box. In order to integrate Tesseract into C++ or Python code, we have to use Tesseract’s API.
How to Install Tesseract on a Raspberry Pi
First, you need to make sure your Raspberry Pi is up-to-date by typing the following commands:
sudo apt-get update
sudo apt-get upgrade
These commands will update the installed packages on your Raspberry Pi to the latest versions.
Then type the following command in the terminal to install the required packages for OpenCV on your Raspberry Pi:
After that, type the following command to install OpenCV 3 for Python 3 on your Raspberry Pi.
Note: Pip3 means that OpenCV will get installed for Python 3.
sudo pip3 install opencv-contrib-python libwebp6
Next, install the Tesseract library by typing:
sudo apt install tesseract-ocr
Install the command line Tesseract tool by typing:
sudo apt install libtesseract-dev
Finally, install Python wrapper for Tesseract by typing:
sudo pip install pytesseract
Checking the Installations
Let’s double-check the versions on our newly-installed packages.
To check if OpenCV is installed or not, try importing OpenCV by typing:
Python3
import cv2
If no errors pop up, your installation was successful.
To know your OpenCV version, type:
cv2.__version__
Checking OpenCV installation
To check Tesseract's installation, type the following command in the terminal:
tesseract --version
If installed correctly, the terminal should show output similar to the one shown in the image below.
Checking Tesseract installation
Python Code
Copy and save this Python code in a text file with .py extension.
import cv2
import pytesseract
from picamera.array import PiRGBArray
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 30
rawCapture = PiRGBArray(camera, size=(640, 480))

for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    image = frame.array
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF
    rawCapture.truncate(0)
    if key == ord("s"):
        text = pytesseract.image_to_string(image)
        print(text)
        cv2.imshow("Frame", image)
        cv2.waitKey(0)
        break

cv2.destroyAllWindows()
Now run this code using the command python filename.py.
The code is working on Raspberry Pi.
You can see the text recognition in action in the video below.
OCR Code Walkthrough
Let's break down the sections of the above code to understand the role of each part.

First, we import the required packages for this project:
- The OpenCV library helps to show the frames in the output window
- Pytesseract is a Python wrapper for Tesseract — it helps extract text from images.
- The other two libraries get frames from the Raspberry Pi camera
import cv2
import pytesseract
from picamera.array import PiRGBArray
from picamera import PiCamera
Then we initialize the camera object that allows us to play with the Raspberry Pi camera. We set the resolution at (640, 480) and the frame rate at 30 fps.
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 30
PiRGBArray() gives us a three-dimensional RGB array organized as (rows, columns, colors) from an unencoded RGB capture. The advantage of using PiRGBArray is that it reads the frames from the Pi camera as NumPy arrays, making them compatible with OpenCV. It avoids the conversion from JPEG format to OpenCV format, which would slow down our process.
The code contains two arguments: the first is the camera object and the second is the resolution.
rawCapture = PiRGBArray(camera, size=(640, 480))
After that, we use the capture_continuous function to start reading frames from the Raspberry Pi camera module.
The capture_continuous function takes three arguments:
- rawCapture
- The format in which we want to read each frame. Since OpenCV expects the image to be in BGR format rather than RGB, we must specify the format to be BGR.
- The use_video_port boolean. Setting this to true means that we are treating the stream as video.
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
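As a quick aside on the "bgr" format argument above: the BGR layout is simply the RGB channels in reverse order, which is easy to see with a plain Python list (no OpenCV needed):

```python
# A single pixel's channels: blue-first in BGR, red-first in RGB.
bgr_pixel = [255, 0, 0]      # pure blue, stored in BGR order
rgb_pixel = bgr_pixel[::-1]  # reversing the channels gives RGB order
print(rgb_pixel)             # [0, 0, 255]
```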
Once we have the frame, we can access the raw NumPy array via the .array attribute and display it in the output window.
cv2.waitKey() is a keyboard binding function that waits a specified number of milliseconds for any keyboard event. It takes only one argument, which is the time in milliseconds. If a key is pressed during that time, the program will continue. Passing 0 means it will wait indefinitely for a key.
image = frame.array
cv2.imshow("Frame", image)
key = cv2.waitKey(1) & 0xFF
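The & 0xFF in the snippet above matters because on some platforms cv2.waitKey returns a value wider than 8 bits; masking keeps only the low byte, which holds the key's ASCII code. A small standalone sketch (the raw value below is made up for illustration):

```python
# Hypothetical raw return value: high "state" bits set plus the code for 's'.
raw_key = 0x100000 | ord("s")
key = raw_key & 0xFF          # keep only the low byte
print(key == ord("s"))        # True
```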
Next, we clear the stream in preparation for the next frame by calling truncate(0) between captures.
rawCapture.truncate(0)
The code will keep looping until 'S' is pressed on the keyboard. When 'S' is pressed, it takes the last frame and extracts the text from it, then prints the extracted text to the terminal.
https://maker.pro/raspberry-pi/tutorial/optical-character-recognizer-using-raspberry-pi-with-opencv-and-tesseract
Why Update Tally.ERP 9 to the Latest Version
As Tally, we are committed to walking the GST journey with India’s businesses. Firstly, we continuously engage with businesses, GST professionals and tax consultants. Secondly, we continuously keep pace with the statutory evolution of GST and incorporate the developments into our product.
We started our GST journey with Tally.ERP 9 Release 6.0 and the latest version is Release 6.3.1. We will continuously launch new releases as and when they are required to make GST compliance easy and simple for you.
We will take you through the latest enhancements which are part of Release 6 and how they benefit your business by simplifying GST return filing.
Enhancements to ease GST billing in Tally’s latest version
The latest version of Tally.ERP 9 Release 6 covers all aspects of GST billing. You don’t have to worry about the correctness of calculations. The product has the unique capability to prevent errors and help you correct them by raising alerts and warnings whenever any errors occur during the first stage of data entry itself.
There are many more features which make your billing experience wonderful. Read on….
- Print invoices with item-wise and rate-wise breakups
Now, with Tally.ERP 9 Release 6.3.1, you can print item-wise rate breakups in invoices. This will make your customers happier as they will get a better picture and analysis of tax breakups (CGST, SGST, IGST and Cess) for each line item. Moreover, based on the amount of information, Tally.ERP 9 automatically changes the print mode to Landscape.
- GSTIN validation
A valid GSTIN number is a critical requirement for GST return filing. The latest version of Tally.ERP 9 Release 6.3.1 validates the GSTIN number as soon as you make the entry. A valid GSTIN ensures that your GST returns don’t get rejected on the GST portal.
- Use Journal Vouchers for purchases and sales entries
With the latest version of Tally.ERP 9, you can now enter new purchase and sales transactions using the Journal Voucher (JV) type, since you can now enter a ‘Reference Number’. So all new JVs passed for sales and purchases will be rightly considered in the respective GST returns.
- Use payment for entering expenses
Similarly, you can record expenses using the Payment Voucher type, which also lets you enter a ‘Reference Number.’
- Use either lower or upper case to create masters
Do you know that you can now create ‘iPhone’ as an item in Tally? Just press Shift+F3 and you can name a master starting with a lower case letter.
Enhancements to Simplify GST Return Filing
The latest version of Tally.ERP 9 Release 6.3.1 makes filing GST returns extremely simple whether you file your returns on your own, or take the help of a tax consultant who files the returns on your behalf. Let us look at the other enhancements of Release 6.3.1.
- Export GSTR-3B in JSON format
You can now directly export GSTR-3B form in JSON format from Release 6.3.1 and upload it to the GSTN portal. If you prefer to view the GSTR-3B in MS Excel before uploading, you can do so as well.
Download the GSTR-3B Offline Utility from the GSTN portal. Tally.ERP 9 exports details of the transactions into the GSTR-3B MS Excel form. Click on Validate button in the form. Upon successful validation, you can generate it in JSON and upload it to the portal.
- Detection of data entry errors
A user can overlook the warning provided at the time of creating a transaction and still make an error. Tally.ERP 9 considers all such behavioural possibilities. It has powerful error detection built in at all stages. Errors of any kind are easily tracked, and alerts appear whenever necessary, with options to let you correct them. You can see and correct errors for GSTR-1 and GSTR-3B separately.
- Easiest way to correct errors
Tally.ERP 9 has built-in flexibility to let you correct the errors yourself, either at the transaction level or the master level, based on your business scenario. You can also ignore an error and accept the data as is, depending on the scenario.
- Most convenient way to share data
Suppose your customer asks you to modify an invoice at the last minute and you make the correction. You don’t have to worry about sharing the entire data again with your tax consultant.
With Tally’s latest release, you can share the list of only those invoices which you have modified. Your tax consultant can simply import the file and copy it onto your business data. This way you can both be sure that the data used for filing is up-to-date. Similarly, your tax consultant could also send you the list of only those invoices which he might have modified before filing the returns.
- Manage your GST returns in a simpler way
If you are a business owner who files GST returns on your own, then the latest version of Tally.ERP 9 makes it simple for you. You can easily generate GSTR-1 and GSTR-3B in JSON format, which can be directly uploaded to the GSTN portal.
Please don’t forget –
1) Every latest release of Tally.ERP 9 is built to make GST compliance and GST returns filing easier and simpler for you.
2) Every latest release of Tally.ERP 9 incorporates the ongoing developments in GST law.
Don’t miss out on our latest releases! For more updates on Tally.ERP 9 keep reading our blog posts.
Are you GST ready yet?
Get ready for GST with Tally.ERP 9 Release 6
32 Comments
- Tally Solutions January 24, 2018 at 12:31 pm
We suggest you to contact our support team at support@tallysolutions.com
- neeru sharma December 29, 2017 at 4:35 pm
import and export is not working in new version(6.3.1) of tally . its showing name has no meaningful characters cannot import
- Tally Solutions January 24, 2018 at 12:19 pm
We suggest you to contact our support team at support@tallysolutions.com
- Doki Venktaramanamurty December 28, 2017 at 8:18 pm
yes tally is excellent working now a days
- ACCOUNTANT IRFAN December 27, 2017 at 3:40 pm
while entering sales voucher ,sometimes we need to create new item as so we can press alt+c creating new item ok. but after completing the sales voucher we cannot save it .it shows the item unit measure in it is not valid .and cancelled the voucher(not saving i mean) .and starting to enter same sales voucher again it does not show any error .why it shows error when we create new item while recording sales voucher.it shows only when we create new item while recording the the sales voucher.
- Tally Solutions December 28, 2017 at 4:14 pm
This requires detailed analysis. Please get in touch with Tally Care by dropping a mail to support@tallysolutions.com
- Sai Praveen Ravuru December 24, 2017 at 1:10 pm
Could you also correct the following errors
A)If POS voucher type is selected to raise a sales invoice all the sales voucher are clubbed under Cash Ledger instead of their respective Party Ledger which was selected while raising the invoice.
We are facing a difficulty in giving account statement to customers.
- Tally Solutions December 25, 2017 at 4:56 pm
It’s a behavior in Tally as when we are using POS invoice we are actually taking a Cash Payment. If you want to pass a Credit sales then you should use Sales Invoice. IN case you have any specific business case do let us know. However under POS register using Columnar report you can filter transactions of that party.
- saikannan December 23, 2017 at 3:24 pm
dear tally team our company has been operate cess and add cess how to compare purchase and sales cess in tally because of not updated for cess in tally…
- Tally Solutions December 26, 2017 at 12:29 pm
Hi, please refer below link for details on cess
- JITENDRA MARWAL December 21, 2017 at 9:40 pm
very nice
Thanku so much.
- Prakash Jasani December 21, 2017 at 8:38 pm
Good Job
Keep Updating
- mohammad arsalan December 21, 2017 at 1:05 pm
excellent feature
- roshan kumar December 21, 2017 at 7:46 am
how can i check sales and purchase value of different value(purchase@%5 with tax rate)?
it will be more useful for us if give discount column just after rate with value and percentage.
- Tally Solutions December 22, 2017 at 7:19 pm
If you are talking about Invoice Format, then we have given enhancements in Release 6.3 for billing. Please refer below link for details
- Hemang Patel December 19, 2017 at 10:17 am
Yes Excellent work.
Need help
Tell me that how to use Interest and Late fees payable in new update 6.3
thanks in advance
- Tally Solutions December 19, 2017 at 2:09 pm
HI, Please refer below link for details.
- Parag Apte December 18, 2017 at 1:15 pm
Every update of Tally.ERP9 brings some advantages for the user.
The advantages are in terms of security, efficiency and enhancement to taxation features in the accounting software.
So its a best practice to have installed the latest version of Tally.ERP9.
- Abhilash s December 18, 2017 at 8:14 am
While entry in backdated sales voucher we want to get a warnig message . If tally provide this facility only for sales voucher ,tally will become most powerfull
- Tally Solutions December 19, 2017 at 2:19 pm
Thanks for your suggestion. We will keep you updated on this enhancement.
- Manoj Gupta December 17, 2017 at 12:59 am
Sir in tally 6.3 release the reverse charge mechanism entry is not pass how is it pass in new release please help for this
Thanks
- Tally Solutions December 19, 2017 at 2:17 pm
Hi,
Please refer below link for details
- Ramesh December 16, 2017 at 11:20 pm
Tally not given following solutions.
Fixed asset Ledger not showing in sales voucher entry. How can sale asset with GST.
- Tally Solutions December 20, 2017 at 7:33 pm
You can pass this entry by using Journal Voucher
- JIgnesh Gadani December 23, 2017 at 6:13 pm
Can we get the same (Asset Sales by JV) in GSTR-1?
- Jignesh Gadani December 16, 2017 at 11:18 pm
Great Efforts by Entire Team of Tally for Simplicity again in Release 6.3….
1. Print invoices with item-wise and rate-wise breakups in Landscape format. (5*)
2. Export GSTR-3B in JSON & Excel based GSTR3B offline utility. (5*)
3. Most convenient way to share data by xml through Mark Changed Vouchers. (5*)
- Tally Solutions December 19, 2017 at 2:22 pm
Thanks for the feedback.
- CA OM PRAKASH BANKA December 16, 2017 at 2:18 pm
1) search OPTION IN TALLY – IF WE WANT TO SEARCH SOME ENTRY BY AMOUNT
2) PAYMENT > 20OOO IN CASH SEARCHING
- Tally Solutions December 16, 2017 at 9:12 pm
With the help of Range option in Sales Register or Day book you can do so.
- Shiv ram sharma December 16, 2017 at 1:05 pm
So good keep it)
Would like to bring to the notice of team Tally that when I passed and entry of payment dated 1/1/2018 the cheque is printed with year 2017 instead of 2018. Request team Tally to look into the matter and solve the bug.
https://blogs.tallysolutions.com/gu/why-to-update-tally-erp9-latest-version/
Hi All,
Not sure what I'm doing wrong here... I need to write a behavior that will work in the customer portal of ServiceDesk, that will indicate if at least one value in a standard Jira Multiselect field has been chosen. I can check if any specific value has been selected using the many examples found on this forum, but my main goal here is to simply test if >=1 options have been chosen. Any suggestions?
This is as far as I have gotten, but no joy yet... :
def my_multisel = getFieldByName("multiselect1")
def crField = getFieldById(getFieldChanged())
def selectedOptions = (crField.getValue() as List)?.size() ?: 0
if (selectedOptions >= 1) {
    my_multisel.setError("at least 1 item selected") // just used to test
    // do more stuff here in real life...
} else {
    my_multisel.clearError()
}
Jira 7.12, Scriptrunner 5.5.9, server
https://community.atlassian.com/t5/Adaptavist-questions/Servicedesk-Scriptrunner-behaviour-when-using-a-multiselect/qaq-p/1203938
different components depending on the menu item clicked.
– There can only be one open tab for a given component.
– If a menu item is clicked and a tab with this component already exist, that tab should get focus.
– When navigating to a component by URL, this component should be opened in a tab.
The finished product when following the steps below looks like this:
I decided to make use of the router-outlet provided by Angular because we need to be able to navigate by URL. If this is not one of your requirements you could decide to use dynamic components to fill your tabs.
In order to make this example work you need a project with Angular 6+ (I used Angular 7.2) and Angular routing. If you use the Angular CLI you should answer “y” to the question “would you like to add Angular routing? “, otherwise you will need to manually add the routing.
Steps:
1. Download bootstrap and ng-bootstrap into your Angular project by entering the following commands in your command line, while you are in your project folder:
npm install bootstrap@<required bootstrap version>
npm install --save @ng-bootstrap/ng-bootstrap@<required ng-bootstrap version>
Make sure you install compatible versions of ng-bootstrap and bootstrap. At you can see which versions are compatible with your version of Angular.
2. Add bootstrap to the styles in the angular.json file, so the styles property will become something like:
"styles": ["src/styles.css","node_modules/bootstrap/dist/css/bootstrap.min.css"]
3. Add the NgbModule to your imports in your app.module.ts file:
import { NgbModule } from '@ng-bootstrap/ng-bootstrap';
imports: [BrowserModule, AppRoutingModule, NgbModule],
4. Create a main-content component and a menu component. You can do this manually or by using the Angular CLI command:
ng generate component <componentName>

There is no need to put anything inside the components yet, you can keep the generated content or keep them empty for now. Put both components in your app component HTML so you have a (sidebar) menu and a place to render your tabs.
app.component.html:
<div class="app-container">
  <app-menu class="menu-bar"></app-menu>
  <app-main-content class="main-content-container"></app-main-content>
</div>
app.component.css:
.app-container {
  display: flex;
  flex-direction: row;
  width: 100%;
  height: 100vh;
}

.menu-bar {
  width: 420px;
  background-color: #cae4db;
}

.main-content-container {
  width: 100%;
}
Your app should now look like this when you run it (zoomed-in):
5. Create the different components you want to display in your tabs. For this example I created a component called ‘movies’ and a component called ‘songs’. These components don’t need to be connected to anything yet.
6. Create an interface for the tabs, called ITab.
tab.model.ts:
export interface ITab { name: string; url: string; }
7. Create a singleton service called tab.service. With the Angular CLI you can do this by running the following command in your project folder:
ng generate service tab
8. The tab service is where we will keep our tabs. We will define the titles and relative URLs of our possible tabs here (you could also choose to store them somewhere else). Make sure you precede the URL with a forward slash (‘/’), so “/movies” instead of just “movies”. Otherwise we will have trouble comparing them to the navigation URL later.
tab.service.ts:
import { Injectable } from '@angular/core';
import { ITab } from './tab.model';

@Injectable({
  providedIn: 'root',
})
export class TabService {
  tabs: ITab[] = [];
  tabOptions: ITab[] = [
    { name: 'Movies', url: '/movies' },
    { name: 'Songs', url: '/songs' },
  ];

  constructor() {}
}
9. Create methods to add and delete a tab in the tab.service.
addTab(url: string) {
  const tab = this.getTabOptionByUrl(url);
  this.tabs.push(tab);
}

getTabOptionByUrl(url: string): ITab {
  return this.tabOptions.find(tab => tab.url === url);
}

deleteTab(index: number) {
  this.tabs.splice(index, 1);
}
10. Get the tabOptions from your tabService in the menuComponent.
menu.component.ts:
export class MenuComponent implements OnInit {
  menuOptions = [];

  constructor(private tabService: TabService) {}

  ngOnInit() {
    this.menuOptions = this.tabService.tabOptions;
  }
}
11. Add menu options to your menu for each of your tab options. Use the openTab method that receives the URL and calls the addTab method of the tabService.
menu.component.html:
<nav class="menu">
  <ul class="menu-options-list">
    <li *ngFor="let option of menuOptions" class="menu-option" (click)="openTab(option.url)">
      {{option.name}}
    </li>
  </ul>
</nav>
menu.component.css:
.menu {
  padding-top: 20px;
  display: flex;
  flex-direction: column;
  justify-content: center;
}

.menu-options-list {
  list-style-type: none;
  padding: 0;
  margin: 0;
}

.menu-option {
  padding: 10px 0px 10px 50px;
}

.menu-option:hover {
  background-color: #7a9d96;
  cursor: pointer;
}
menu.component.ts:
openTab(url: string) {
  this.tabService.addTab(url);
}
Your app should now have a menu option for each of the components you want to display in your tabs. It should look similar to this:
12. Get the tabs from the tabService in the mainContentComponent.
main-content.component.ts:
export class MainContentComponent implements OnInit {
  tabs = [];

  constructor(private tabService: TabService) {}

  ngOnInit() {
    this.tabs = this.tabService.tabs;
  }
}
13. Add a closeTab method that calls the tabService deleteTab method. Also call event.preventDefault to prevent the tabs from being refreshed after clicking on the header.
main-content.component.ts:
closeTab(index: number, event: Event) {
  this.tabService.deleteTab(index);
  event.preventDefault();
}
14. Add the ngb-tabset component from ng-bootstrap to the main-content HTML file and use a loop to add a tab for each of the tabs in the tabService. The tab title will be the name of the tab and for now we will use the URL as the tab content.
main-content.component.html:
<div class="main-content">
  <ngb-tabset>
    <ngb-tab *ngFor="let tab of tabs; let index = index">
      <ng-template ngbTabTitle>
        <span>{{tab.name}}</span>
        <span (click)="closeTab(index, $event)">×</span>
      </ng-template>
      <ng-template ngbTabContent>{{tab.url}}</ng-template>
    </ngb-tab>
  </ngb-tabset>
</div>
Now when we click on one of the menu options, a new tab should open that displays the URL of that tab. We can also close these tabs by clicking on the “x” in the corner of the tab. Your app should look like this:
15. Add the components you want to display in your tabs to the routes in the appRoutingModule. In this example that will be the movies and the songs component. Use the same URLs that you have specified in the tabOptions in the tabService.
app-routing.module.ts:
const routes: Routes = [
  {
    path: 'movies',
    component: MoviesComponent,
  },
  {
    path: 'songs',
    component: SongsComponent,
  },
];
16. Display the router-outlet in the tab content in the MainContentComponent instead of the URL of the tab.
main-content.component.html:
<ng-template ngbTabContent> <router-outlet></router-outlet> </ng-template>
17. Add the router from angular/core to the constructor of the menuComponent. In the openTab method, navigate to the URL of the tab that is being opened.
menu.component.ts:
constructor(private tabService: TabService, private router: Router) {}
openTab(url: string) {
  this.tabService.addTab(url);
  this.router.navigateByUrl(url);
}
Now when we click a menu option, the corresponding component will be displayed. However a new tab opens every time an option is clicked and the active tab does not switch but remains the same. We will now use the URL to determine the active tab.
18. Add the router to the constructor of the main-content. In ngOnInit, subscribe to the routers events. Save the urlAfterRedirects from the event in a variable called activeTabUrl when the event is of type NavigationEnd. NavigationEnd means the router is done navigating and redirecting and this is the new URL the app is displaying.
main-content.component.ts:
activeTabUrl;

constructor(private tabService: TabService, private router: Router) {}
main-content.component.ts, in the method ngOnInit:
this.router.events.subscribe(event => {
  if (event instanceof NavigationEnd) {
    this.activeTabUrl = event.urlAfterRedirects;
  }
});
19. Bind the activeTabUrl variable to the activeId of ngb-tabset. Set the id of each tab to be the URL of the tab. Each tab is now identified by its URL, and the active tab is identified by the URL the router has navigated to.
main-content.component.html:
<ngb-tabset [activeId]="activeTabUrl">
  <ngb-tab *ngFor="let tab of tabs; let index = index" [id]="tab.url">
Now when opening a new tab, this tab will become the active tab. But because a component can still be displayed in multiple tabs at the same time, all these tabs will become active. Your app should now look like this:
20. In the tabService, make sure a tab doesn’t already exist before adding it to the tabs array.
tab.service.ts :
addTab(url: string) { const tab = this.getTabOptionByUrl(url); if (!this.tabs.includes(tab)) { this.tabs.push(tab); } }
Now when selecting a menu option you will open a new tab, or go to the tab if it already exists. However, when you select a tab by clicking on the tab header the content of the tab doesn’t change. This is because no routing is taking place when clicking a tab header.
21. Add an onTabChange method to ngb-tabset in the main-content component, that responds to a tabChange event.
main-content.component.html:
<ngb-tabset [activeId]="activeTabUrl" (tabChange)="onTabChange($event)">
In the onTabChange method, navigate to the nextId in the event. The nextId is the id of the tab whose header was clicked, which is also the URL of that tab.
main-content.component.ts:
onTabChange(event) { this.router.navigateByUrl(event.nextId); }
In your app you can now open tabs by clicking on the corresponding menu item, close tabs by clicking on the ‘x’ in the corner of the tab and change tabs by clicking on the tab headers. But when we try to navigate to a component by typing the URL into the web browser, nothing is displayed.
22. In the mainContentComponent in the ngOnInit method, add a call to the tabService.addTab method with the activeTabUrl as a parameter. Make sure this method is only called when there are no tabs present. This will make sure a new tab is opened when navigation is done by typing the URL directly into the web browser.
main-content.component.ts:
ngOnInit() {
  this.tabs = this.tabService.tabs;
  this.router.events.subscribe(event => {
    if (event instanceof NavigationEnd) {
      this.activeTabUrl = event.urlAfterRedirects;
      if (this.tabs.length === 0) {
        this.tabService.addTab(this.activeTabUrl);
      }
    }
  });
}
Now we have fulfilled all the requirements for our tabs! Your app should look like this now:
This works fine as long as you have components that only display information. However, if you have components with user input (such as forms) this information will be lost every time you navigate to a different component. If you want to persist this data when switching tabs, you will have to store it outside of your component. A couple of suggestions are: a singleton service, the web browser’s session storage or local storage, or a separate storage facility such as ngrx-store or mobx-store.
Do not forget to load the stored information in the ngOnInit method of your components in order to have access to this data.
Hello,
I am new to mod_python and am interested in catching python errors with a
default error handler.
For example,
def causeanerror(req):
asdlkfj
would not barf on the user, it would send the error to my error function
or something.
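Pending a solution at the mod_python level, the idea can be sketched in plain Python with a decorator that funnels uncaught exceptions into your own error function. The names catch_errors and my_error_page are made up for illustration, and this sketch does not use any mod_python API:

```python
import traceback


def catch_errors(error_func):
    """Wrap a published function so uncaught exceptions go to error_func."""
    def decorate(func):
        def handler(req, **kwargs):
            try:
                return func(req, **kwargs)
            except Exception:
                # Instead of barfing on the user, hand the traceback
                # to our own error function.
                return error_func(req, traceback.format_exc())
        return handler
    return decorate


def my_error_page(req, tb):
    return "<h1>Something went wrong</h1><pre>%s</pre>" % tb


@catch_errors(my_error_page)
def causeanerror(req):
    asdlkfj  # undefined name -> NameError, caught by the decorator
```

Wrapping every published function by hand is tedious, so in practice you would want this hooked in at the handler level rather than per function.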
<Directory "/home/wwweb/pyblog/htdocs">
SetHandler mod_python
PythonHandler mod_python.publisher
PythonDebug On
Options none
AllowOverride none
Order allow,deny
Allow from all
<Files ~ "\.(gif|jpe?g|png|html?|css|js)$">
SetHandler None
</Files>
<Files *.pyc>
Deny from all
</Files>
</Directory>
Interestingly, if I do a
# HEAD
I receive:
405 Method Not Allowed
Connection: close
Date: Mon, 14 Nov 2005 17:23:42 GMT
Server: Apache/2.0.55 (Unix) mod_ssl/2.0.55 OpenSSL/0.9.8a PHP/4.4.1
mod_python/3.1.4 Python/2.4.2
Allow: GET,HEAD,POST,TRACE
Content-Type: text/html; charset=iso-8859-1
Client-Date: Mon, 14 Nov 2005 17:23:42 GMT
Client-Peer: 6.13.7.17.5:80
Client-Response-Num: 1
Why does a HEAD request return a 405 but "Allow:" says "HEAD" is cool?
If I do a
# GET -s
that works returning a 200
if I do a
# GET -s
it ALSO returns a status code of 200 OK with the python error. So catching
it in apache (presuming that mod_python would return a 500 error -
wouldn't that be nice?) with ErrorDocument doesn't seem likely to function
properly.
I appreciate your comments and suggestions.
Bye,
Waitman
--
Waitman Gobble
(707) 237-6921
C# Model to Builder Class is a Visual Studio Code extension that will save you time by automatically generating a builder class from a C# model.
Stop wasting time manually writing out builder classes. Define a model and use this extension to generate your builder.
The generated file will be saved next to your model file with Builder.cs appended to the end of the original filename.
For example, the extension will generate UserBuilder.cs from a model that you have written:
User.cs
namespace MyProject.Models
{
public class User
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Email { get; set; }
public DateTime DateOfBirth { get; set; }
}
}
Run the extension Cmd/Ctrl + Shift + P => Generate Builder From C# Model.
Manually update the generated file to take care of any required imports and set the default values.
Import the generated file into your test class.
To create a new user with the default values defined in the builder class use the following code.
var user = new UserBuilder().Build();
If you want to override the default values you can specify unique values using .With<PropertyName>(value).
var user = new UserBuilder()
.WithFirstName("Bruce")
.WithLastName("Wayne")
.WithEmail("bruce.wayne@wayne-enterprises.com")
.WithDateOfBirth(new DateTime(1970, 1, 1))
.Build();
Talk:Tag:amenity=toilets
By accessibility
I would add wheelchair=yes/no to Extended Usage. Maybe also something to show if a fee is to be paid? --Colin Marquardt 11:25, 22 February 2008 (UTC)
Yes, I agree with wheelchair=yes/no and also with fee=yes/no --Stefku 17:02, 19 April 2008 (UTC)
There's a proposal open for wheelchair=*: Proposed_features/wheelchair; you probably know already, of course, I'm just linking it here --achadwick 17:46, 19 April 2008 (UTC)
Another topic is accessibility for the blind - often in Germany toilets are good for wheelchair users and hygienic thanks to touchless water sensors, which visually impaired persons can't find. That's a typical blind=limited. --Lulu-Ann 10:28, 20 April 2009 (UTC)
The key wheelchair=* was approved some time ago. We should update the amenity=toilets page. User:MrFrem82 10.44, 26 August 2010 (UTC)
How would you tag the accessibility of a free toilet inside a larger paid area, like a zoo or themepark? For people already in the themepark, it is as it were access=public, while for people outside, it would be access=customers. And inside a themepark, you could still have access=customers, requiring a fee to get into the themepark, then require a purchase at a bar to use the toilets. IIVQ (talk) 11:15, 11 September 2015 (UTC)
- Tag access=customers. A person inside a theme park is no different than a person inside a shop. Added fee=yes for if there's an extra fee. Brycenesbitt (talk) 19:34, 11 September 2015 (UTC)
By sex
How should women/men only bathrooms be tagged? This is useful for when they are physically separated. Xeen
Some suggestions: men=yes/no women=yes/no babys_changing_table=yes/no warm_water=yes/no shower=yes/no --Erde12 23:12, 10 February 2009 (UTC)
- Some of these might be quite useful elsewhere. men=* and women=* sounds like an access tag, showers are generally nice things to have (speaking as a cyclist), and recent mums and dads will surely appreciate baby changing facilities. baby_changing=yes/no for that last one, for brevity? --achadwick 13:11, 20 February 2009 (UTC)
- See below. toilet:changing_table=yes seems fine to me. --achadwick 22:46, 22 December 2010 (UTC)
Another idea for sex-limited access would be toilets=gents, toilets=ladies, toilets=unisex etc. But that might be conflating form with access. Hmm. --achadwick 13:11, 20 February 2009 (UTC)
- Seems annoying to have to set men=no for a women's toilet. What about simply sex=male/female (could also apply to school, in whose Talk page Lulu-Ann suggests gender=male/female). Lorp 14:49, 10 August 2010 (BST)
- I'd prefer drinking_water=yes/no and water=yes/no/warm/cold --Lulu-Ann 10:26, 20 April 2009 (UTC)
- Suggest just using amenity=drinking_water on a separate node. But see below for fuller discussion --achadwick 22:46, 22 December 2010 (UTC)
Taginfo says that "sex" is more widespread than "gender", or gender-specific version of toilets=*. So let's go with that and write it up fully. Consensus yet? --achadwick 22:46, 22 December 2010 (UTC)
- There is quite a bit on the web about 'gender-neutral' toilets, see safe2pee. The current extent of tagging suggestions does not seem to cover this use case. -- SK53 11:55, 29 March 2011 (BST)
Sex or capacity?
I am confused about which one I should use: male=yes/no female=yes/no unisex=yes/no, or capacity:women=yes/no/number (and maybe a similar capacity:man=yes/no/number)?
By type (squat vs. seated; urinals; flushing mechanism)
A recent visit to Italy revealed quite a number of 'hole in the floor' toilets. I presume it's not just wheelchair users who might prefer an alternative if available. Similarly, there is no provision for a urinal/pissoir with no other facilities (perhaps not as common as they used to be). I leave it for others to suggest appropriate tags. -- SK53 18:42, 15 April 2009 (UTC)
-- The following types are ideal:
Type=squat_toilet
Type=urinal
type=Flush_toilet
--BDROEGE 10:26, 3 August 2009 (UTC)
- we should also include the flush/sewer type. In Finland you often find Outhouses with no flush/sewer. But this should be a distinct feature and should not be related to your type definition (squat/urinal/flush). Maybe something like sewer=no, flush=no. --Marc 09:12, 22 June 2010 (UTC)
- Strong -1 to using "type" as the key name. This is horrible tagging anywhere; see below --achadwick 22:08, 22 December 2010 (UTC)
I am also missing a tag for latrine (dry toilet, often of wood) --*Martin* 27 October 2010
- Could you provide a Wikipedia link please? --achadwick 22:08, 22 December 2010 (UTC)
- I think it is an outhouse what I was looking for (). I often see those in forests and nearby chalets. --*Martin* 17:24, 8 February 2011 (UTC)
I disagree with using one key for all of these different ideas. What about facilities with more than one type of stall or stand? Suggest being brave and using the toilet: namespace for saying more about the kind of toilet it is because this sort of meaning really isn't relevant to other types of amenity and I can just see "seated" being a word that can be used to describe some other sort of facility. I propose:
Any more? Which works a bit like fuel=*, with the namespacing and the colons. The defaults are all by country or by sex/gender-role, so I'm not noting those. Use the tags if you need to make the distinction where you live. --achadwick 22:08, 22 December 2010 (UTC)
I suggested also a distinction based on male/female ( this thread) --Sarchittuorg 10:18, 27 May 2012 (BST)
How to tag Porta-johns? Despite being portable, often parks choose to maintain these year around instead of a permanent facility. Brianegge (talk) 01:21, 28 November 2014 (UTC)
Accessories
what about hand_washbasin=yes/no to indicate the presence of such? In public urinals they are not necessarily present --Dieterdreist 15:09, 11 August 2009 (UTC)
Diaper changing table
Is there a tag to indicate that there is a diaper changing table? Such tables are often present near or inside public toilets. --Head 01:13, 9 January 2010 (UTC)
Suggest using the term "changing table" as the basis of this new tag because "diaper" and "nappy" are not 100% understood even within English-speaking countries (the former is US usage, the latter is Commonwealth/GB). Perhaps it should be placed under the toilet: namespace as described above? toilet:changing_table=yes, perhaps. --achadwick 22:14, 22 December 2010 (UTC)
According to taginfo, people have been using baby_changing=yes/no (and someone also baby_change=*). Sounds good to me. --gbilotta 11:20, 2 July 2013 (UTC)
(Drinkable) water
Most toilets have a sink with water (at least in my country). It can be useful to know that for people looking just for some water. What about using something like sink=no/drinkable/undrinkable Arenevier
Not everywhere offers hand-washing, so it may be worth having that. Suggest you use a new tag like toilet:hand_washing=yes/no/minimal for the presence of hand-washing facilities in toilets. Covers the ever-popular bucket and (optional) piece of soap too. --achadwick 22:23, 22 December 2010 (UTC)
As for drinking water consider using amenity=drinking_water in the first instance on a nearby new Node. If you absolutely positively have to do the two things at once, perhaps then create and document toilet:drinking_water=yes/no. It would seem to be more hygienic tagging to keep things separate ☺ --achadwick 22:23, 22 December 2010 (UTC)
Outhouse/composting toilets
Please suggest a way to describe whether it is a watercloset type of toilet or some kind not attached to the sewer, like wikipedia:Outhouse. Whether there is water for washing hands is relevant in this context. vibrog 21:17, 20 September 2010 (BST)
See above for hand-washing. For the usage you want, try toilet:outhouse=yes for old-fashioned ones with an external outhouse construction, or toilet:composting=yes for more modern designs. Both probably require something like toilet:flushing=no. --achadwick 22:30, 22 December 2010 (UTC)
{{Resolved|operator=* You now have "toilets:disposal=pitlatrine", and "toilets:handwashing=no" if you like. Explore building tags if the difference between wood and concrete construction is important to you. Brycenesbitt (talk) 17:40, 19 August 2013 (UTC)
Operator?
If operator=* were added to toilets it might then be possible to determine which are the public (government/council) run facilities and which are others. I think this would be better than adding either public=yes or designation=public which were other things I considered first. Comments? --EdLoach 11:26, 12 January 2011 (UTC)
Dry Toilet
On 23-May-2015 {{tag|toilets:disposal||dry toilet}} was added without discussion. Is this not a type of pit toilet? From a rendering point of view should it not render the same as a pit toilet? Brycenesbitt (talk) 16:44, 23 May 2015 (UTC)
Most computer programs return an exit code to the operating system’s shell. Shell scripting tools can use the exit status of a program to indicate if the program exited normally or abnormally. In either case, the shell script can react depending on the outcome of the child process.
Python programs can use os.fork() to create child processes. Since the child process is a new instance of the program, it can be useful in some cases to inspect if the child process exited normally or not. For example, a GUI program may spawn a child process and notify the user if the operation completed successfully or not.
This post shows an example program taken from Programming Python: Powerful Object-Oriented Programming
that demonstrates how a parent process can inspect a child process’ exit code. I added comments to help explain the workings of the program.
Code
import os

exitstat = 0

# Function that is executed after os.fork() and runs in a new process
def child():
    global exitstat
    exitstat += 1
    print('Hello from child', os.getpid(), exitstat)
    # End this process using os._exit() and pass a status code back to the shell
    os._exit(exitstat)

# This is the parent process code
def parent():
    while True:
        # Fork this program into a child process
        newpid = os.fork()
        # newpid is 0 if we are in the child process
        if newpid == 0:
            # Call child()
            child()
        # otherwise, we are still in the parent process
        else:
            # os.wait() returns the pid and the status word.
            # On Unix systems, the exit code is stored in status
            # and has to be bit-shifted out.
            pid, status = os.wait()
            print('Parent got', pid, status, (status >> 8))
            if input() == 'q':
                break

if __name__ == '__main__':
    parent()
Explanation
This program is pretty basic. We have two functions, parent() and child(). When the program starts, it calls parent(). The parent() function enters an infinite loop that forks this program. The result of os.fork() is stored in the newpid variable.
Our program is executing in the child process when newpid is zero. In that case, we call our child() function. The child() function prints a message to the console and then exits by calling os._exit(). We pass the exitstat variable to os._exit(), and its value becomes the exit code for this process.
The parent process continues in the meantime. It uses os.wait() to collect the pid and status of the child process. The status variable also contains the exitstat value passed to os._exit() in the child process, but to get this code, we have to shift status right by eight bits. The next line prints the pid, status, and the child process' exit code to the console. When the user presses 'q', the parent process ends.
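As a side note, the standard library can do the bit-shifting for you: os.WEXITSTATUS(status) extracts the exit code from the status word on Unix. A minimal sketch (Unix-only, a single fork rather than a loop):

```python
import os

pid = os.fork()
if pid == 0:
    # Child: terminate immediately with exit status 42
    os._exit(42)
else:
    # Parent: collect the child's raw status word
    _, status = os.waitpid(pid, 0)
    code = os.WEXITSTATUS(status)  # equivalent to (status >> 8) here
    print('child exited with', code)
```

os.WEXITSTATUS should only be applied when os.WIFEXITED(status) is true, i.e. when the child exited normally rather than being killed by a signal.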
References
Lutz, Mark. Programming Python. Beijing, OReilly, 2013.
This Databricks Machine Learning tutorial gives you the basics of working with Databricks and machine learning. You’ll learn how to create a Databricks cluster and run machine learning algorithms on it.
Introduction to Databricks and Machine Learning
Databricks is a cloud-based platform for data analytics and machine learning. It is used by data scientists and engineers to build and train machine learning models, and by businesses to operationalize those models.
Databricks is easy to use and provides a variety of features that make it an attractive platform for machine learning. In this tutorial, we will cover the basics of Databricks and machine learning so that you can get started with building models on the platform.
We will cover the following topics:
– What is Databricks?
– Why use Databricks for machine learning?
– What are the features of Databricks?
– How do I get started with Databricks?
The Databricks Machine Learning Workflow
Machine learning involves a lot of trial and error, which can be expensive and time-consuming. Databricks can help you manage the process by providing a platform for collaboration, organization, and automation.
The Databricks machine learning workflow consists of the following steps:
1. Data preparation: load and clean your data
2. Data exploration: analyze your data to understand its features and structure
3. Model training: build and train machine learning models
4. Model evaluation: assess the performance of your models
5. Model deployment: deploy your models so they can be used in production
Setting up your Databricks Environment
To get started with Databricks, you first need to create a workspace. If you don’t already have an Azure account, you can sign up for a free trial. Then, simply go to the Databricks website and click on the “Try Databricks” button.
Once you have an account, you can create a new workspace by clicking on the “Create Workspace” button. Give your workspace a name and choose the subscription and resource group that you want to use. You can also choose whether to create a Standard or Premium workspace. For this tutorial, we will use a Standard workspace.
Once your workspace has been created, you will be taken to the main workspace page. Here, you can create new notebooks, import notebooks from other sources, and manage your Databricks clusters.
Preparing your Data for Machine Learning
The first step in any machine learning project is preparing your data. This step is critical because the quality of your data will directly impact the results of your machine learning models. In this tutorial, we will cover the basics of data preparation for machine learning.
We will start by discussing why data preparation is important. We will then cover some basic techniques for data preparation, such as Cleaning your Data and Split your Data into Train/Test sets. Finally, we will wrap up with some resources for further reading on the topic.
Why is Data Preparation Important?
Data preparation is important because it can directly impact the results of your machine learning models. Poor data preparation can lead to inaccurate models that do not generalize well to new data. Conversely, good data preparation can lead to more accurate models that are better able to generalize to new data.
Data preparation is also important because it can help you avoid common pitfalls in machine learning projects. For example, if you do not split your data into train/test sets, you run the risk of overfitting your model to the training data. This means that your model will perform well on the training data but will not be able to generalize to new data (i.e., it will not perform well on the test set).
Basic Techniques for Data Preparation
There are many techniques for data preparation, but we will cover two of the most important here: Cleaning your Data and Split your Data into Train/Test sets.
Cleaning your Data: The first step in any machine learning project is to clean your data. This means removing any invalid or missing values from your dataset. Invalid values can cause errors in your machine learning models and missing values can lead to inaccurate results. There are many ways to clean your data, but a good rule of thumb is to use a combination of manual inspection and automated methods (such as using a library like pandas).
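The second technique, splitting your data into train/test sets, is simple to illustrate. Here is a toy, framework-free sketch in plain Python; the 80/20 ratio and the fixed seed are just common conventions for reproducibility, not requirements:

```python
import random

def train_test_split(rows, test_fraction=0.2, seed=0):
    """Shuffle the rows deterministically, then carve off a test set."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # seeded shuffle for repeatable splits
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]

data = list(range(100))
train, test = train_test_split(data)
print(len(train), len(test))  # 80 20
```

In a real Databricks workflow you would use your dataframe library's built-in split (for example a randomSplit-style method) rather than rolling your own, but the principle is the same: the model must never see the test rows during training.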
Training your Machine Learning Model
After you have split your data into training and testing sets, it is time to train your machine learning model. This is the process in which the model “learns” from the training data in order to be able to make predictions on unseen data (in our case, the testing data).
There are many different algorithms that can be used for machine learning, and the choice of algorithm will depend on the nature of your data and the kind of predictions you are trying to make. For example, if you are working with images, you might want to use a convolutional neural network (CNN), whereas if you are working with text data, you might want to use a recurrent neural network (RNN).
Once you have chosen an algorithm, you need to train your model using the training data. This is done by feeding the training data into the model and adjusting the internal parameters of the model so that it can better learn from the data. The process of training a machine learning model is known as “fitting” the model.
After your model has been trained (or fitted), you can then use it to make predictions on new data. For example, if you are trying to predict whether or not an email is spam, you can feed in a new email and have your trained model predict whether or not it is spam.
Evaluating your Machine Learning Model
After you have completed the feature engineering and model selection process, it is time to evaluate your machine learning model. This tutorial will show you how to evaluate your machine learning models using Databricks.
Evaluating your machine learning models is important for two reasons:
1. You want to ensure that your model is actually learning from the training data and is generalizing well to new data.
2. You want to compare different models and choose the best one for your problem.
There are two main ways to evaluate machine learning models: 1) using a train/test split and 2) using cross-validation.
Using a train/test split is the simplest way to evaluate a machine learning model. You simply split your data into a training set and a test set, train your model on the training set, and then evaluate it on the test set. This evaluation will tell you how well your model performs on unseen data.
However, this method has several drawbacks. First, if your training and test sets are not representative of the entire dataset, then your evaluation will not be accurate. Second, if you have a small dataset, then you may not have enough data for both a train/test set and cross-validation folds, which limits your ability to properly assess different models. Finally, if you do not randomly split your data into train/test sets, then you run the risk of overfitting on your test set since it has seen the patterns in the training set (this is why it’s important to use stratified sampling when creating train/test splits).
Cross-validation is a better method for evaluating machine learning models because it overcomes these drawbacks. Cross-validation works by splitting the data into multiple folds (usually 5 or 10), training the model on some of the folds (usually 3 or 4), and then evaluating it on the remaining fold (usually 1 or 2). This process is repeated until all folds have been used for both training and testing. The final score is then computed by taking the average score across all folds.
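The fold mechanics can be sketched in a few lines of plain Python. This is a toy index splitter to show the idea, not Spark's CrossValidator:

```python
def k_fold_indices(n, k=5):
    """Yield (train, test) index lists; every row lands in a test set exactly once."""
    # Distribute n rows across k folds as evenly as possible
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

folds = list(k_fold_indices(10, k=5))
# Each of the 5 folds holds out 2 of the 10 rows for testing.
```

For each (train, test) pair you would fit the model on the train indices, score it on the test indices, and finally average the k scores.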
Saving and Loading your Machine Learning Model
After you have trained your machine learning model, you will want to save it so that you can load it and use it again later. In Databricks, we can use the MLlib library to save and load our models. To do this, we first need to import the library:
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import HashingTF, Tokenizer

# Define the stages used by the pipeline (the column names are illustrative)
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol="words", outputCol="features")
lr = LogisticRegression(maxIter=10)

pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
model = pipeline.fit(trainingData)

Once we have imported the library, we can use the save() method to save our model:

model.save("/tmp/spark-logistic-regression-model")

We can also load our saved model. Because the fitted pipeline is a PipelineModel, we load it with PipelineModel.load():

loadedModel = PipelineModel.load("/tmp/spark-logistic-regression-model")
Making predictions with your Machine Learning Model
In this tutorial, we will show you how to make predictions with your Machine Learning model. We will use the Databricks platform to train and test our model, and then deploy it to a serverless environment.
We will start by loading our data into Databricks. We will then split the data into training and test sets, and train our model using the training set. Finally, we will evaluate our model on the test set, and deploy it to a serverless environment.
Tuning your Machine Learning Model
Tuning your machine learning model is a critical part of training your model to be accurate. In this tutorial, we will show you how to tune your machine learning model using the databricks platform. We will also show you how to use the databricks platform to test your machine learning model.
Conclusion
We hope you enjoyed this Databricks machine learning tutorial! Stay tuned for more tutorials on a variety of topics. In the meantime, check out our other tutorials on subjects like big data, data science, artificial intelligence, and more.
Creating Our Own TabControl In C#
Jul 11, 2017.
In this article we will create our own TabControl component using Buttons and Panels in very easy steps.
View And Update Document Information Panel
Jun 03, 2017.
View And Update Document Information Panel.
ASP.NET Core And History Of Microsoft Web Stack
May 31, 2017.
A brief Introduction to Microsoft Web stack and .NET Core.
Build Modern App With MEAN Stack 2.0 - Part One
May 02, 2017.
Build Modern App With MEAN Stack 2.0.
How To Make Dynamic Control In ASP.NET Using Panels
Apr 30, 2017.
How To Make Dynamic Control In ASP.NET Using Panels.
Stack And Grid Layouts In Xamarin.Forms
Apr 17, 2017.
Stack And Grid Layouts In Xamarin.Forms.
Developer Survey 2017
Mar 24, 2017.
A run-through of the Stack Overflow Annual Developer survey.
Build MEAN Stack On Microsoft Azure Portal
Dec 14, 2016.
In this article, you will learn about MEAN Stack and how to build MEAN Stack on Microsoft Azure Portal.
Quick Start Tutorial: Creating Universal Apps Via Xamarin: Stack Layout and Layout Options - Part Seven
Jun 16, 2016.
In this article, you will learn about stack layout and layout options to create universal apps via Xamarin.
Voice of a Developer: Browser Runtime - Part Thirty Three
May 28, 2016.
In this article you will learn about Browser Runtime in JavaScript. This is part 33 of the article series.
Bootstrap For Beginners - Part Seven (Bootstrap Panels)
Apr 12, 2016.
In this article you will learn about Bootstrap.This is part seven of the series that includes Bootstrap Panels.
Introduction To Data Structures
Feb 12, 2016.
In this article you will learn about Data Structures in C#. I will discuss stack and queues.
Start With Mean Stack - Part One
Feb 10, 2016.
In this article we will learn about mean stack and its components. It consists of MongoDB, Express.js, AngularJS, & Node.js.
Creating A LAMP Stack Using Bitnami In Microsoft Azure (Virtual Machine)
Feb 01, 2016.
In this article you will learn about creating a LAMP Stack using Bitnami in Microsoft Azure (Virtual Machine).
Customizing Windows Forms In C#
Dec 26, 2015.
In this article we will customize basic windows forms using only Panels in diffrerent colors in Visual Studio using C#
Hardware Requirements For Microsoft Azure Stack Server
Dec 24, 2015.
Learn what hardware do you need to build your own Azure stack server in your data center..
Bind Stack Column Chart in ASP.NET Using jQuery And Ajax
Sep 03, 2015.
This article will help you to bind a stack column chart (using the Highcharts plugin) by calling a web service from jQuery AJAX in ASP.NET.
Stack and Queue in a Nutshell
Jul 09, 2015.
In this article we will learn Stack and Queue in a nutshell.
Overview of Collection, Array List, Hash Table, Sorted List, Stack and Queue
Jul 04, 2015.
This article provides an overview of Collections, Array Lists, Hash Tables, Sorted Lists, Stacks and Queues.
WPF Layout: DockPanel
Mar 26, 2015.
This article focuses on the DockPanel in detail.
WPF Layout: Grid
Mar 26, 2015.
This article focuses on the Grid panel in detail.
WPF Layout: Canvas
Mar 19, 2015.
This article focuses on the Canvas panel in detail.
Queue and Stack Collection Classes in C#
Mar 17, 2015.
In this article we will learn how to work with the Queue and Stack collection classes.
WPF Layout: Panels
Mar 16, 2015.
This article explains the basics of the various panels in WPF.
JQuery: Slide Show With Panel in ASP.Net Using jQuery
Feb 10, 2015.
This article shows how to make a slide show in ASP.NET using jQuery.
Column and StackedColumn Charts in ASP.Net
Dec 31, 2014.
In this article we will learn about the Column and StackedColumn charts of ASP.Net.
Stacked Queues, An Advance in Data Structures
Nov 24, 2014.
This article is about stacked queues to provide combined configuration of both structures to help us to access data in a fine way for proper memory allocation of the data.
Applied C#.NET Socket Programming
Oct 20, 2014.
This article explains the key networking concepts, for instance ISO stack, of TCP/IP under the C# framework by employing its essential socket classes and how applications can logically and physically be distributed in a network environment.
The Circular Stack, An Advance in Data Structure
Sep 16, 2014.
In this article you will learn how to make a circular stack, an advance in data structures.
Performing CRUD Operation With Dapper ( Open Source ORM From Stack Overflow) in MVC
Jul 14, 2014.
This article explains how to use Dapper and do CRUD operations with Dapper in MVC.
Overview of Stack in C#
Jun 19, 2014.
This article describes what a stack is in C#. A stack is a data structure in which items are added or removed in a Last In First Out (LIFO) manner.
MEAN Stack to Develop Modern Web Application
Mar 19, 2014.
In this article we will introduce the MEAN stack for developing modern web applications.
Simple Drag Selection of Items in WPF
Feb 17, 2014.
This article explains the simple drag-selection of items in a Canvas or an Items Control derived class. Here a List box is used for hosting the Drag-selectable Items.
Arrangement of Items in a List Box Using WPF
Feb 13, 2014.
This article describes how items can be arranged in a List Box (or any Items Control subclass). Here I have used the Items Panel property for the customization.
Programmatically Adding XsltListViewWebPart Inside Panel in SharePoint
Jan 15, 2014.
In this article we explore XSLTListViewWebpart provided in SharePoint 2010.
Wrap Your Original Exception in C#
Dec 19, 2013.
An exception handling mechanism is essential in every application. A good coding structure does not leave a single piece of code that might cause trouble in the application.
Introduction to New Relic Platform
Dec 05, 2013.
New Relic is a powerful and open SaaS platform to monitor your entire stack; a broad range of programming languages and frameworks, including Java, .NET, PHP, Ruby, Node.js and Python.
Java Virtual Machine
Oct 22, 2013.
In this article you will learn about the Java Virtual Machine and its architecture.
Image Movement in WPF Using Canvas Panel
Oct 08, 2013.
In this article we will see how to move an image in WPF using a Canvas panel.
Generic Collection Classes in C#
Sep 02, 2013.
The collections in the System.Collections.Generic namespace are type-safe, and this article explains them.
Working With Searching Panel Application in MVC 5
Aug 30, 2013.
This article shows how to add a search panel with which you can search by name or by grade in an MVC application with Visual Studio 2013 Preview.
HTML5 Canvas Advanced: Part 3
Aug 23, 2013.
In this article, we will learn about State Stacks, Shadows and some basics of composites.
Date Time Picker in jQuery and JavaScript
Aug 07, 2013.
I searched many websites for date-time pickers and found nothing useful. Finally I found a jQuery and JavaScript date-time picker, and it is very simple and attractive.
Understanding Update Panels With Triggers and Custom Control Events.
Aug 07, 2013.
In this article I'll explain and demonstrate how to use update panel triggers with custom control events efficiently.
Various Types of Panels in XAML Layout
Nov 13, 2012.
In this article we will learn about the XAML layout panels (Grid Panel, StackPanel, DockPanel, Wrap Panel and Canvas Panel) and panel-related properties, with some small examples.
With the announcement of the new SDK for Windows Phone, Microsoft now really starts to stress the convergence of Windows and Windows Phone, and makes it a lot easier to build apps that run on both platforms. Windows Phone apps now reside under ‘Store Apps’, and although you can still write Windows Phone-only apps, it's pretty clear to me which way Microsoft prefers new Windows Phone apps to go.
Two new kinds of Windows Phone apps
Let me make one thing clear: no-one is going to push you. Windows Phone 8.1, like its predecessor, will run 8.0 apps fine. You can go on re-using the skillset you have acquired, still using the XAML style you are used to. In fact, Microsoft stresses this point even more by including Windows Phone Silverlight 8.1 apps, which are a bit of a halfway station: your XAML largely stays the same, but it gives you access to the new APIs. Yet I feel the crown jewel of the new SDK is the new Universal Windows app and, in its slipstream, the enhanced PCL capabilities. But once again – no-one is forcing you this way. Microsoft is very much about going forward, but also about protecting your existing skills and assets.
One solution, two apps
One thing up front: ‘one app that runs everywhere’ is a station that we still have not reached. You are still making two apps – one for Windows Phone, one for Windows. In this sense, it's basically the same approach as before, where you used code sharing with linked files. That was quite a hassle, and now Visual Studio supports a formal way to share files between projects. This makes maintaining oversight dramatically easier, and it gets rid of the confounded ‘This document is opened by another project’ dialog too. Plus – and that's a very big plus – the code you can use on both platforms has become a lot more similar.
Going universal
One warning in advance – if you are going for the Universal Windows app, you're going all the way. It means that for your Windows Phone app, in most cases, you are basically writing a Windows 8 app – using XAML and APIs that are familiar to Windows Store app programmers, but not necessarily to Windows Phone developers. If you already have some experience writing Windows Store apps this won't be a problem. If you have never done that before, some things may look a bit different.
So, I have created a new blank Universal Windows app and it shows like displayed on the right:
Initially, it only shows the App.Xaml.cs as shared. This has one great advantage already – you can go in and disable the frame rate counter for both apps in one stroke :-):
protected override void OnLaunched(LaunchActivatedEventArgs e)
{
#if DEBUG
    if (System.Diagnostics.Debugger.IsAttached)
    {
        //this.DebugSettings.EnableFrameRateCounter = true;
    }
#endif
Switching contexts
If you go through the shared App.Xaml.cs you will notice a few more things: at several places in the file it says #if WINDOWS_PHONE_APP, and I also want to point out this little thing on top, the context switcher.
You can set the context switcher to ‘MyNewApp.Windows’ and you will see the Windows Phone-specific code greyed out. This way, you can very quickly see which code is executed in which version and which is not.
Sharing code – sharing XAML
So I went to the Windows Store app, opened Blend, added some text and two tool bar buttons:
<Page x:Class="MyNewApp.MainPage"
      xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
      xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Page.BottomAppBar>
        <CommandBar>
            <AppBarButton Icon="Accept" Label="AppBarButton" Click="AppBarButton_Click"/>
            <AppBarButton Icon="Cancel" Label="AppBarButton"/>
        </CommandBar>
    </Page.BottomAppBar>
    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <TextBlock HorizontalAlignment="Left" Height="37" Margin="90,53,0,0" TextWrapping="Wrap"
                   Text="Hello World" VerticalAlignment="Top" Width="383" FontSize="29.333"/>
    </Grid>
</Page>
And added some code behind.
private async void AppBarButton_Click(object sender, RoutedEventArgs e)
{
    var m = new MessageDialog("Hello world");
    await m.ShowAsync();
}
Now this won’t make for a spectacular app. If you run it, it will basically show:
It's amaaaaazing, right ;-)? But then I went a bit overboard – I
moved MainPage.xaml and MainPage.xaml.cs to the Shared project, and removed them from the Windows Phone 8.1 project. Run the Windows Store app again – still works. Run the Windows Phone 8.1 app, and sure enough…
Now this may seem pretty cool – and in fact it is – but it would not be something I would recommend doing just like that. Windows Phone is a different beast: the way people use a phone app differs from how they use a Store app on a tablet, and usually you have to think differently about how the layout works on a tablet. Case in point: the app bar. Windows Phone only has room for four buttons. The Secondary Commands show up as menu items, not as buttons. The Top bar does not show up at all. So blindly copying a UI from Windows Phone to Windows Store and back is not a smart thing to do. But the fact that it works is in itself pretty cool.
Sharing code – in a more practical way
In practice, I have found you almost never share whole pages between phone apps and store apps, for a number of reasons:
- Phone design does not always translate easily to tablet design and vice versa (as pointed out above).
- For phone, space is at a premium, and you don't want to drag along all kinds of stuff that's only used on a tablet: think super-high-res assets, or elaborate XAML constructions to accommodate them.
- You may use controls on one platform that are simply not present on the other, or native controls (for example, maps).
To maximize code sharing, you can for instance use partial classes. That works like this: in your code behind MainPage.Xaml.cs you declare the class “partial”.
namespace MyNewApp
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    {
        public MainPage()
        {
            this.InitializeComponent();
        }
    }
}
And then in the shared project you create another file for the same class (MainPage.cs, say). Note that this shared half must not repeat the constructor – a second identical constructor would not compile – it only holds the shared members:

namespace MyNewApp
{
    /// <summary>
    /// The shared half of MainPage. The constructor and the
    /// InitializeComponent call stay in the platform-specific
    /// MainPage.xaml.cs files.
    /// </summary>
    public sealed partial class MainPage
    {
        // members shared by both platform versions go here
    }
}
And this still gives the same result. Partial classes in shared projects can be pretty powerful. From the partial class in shared code you can, for instance, call back to methods in the non-shared portion, provided that those methods are present in both non-shared portions of the class (so in both MainPage.xaml.cs files). You can even call methods in referenced PCLs. Just keep in mind that in the background it works like shared files, and you can do a lot to minimize duplication.
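To make that callback idea concrete, here is a small sketch of my own (the member names are hypothetical, not from the SDK templates): the shared partial calls into behaviour that each platform implements in its own half of the class.

```csharp
// Shared project: the shared half of MainPage calls GoToDetails(),
// a hypothetical method that BOTH platform-specific MainPage.xaml.cs
// files must define, each in their own partial declaration.
public sealed partial class MainPage
{
    private void OnItemSelected()
    {
        GoToDetails(); // resolved against the platform-specific partial
    }
}
```

If one platform's partial lacks GoToDetails(), that project head simply fails to compile, which is exactly the early warning you want.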
More ways of sharing
Another way of sharing code is putting most of it in the shared portion – even stuff that's not present on both platforms – but then using the #if WINDOWS_PHONE_APP and #if WINDOWS_APP directives. This is largely a matter of preference. For smaller classes with few differences I tend to choose this route; for larger classes or code-behind files I tend to go for partial classes. When I am working on stuff that is particularly phone-related, I don't want my code cluttered by Windows code, and vice versa.
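For instance, here is a minimal sketch of the directive approach (my own illustration, not code from the SDK samples; WINDOWS_PHONE_APP and WINDOWS_APP are the conditional compilation symbols the two project heads define):

```csharp
// One shared file, two compile-time behaviours.
public static class PlatformInfo
{
    public static string Describe()
    {
#if WINDOWS_PHONE_APP
        return "Running as a Windows Phone 8.1 app";
#elif WINDOWS_APP
        return "Running as a Windows Store app";
#else
        return "Unknown platform";
#endif
    }
}
```

The inactive branch is greyed out by the context switcher mentioned earlier, which keeps this style quite readable for small differences.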
A third way of sharing pieces of UI would be creating user controls in shared code. They can be reused within pages that are not themselves shared.
Finally – don’t forget that non-code assets can go into the shared project as well. Apart from XAML and code this includes, for instance:
- Images
- Resource files (styles, templates, localization, etc)
- Data files
And PCL?
In the past, I have never been a fan of PCLs, because of their limitations. This has been greatly improved by the new model and I am actually starting to like it. If you make a PCL for use in Windows 8.1 and Windows Phone 8.1 you can put almost everything in it that can be stuffed in a shared project, including user controls – although of course you can’t do elaborate partial-class callback tricks into non-shared code, because a PCL needs to stand on its own. Only what is available on both platforms fits in, so for instance almost everything map-related is out, with a few notable and very important exceptions: Geolocation (GPS tracking) and Geofencing (which is now available on Windows Phone too!).
Still – PCL is only useful if you are building stuff that
- Needs to be reusable over multiple different apps
- Needs to be usable on both Windows Phone and Windows Store
In this respect, there is not much difference from building platform-specific libraries and distributing them via NuGet. This is more for the advanced programmer who is starting to build his own toolkit. This means you, by the time you are working on your third app :-)
Code sharing recap
- Code, XAML or assets that can be shared within one app for both platforms can go into a shared project. For code that does not fit entirely you can use
- Partial classes
- #ifdef directives
- Code, XAML or assets that can be shared over multiple apps for both platforms can go into a PCL
Conclusion
Build for Both has become a lot easier. You still have to deliver two apps, but making them as one has become much less work. Making clever use of partial classes and #if directives helps too, although this requires some thinking. Still, you have to take into account how differently both platforms behave.
A new exciting story for Windows Phone and Windows developers has begun.
The ‘demo solution’, which is hardly worth its name in this case, can be downloaded here.
Pass Channels allow shaders to redirect a color channel to a separate output buffer. These output buffers show up in the Render Options PPGs.
Shaders can easily take advantage of the pass channels mechanism. All they need to provide is a string indicating the name (and optionally a type) of the pass channel(s) they provide. This string is used by the rendering engine to figure out which pass channels to enable for the scene.
At render time the rendering engine adds the requisite framebuffers that will contain the pass channel data and a userdata (XSIPassChannelsUD) that gets linked to the render options. The userdata is required since mental ray does not provide a way of naming the framebuffers. The userdata contains a mapping table between the name of each active pass channel and the framebuffer index to use for it.
The pass channel string is added to the option block of the mental ray renderer in the
metashader section. The option name is called "pass channel". It can contain one or more comma separated pass channel names. Each pass channel can optionally contain a specific image type in brackets. For example "Normal Buffer(normal)" would add a pass channel called "Normal Buffer" with an image of type normal, which maps to the mental ray image type of miIMG_TYPE_N.
Below is an example of how to add this option
MetaShader "pass_channel"
{
Name = "Pass Channel Example Shader";
Type = texture;
Renderer "mental ray"
{
Name = "pass_channel";
Options
{
"pass channel" = "ChannelOne(rgb),ChannelTwo(motion)";
}
}
}
Below is a table of image names and the matching mental ray image types. The channel names are case-insensitive. mental ray treats each image type as a separate channel format and bit depth. XSI allows (in most cases) the channel type and bit depth to be selected individually.
The mental ray code for the pass channel setup seen above would correspond to the code below. The code provided is very basic. Ideally the shader should store away the information from get_pass_channel in shader-local data created in the shader's _init function.
It is probably a good idea to only store pass channel samples, unless otherwise required, when called as a specific ray, usually the eye ray (miRAY_EYE). Otherwise the channel samples might start showing up in reflections and other areas where possibly another pass channel is handling that data.
#include <shader.h>
#include <xsi_miuserdata_defs.h>
#include <string.h>
#ifdef WIN32
#define strcasecmp stricmp
#endif
int get_pass_channel
(
miState *state,
char *channel_name
)
{
miTag data = state->options->userdata;
XSIPassChannelsUD *pc = NULL;
int i;
/* Find the pass channels userdata */
while( data )
{
miUint label;
mi_query( miQ_DATA_LABEL, NULL, data, (void *)&label );
if( label == XSIPassChannels_Magic )
{
mi_query( miQ_DATA_PARAM, NULL, data, (void *)&pc );
break;
}
mi_query( miQ_DATA_NEXT, NULL, data, (void *)&data );
}
if( pc == NULL )
return( -1 );
for( i = 0; i < pc->nb_passchannels; i++ )
{
if( strcasecmp( pc->passchannel[ i ].name, channel_name ) == 0 )
return( pc->passchannel[ i ].fb );
}
return( -1 );
}
/* The shader is set as a pass-through shader and hence the parameter block
consists of a single color only. The shader records the incoming color
and the motion at current intersection point */
miBoolean pass_channel( miColor *result, miState *state, miColor *p )
{
int fb_color, fb_motion;
*result = *mi_eval_color( p );
fb_color = get_pass_channel( state, "ChannelOne" );
if( fb_color != -1 && state->type == miRAY_EYE )
{
mi_fb_put( state, fb_color, result );
}
fb_motion = get_pass_channel( state, "ChannelTwo" );
if( fb_motion != -1 && state->type == miRAY_EYE )
{
miVector motion = { 0.0f, 0.0f, 0.0f };
/* Get the motion vector in internal (world) space */
if( state->motion.x != 0.0f ||
state->motion.y != 0.0f ||
state->motion.z != 0.0f )
mi_vector_from_object( state, &state->motion, &motion );
mi_fb_put( state, fb_motion, &motion );
}
return( miTRUE );
}
Categories: XSISDK
HOISTED on the shoulders of masked gunmen one day, denouncing Israel as “the Zionist enemy” another, Mahmoud Abbas this week predictably alarmed Israelis. Yet the belligerence betrays his paradoxical weakness. As the heir apparent to the late Yasser Arafat, he will overwhelmingly win the Palestinian presidential election on January 9th, but will then have to tread several extremely fine lines to win trust—both at home and abroad—and keep his job.
Mr Abbas (also known as Abu Mazen) does represent the best chance for peace talks in years. Israel and the United States froze out Mr Arafat after the start of the second intifada, now in its fifth year. Mr Abbas, unlike any other senior Palestinian leader, has long called for armed resistance to stop—a call he renewed last month, to the anger of militants within his own Fatah party. Ariel Sharon, Israel's hawkish prime minister, has spoken of a “window of opportunity”. But on the campaign trail Mr Abbas has looked and sounded more like Mr Arafat, making all the demands—Jerusalem as Palestine's capital, right of return for refugees, and so on—that Israel refuses to countenance.
That should be no surprise. Mr Abbas is out on a lonely limb: the moderate extreme of the Palestinian political spectrum. “His own convictions are completely peaceful and non-violent,” says Qays Abdul Kerim of the Democratic Front for the Liberation of Palestine, a relatively moderate party (ie, it believes in blowing up only military targets and armed settlers, rather than just anyone), adding, “I think this policy is naïve.”
Palestinians agree: a poll last month by the Palestinian Centre for Policy and Survey Research found that almost two-thirds think violence has achieved more than negotiations have. A full three-quarters argue that it is what forced Mr Sharon to order a unilateral withdrawal from the Gaza strip, due later this year. Even Ahmed Qurei, the Palestinian prime minister and backroom negotiator of the 1993 Oslo peace accords, is more sceptical than Mr Abbas: on January 1st he said the violence should end only “if there was a credible, serious peace process”, and rejected a conference on Palestinian reform that Britain's Tony Blair proposed and Mr Abbas supports.
If Mr Abbas is really such a radical pacifist, why is he so popular? It is not just the mantle he inherited from Mr Arafat. After four years of bloodshed, 81% of Palestinians, though they deeply distrust Mr Sharon, say they want reconciliation with Israel. They know Mr Abbas is someone Israel might talk to. They also prefer unity to chaos. Marwan Barghouti, the jailed leading member of the Fatah “young guard”, pulled out of the election to avoid a split.
But Mr Barghouti extracted promises in return (for instance, to make freeing prisoners a top priority), and a recent poll showed he would still beat Mr Abbas if he ran (see chart): hence Mr Abbas has to watch his back. His biggest challenge may not be from Islamist groups like Hamas, which is boycotting the presidential poll. Though Hamas has gained points by providing social services, and could do well in upcoming municipal and legislative elections, its religious ideology puts off many secular Palestinians. Its support, currently down to around 20%, will never rise much above 30%, reckons Hani al-Masri, an analyst in Ramallah, the Palestinians' administrative capital on the West Bank.
Instead the problem is the faultlines in Mr Abbas's own party: between peaceniks and militants; old and young; returned exiles like him and Mr Arafat and local heroes like Mr Barghouti; the West Bank and Gaza. If he does not do well, anger could spill out at a Fatah congress due in August. There may not be a direct challenge to his leadership, for the “young guard” is a diverse gaggle, and nobody but Mr Barghouti enjoys widespread support; but keeping the party cohesive will be tough.
That is especially true of Fatah's more extreme elements, the al-Aqsa Martyrs' Brigades—a splintered multitude of militias, united in little but name, but collectively responsible for nearly as many attacks as Hamas. Mr Arafat used to buy their loyalty with extensive secret funds. But more recently that money seems to have dried up or gone missing. Israeli intelligence claims that some four-fifths of al-Aqsa attacks are now sponsored by Hizbullah, a Lebanese militant group backed by Iran. Just after Mr Arafat's death, Fatah gunmen burst into a mourning tent that Mr Abbas was in, killing two people. His triumphal progress with al-Aqsa fighters in Gaza this week was both a show of strength and a reminder of potential weakness: they were of a branch loyal to Muhammad Dahlan, the Gaza security chief who is one of Mr Abbas's staunch backers, and thus one of his chief props.
Mr Abbas therefore needs to win comfortably. Hamas, despite its boycott, has not told voters to stay away, but if they do, a low turnout would damage Mr Abbas's legitimacy. So would a big vote for his nearest rival, Mustafa Barghouti (a clansman but not a close relation of Marwan), who is campaigning on another issue that bothers Palestinians a lot: corruption in the Palestinian Authority.
In other words, his fiery stump speeches look designed to get him elected safely enough to give him a free hand for peace talks. His official platform, by contrast, is padded with safety language—not the refugees' “right of return”, for instance, but “a fair resolution according to United Nations General Assembly Resolution 194”: diplomatic code for “let's discuss it”.
His true message will become clear only if peace talks begin. Getting there, however, is a chicken-and-egg problem. Israel wants evidence of a crackdown on terrorists before making concessions. But Mr Abbas “wants to reach a truce with the resistance, a ceasefire through a deal,” says Mr Masri. And fighters will lay down arms only if they see that his approach works: in other words, if Israel first makes concessions, such as easing up at checkpoints or releasing prisoners.
What could get the ball rolling? Mr Sharon offered to co-ordinate the Gaza withdrawal with the Palestinians, but Mr Abbas refused because it is not part of a full peace deal. Avi Gil, a former director-general of the Israeli foreign ministry, thinks that Israel needs to make a different overture. It could, he suggests, start by handing over responsibility for security to the Palestinian Authority in one town—Bethlehem, say—and if that works, then another, and another. That would be both a test of Mr Abbas and a sign of good faith.
Time is not on Mr Abbas's side. Both Mr Gil and Mr Masri give him the symbolic 100-day credit limit with Palestinians. “If [Israel's] view is that there is a sequence of first withdraw [from Gaza] and then see what happens,” says Mr Gil, “we could miss the chance we've been given.”
This article appeared in the Middle East & Africa section of the print edition under the headline "From the circus ring to the tightrope"
We transform XHTML to LaTeX and BibTeX to allow technical articles to be developed using familiar XHTML authoring tools and techniques.
Occasionally a web page turns the corner from a casually drafted idea to an article worthy of publication. Computer science conferences often require submissions using specific LaTeX styles; for example, the ISCW2004 submission instructions require that submitted papers be formatted in the style of the Springer publications format for Lecture Notes in Computer Science (LNCS). XSLT is a convenient notation to express a transformation from XHTML to LaTeX.
Tools to transform from LaTeX to HTML are commonplace, but there are far fewer to go the other way. A little bit of searching yielded some work[Gur00] that was designed to undo a transformation to XHTML. It used an odd XHTML namespace and exhibited various other quirks specific to reversing that transformation, but it provided quite a boost up the LaTeX learning curve[Mann94].
That code did not integrate with the BibTeX. In order to take advantage of automatic bibliography formatting traditionally provided by LaTeX styles, after studying the BibTeX format[Spen98] for a bit, xh2bibl.xsl was born.
Together with traditional pdflatex and bibtex tools[tetex] and an XSLT processor such as xsltproc[XSLTPROC], this transformation can turn ordinary web pages with just a bit of special markup into camera-ready PDF in specialized LaTeX styles.
This article demonstrates the basic features. See:
They are produced ala:
$ make Overview.pdf
xsltproc --novalid --stringparam DocClass llncs \
    --stringparam Bib Overview --stringparam BibStyle splncs \
    --stringparam Status prepub \
    -o Overview.tex xh2latex.xsl Overview.html
TEXINPUTS=.:../../../2004/LLCS: pdflatex Overview.tex
This is pdfTeX, Version 3.14159-1.10b (Web2C 7.4.5)
...
Output written on Overview.pdf (3 pages, 62474 bytes).
Transcript written on Overview.log.
xsltproc --novalid -o Overview.bib xh2bib.xsl Overview.html
BSTINPUTS=.:../../../2004/LLCS: bibtex Overview
This is BibTeX, Version 0.99c (Web2C 7.4.5)
The top-level auxiliary file: Overview.aux
The style file: splncs.bst
Database file #1: Overview.bib
TEXINPUTS=.:../../../2004/LLCS: pdflatex Overview
This is pdfTeX, Version 3.14159-1.10b (Web2C 7.4.5)
...
Output written on Overview.pdf (3 pages, 67583 bytes).
Transcript written on Overview.log.
TEXINPUTS=.:../../../2004/LLCS: pdflatex Overview
This is pdfTeX, Version 3.14159-1.10b (Web2C 7.4.5)
...
Output written on Overview.pdf (3 pages, 67167 bytes).
Transcript written on Overview.log.
The transformation xh2latex.xsl works in the obvious way for many idioms:
Table support is limited to tables with border="1" and where all rows have the same number of cells. For example:
Specialized markup is required for other idioms. An article.css stylesheet provides visual feedback for this special markup.
To use a latex package, add a link to the head of your document a la:
<link rel="usepackage" title="url" href="" />
The package name is taken from the title attribute. The href attribute is not used in the LaTeX conversion.
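A template along these lines (a sketch of the idea only, not the actual xh2latex.xsl source; it assumes the XHTML namespace is bound to the xhtml prefix) could map such links to \usepackage commands:

```xml
<xsl:template match="xhtml:link[@rel='usepackage']"
              xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
              xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <!-- emit \usepackage{name}, taking the name from the title attribute -->
  <xsl:text>\usepackage{</xsl:text>
  <xsl:value-of select="@title"/>
  <xsl:text>}&#10;</xsl:text>
</xsl:template>
```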
We recommend the url.sty package, per a TeX FAQ.
The following patterns are used to extract the title page material:
support for WWW2006 style authors, following ACM style, is in progress.
The a[@rel="ref"] pattern is transformed to the LaTeX \ref{label} idiom, assuming the reference takes the form href="#label". @@needs testing
The footnote pattern is *[@class="footnote"].
The div[@class="figure"] pattern is transformed to a figure environment; any div/@id is used as a figure label. The file pattern is object/@data. Figures are currently assumed to be PDF; the object/@height attribute is copied over. The caption pattern is p[@class="caption"]. @@need to test this. Be sure to include the epsfig package a la:
<link rel="usepackage" title="epsfig" />
An a element starting with an open square bracket [ is interpreted as a citation reference. The href is assumed to be a local link ala #tag.
The pattern dl/@class="bib" is used to find the bibliography. Each item marked up ala...
<dt class="misc">[<a name="tetex">tetex</a>]</dt> <dd> <span class="author">Thomas Esser</span> <cite><a href="" >The TeX distribution for Unix/Linux</a></cite> February <span class="year">2003</span> </dd>
or
<dt class="misc" id="tetex">[tetex]</dt> ...
Note the placement of the bibtex item type misc and the tag tetex and keep in mind that bibtex ignores works in the bibliography that are not cited from the body.
The xh2bibl.xsl transformation turns this markup into BibTeX format. xh2latex.xsl transforms the entire bibliography dl to a \bibliography{...} reference.
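For the tetex entry shown above, the generated BibTeX would presumably take a shape like this (my reconstruction of the expected output, not captured tool output):

```bibtex
@misc{tetex,
  author = {Thomas Esser},
  title  = {The TeX distribution for Unix/Linux},
  year   = {2003}
}
```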
Capitalization of titles seems to get mangled. I'm not sure if that's a feature of certain bibliography styles or what.
Formatting a LaTeX document is done in several passes. One typical manual shows:
ucsub> latex MyDoc.tex
ucsub> bibtex MyDoc
ucsub> latex MyDoc.tex
ucsub> latex MyDoc.tex
The following excerpt from html2latex.mak shows some rules to accomplish this using make:
.html.tex:
	$(XSLTPROC) --novalid $(HLPARAMS) \
		-o $@ xh2latex.xsl $<
.html.bib:
	$(XSLTPROC) --novalid -o $@ xh2bib.xsl $<
.tex.aux:
	TEXINPUTS=$(TEXINPUTS) $(PDFLATEX) $<
.tex.bbl:
	BSTINPUTS=$(BSTINPUTS) $(BIBTEX) $*
.aux.pdf:
	TEXINPUTS=$(TEXINPUTS) $(PDFLATEX) $*
	TEXINPUTS=$(TEXINPUTS) $(PDFLATEX) $*
Sources:
Does the World Need Binary XML? 481
sebFlyte writes "One of XML's founders says 'If I were world dictator, I'd put a kibosh on binary XML' in this interesting look at what can be done to make XML better, faster and stronger."
For Starters (Score:2, Insightful)
For starters, keep Microsoft out of it.
Re:For Starters (Score:5, Interesting)
IBM has actually tried to introduce some goofy stuff into the XML standards, like line breaks, etc., that should not be in a pure node-based system like XML. Why aren't you picking on them in your comment?
As far as SOAP and XML Web Services (standardized protocols for XML RPC transactions) go, Microsoft was way ahead of the pack. And I rather enjoy using their rich set of .NET XML classes to talk to our Unix servers. It helps my company interop.
The fake grass is always greener... (Score:3, Insightful)
You had me until then; no self-respecting engineer would ever use those terms.
Re:For Starters (Score:3, Insightful)
Re:For Starters (Score:2)
Re:For Starters (Score:4, Insightful)
However, let me re-phrase the grandparent:
"For starters, make sure Microsoft can't extend it to lock out competitors in some way."
Better?
Soko
Then what (Score:3, Funny)
Two words. (Score:2)
Re:Two words. (Score:2)
Re:Two words. (Score:2)
Then we wrap it again, that's what! (Score:5, Funny)
Of course not! That's not XML!
<file=xmlbinary> <baseencoding=64> <byte bits=8> <bit1>0 </bit><bit2>1 </bit><bit3>1 </bit><bit4>0 </bit><bit5>1 </bit><bit6>0 </bit><bit7>0 </bit><bit8>1 </bit> </byte>
<boredcomment>(Umm, I'm gonna skip a bit if y'all don't mind)</boredcomment>
</baseencoding> </file>
Now it's XML!
Re:Then we wrap it again, that's what! (Score:3, Funny)
<file type="xmlbinary">
<baseencoding base="64">
<byte bits="8">
<bit seq="0">0</bit>
<bit seq="1">1</bit>
<bit seq="2">1</bit>
<bit seq="3">0</bit>
<bit seq="4">1</bit>
<bit seq="5">0</bit>
<bit seq="6">0</bit>
<bit seq="7">1</bit>
</byte>
<!--
(Umm, I'm gonna skip a bit if y'all don't mind)
-->
</baseencoding>
</file>
Vast omissions! (Score:5, Funny)
Aside from the mistakes pointed out by others, you also forgot to reference the xmlbinary namespace, the xmlbyte namespace, and the xmlboredcommentinparentheses namespace, and to qualify all attributes accordingly. You also didn't use any magic words like CDATA, and you didn't define any entities. You also failed to supply a DTD and an XSL schema.
This is therefore still not _true_ XML. It simply doesn't have enough inefficiency. Please add crap to it
The solution is clear... (Score:4, Funny)
Re:The solution is clear... (Score:3, Insightful)
Step 1 to getting binary XML (Score:2, Insightful)
That's all you need. XML compresses great.
Re:Step 1 to getting binary XML (Score:4, Insightful)
So, to propose simply compressing it means that there's an expansion (which is expensive) followed by a compression (which is really expensive). That seems pretty silly. However, given an upfront knowledge of which tags are going to be generated, it's pretty easy to implement a binary XML format that's fast and easy to decode.
This is what I did for a company that I worked for. We did it because performance was a problem. Now, if we don't get something like this through the standards bodies, more companies are going to do what mine did and invent their own format. That's a problem -- back to the bad old days before we had XML for interoperability.
Now, if we get something good through the standards body then, even though it won't be human readable, it should be simple to provide converters. To have something fast that is convertible to human readable and back seems like a really good idea.
Re:Step 1 to getting binary XML (Score:3, Insightful)
This is really about making it proprietary. (Score:3, Insightful)
This is all about different companies trying to get THEIR binary format to be the "standard" with XML.
From the article
Images are already
Re:Step 1 to getting binary XML (Score:2, Interesting)
KISS (Score:5, Interesting)
I agree with his point.
What's wrong with just compressing the XML as it is with an open and easy-to-implement algorithm like gzip or bzip2?
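As a rough illustration of how well tag-heavy XML compresses, here is a sketch using only Python's standard library (the payload is made up, but representative of machine-generated XML):

```python
import gzip

# A tag-heavy, repetitive document, typical of machine-generated XML.
xml = ("<items>"
       + "<item><name>widget</name><qty>1</qty></item>" * 500
       + "</items>").encode()

packed = gzip.compress(xml)
# The compressed form is a small fraction of the original size.
print(len(xml), len(packed))
```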
Re:KISS (Score:2)
Re:KISS (Score:2)
Re:KISS (Score:2, Interesting)
Re:KISS (Score:5, Informative)
Data => XML.
XML == large (lots of verbose tags)
XML == slow (have to parse it all [dom], or
build big stacks [sax] to get at data)
Solution:
XML =>
You've solved (kindof) the large problem, but you still keep the slow problem.
What they're suggesting is nothing more than:
XML =>
Basically using specialized compression schemes that understand the ordered structure of XML, tags, etc, and probably have some indexes to say "here's the locations of all the [blah] tags" and attributes so you can just fseek() instead of having to do dom-walking or stack-building. This is important for XML selectors (XQuery), and for "big iron" junk, it makes a lot of sense and can save a lot of processing power. Consider that Zip/Tar already do something similar by providing a file-list header as part of their specifications (wouldn't it suck to have to completely unzip a zip file when all you wanted was to be able to pull out a list of the filenames / sizes?)
"Consumer"/Desktop applications already do compress XML (look at star-office as a great example, even JAR is just zipped up stuff which can include XML configs, etc). It's the stream-based data processors that really benefit from a standardized binary-transmission format for XML with some convenient indexes built in.
That is all.
--Robert
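The zip central-directory point above is easy to demonstrate. A sketch using Python's standard library: the member names are read from the archive's directory header without decompressing any member, much like the tag indexes the comment proposes:

```python
import io
import zipfile

# Build a small archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("data.xml", "<root><item/></root>" * 1000)
    zf.writestr("readme.txt", "hello")

# List its members without decompressing anything: the names come
# straight from the zip's central directory.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
print(names)  # ['data.xml', 'readme.txt']
```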
Re:KISS (Score:3, Interesting)
So if the need is for compression over networks, well, that's only half of XML's performance problems. And if the end result becomes a binary form
Re:KISS (Score:2)
Re:KISS (Score:2)
Re:KISS (Score:3, Informative)
There's no reason why it couldn't be used for xml just as it is for html.
Ewan
Re:KISS (Score:2)
Make a XML compiler... (Score:2)
I guess this is another itch to scratch by the community...
Re:Make a XML compiler... (Score:2)
Re:Make a XML compiler... (Score:2)
Oooh, limelight! (Score:2)
a kabosh? (Score:2)
Re:a kabosh? (Score:2, Funny)
However, if you wanted to go to a binary encoding you could try for something relatively straightforward like:
original:
patented XML encoding algorithm (hexadecimal):
Binary XML has been around a while... (Score:5, Informative)
One of the earliest projects that tried to make a binary XML (as far as I'm aware) was EBML (Extensible Binary Meta-Language) [sourceforge.net], which is used in the Matroska media container [matroska.org].
Re:Binary XML has been around a while... (Score:2)
Re:Binary XML has been around a while... (Score:3, Insightful)
The question should instead be "How can we best standardize binary XML?"
My main fear is the typical "design by committee" style of standards bodies will lead to a super-bloated binary standard containing every pet feature of each participant.
Goals (Score:2)
I'm not sure why they think that one has to come before the other.
Frankly, make it a standard so I can write proper code to handle it, and you'll have me (joe random developer) interested.
Re:Goals (Score:2, Insightful)
Because standards written in a vacuum tend to suck. Why wouldn't you want input from developers with different backgrounds and needs, then cherry pick the best ideas (many of which you didn't think of), toss out universally reviled ones, and implement a broad, useable standard?
Re:Goals (Score:2)
Basically, they could start with some structure, to ensure that structure may always be present. Hopefully.
gzip ? (Score:2, Interesting)
Somebody fill me in
...
Re:gzip ? (Score:2)
there are already standards for this... (Score:3, Interesting)
I fail to see the need to have a "binary xml" file format when there are already facilities in place to compress text streams
Re:there are already standards for this... (Score:5, Insightful)
Re:there are already standards for this... (Score:3, Insightful)
Binary formats contain pointers all over the place... pointers that say "this many bytes to the next record", or if the binary format is designed to be very fast to read, will even contain pointers that say "record 22031 is at offset XXX, record 22032 is at offset YYY". It's very quick to get to record 22032 for these formats, you just jump there and don't even have to wait eons for a physical disk to read in every single byte in between.
Now, compare to XML. EVEN
Re:there are already standards for this... (Score:2)
Maybe this is like comparing assembly to C (Score:5, Insightful)
I'm sure when C came out, the argument was similar: the performance hit wasn't worth the readability or cross compatibility. But as computers and network connections became faster, C became a more viable alternative.
Re:Maybe this is like comparing assembly to C (Score:3, Insightful)
Re:WHO NEEDS FREAKING READABILITY ?! (Score:3, Informative)
And when someone sends me a bunch of data they want importing into a database, in what format should they send it? I'd like to be able to ensure that their data is correct before giving it to my import routine, and when my validator says there's an error, I'
Human readability makes it much easier (Score:3, Informative)
Many people claim that XML is so great because you can "just read and understand it" without having to use cumbersome and hard to understand specifications. This is exactly what makes XML nice for typesetting purposes like HTML, maybe as an alternative for simple configuration files, etc, but indeed NOT for RPC and databases, as you write. I couldn't agree more.
I have seen so much time and money lost due
Re:WHO NEEDS FREAKING READABILITY ?! (Score:3, Insightful)
The best use for XML is at system or domain boundaries, where you cannot control the software on both sides.
For example, a support system might use file exchange to open support tickets in a vendor's system for hardware failures. In this case, the vendor probably needs to deal with multiple different customers, and each of their customers might be dealing with several vendors.
Being able to encapsulate to XML, in this case, is v
You don't need to change XML itself (Score:3, Insightful)
Text compresses quite well, especially redundant text like the tags. So why not just leave XML alone and compress it at the transportation level with protocols like sending it as a zip, let v.92 modems do it automatically, or whatever. No need to touch XML itself at all.
Re:You don't need to change XML itself (Score:3, Interesting)
<SomeTagName>some character data</SomeTagName>
According to the XML spec, the closing tag must close the nearest opening tag. So why does it have to include the opening tag's name? This is 100% redundant information, and is included in every XML tag with children or cdata. An obvious compression would be to replace this with:
<SomeTagName>some chara
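The redundancy argument above lends itself to a toy demonstration. This is a sketch of the proposed transform only; a real implementation would also have to skip comments, CDATA sections, and attribute values:

```python
import re

def shorten_close_tags(xml: str) -> str:
    # A close tag always closes the nearest open tag, so its name is
    # redundant and can be dropped.
    return re.sub(r"</[^>]+>", "</>", xml)

doc = "<SomeTagName>some character data</SomeTagName>"
print(shorten_close_tags(doc))  # <SomeTagName>some character data</>
```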
Binary XML is called ASN.1 (Score:3, Insightful)
But secondly, no, you don't need Binary XML, all you need to do is Gzip it on the wire. It gets as small as Binary XML.
One of the easiest ways to shrink your XML by about 90% is to use short tag names instead of long, descriptive ones. You can use a transformation to use the short names or long names on the wire.
Re:Binary XML is called ASN.1 (Score:2)
And it becomes even slower to parse as a result. Binary XML's advantage isn't its size, it is its parsing performance.
Re:Binary XML is called ASN.1 (Score:2)
Amen To That (Score:5, Insightful)
XML, as implemented today, is often little more than a thin wrapper for huge gobs of proprietary-format data. Thus, any given XML parser can identify the contents as "a huge gob of proprietary data", but can't do a damned thing with it.
Too many developers have "embraced" XML by simply dumping their data into a handful of CDATA blocks. Other programmers don't want to reveal their data structure, and abuse CDATA in the same way. Thus, a perfectly good data format has been bastardized by legions of lazy/overprotective coders.
The slew of publications that exist for the sole purpose of "clarifying" XML serves as testament to the abuse of XML.
Re:Amen To That (Score:2)
Re:Amen To That (Score:2)
If the nails look bent - blame the hammer or the carpenter?
Re:Amen To That (Score:5, Insightful)
The data is interchangeable either way - the only difference is that a binary XML file is not immediately human readable.
Re:Amen To That (Score:2)
Compression and huffing around (Score:2, Insightful)
Let's talk about where this verbose talk of verbosity is stemming from:
apple
orange
pineapple
This is a data set. No one knows what it is.
Here it is again with some pseudo xml style tags
I am listing vegetables here
this is a list of vegetables
vegetables are listed on their own without any children
Several points. (Score:2)
2) Can't webservers and browsers (well, maybe not IE, but then it's not a browser... it's an OS component, haha) transparently compress XML with gzip or some other?
3) Making it binary won't compress it all that much, using a proper compression algo will.
4) Doesn't something like XML, that makes use of latin characters and a few punctuation marks, compress with insane ratios even in lame compression algo's?
5) I
Re:Several points. (Score:3, Informative)
The problem is that XML is being used for web services which are unlike HTML: the requesting machine will not like waiting 2-3 seconds for the response to the method call. These are interoperating applications, not people downloading text to read, so the response time is much more critical.
I agree that gzip compression is a simple solution to the network problem. It does not address the parsing time problem, and in fact exacerbates it, but in my opinion the network issue is the big one. Time works in favor
Oh please god no (Score:2)
SMPTE KLV (Score:2)
it's needed today, not tomorrow (Score:2)
I'm not seeing in the article where he submits a solution to the problem, he just said as computers and networks get faster, the bloat won't be slow anymore. T
Sounds like CORBA or any other RPC. (Score:2, Insightful)
Fielding on binary Waka (HTTP replacement) (Score:2)
xtp:// (Score:2)
Ok, we got a name. Now all we need is one fart smella to design it.
Doesn't work at all (Score:2)
Nope, sorry, those lyrics suck. We're gonna stick with Mr. Bacharach's version.
Binary XML? (Score:2)
But ASCII is binary after all... (Score:3, Interesting)
However, if anything, XML has shown us the power of well-structured information. XML has given the possibility of universal interoperability. Developments in XML-based technologies have led us to the point where we know enough now to create a standard for structured information that will last for several decades.
It's time that we had a new ASCII. That standard should be binary XML.
When I think of the time that has been wasted by every developer in the history of Computer Science, writing and rewriting basic parsing code, I shudder. Binary XML would produce a standard such that an efficient, universal data structure language would allow significant advances in what is technically possible with our data. For example: why is what we put on disk any different from what's in memory? Binary XML could erase this distinction.
A binary XML standard needs to become ubiquitous, so that just as Notepad can open any ASCII file today, SuperNotepad could open any file in existence, or look at any portion of your computer's memory, in an informative, structured manner. What's more, we have the technology to do this now.
Re:But ASCII is binary after all... (Score:3, Interesting)
(1) Have every PC OS contain a DBMS (this is not as difficult as you would think)
(2) Always keep your data in a DBMS
(3) Have said DBMS transfer the data via whatever method it would like. Chances are this would be some sort of compact, efficient binary method.
Re:But ASCII is binary after all... (Score:3, Insightful)
However (as I tried to emphasize), ASCII is binary too. It's not that binary is inherently more difficult to debug. It's that we need a binary standard as universal as ASCII has become.
Imagine debugging in the 1960s, when ASCII wasn't standardized. We forget about those
XML images !? (Score:2, Funny)
Yeah, right ! XML binary images... So needed...
Overwhelming feeling... (Score:5, Insightful)
Didn't anyone remember that text processing was bulky and expensive? Sometimes the tech community seems to share the same uncritical mind as people who order get-rich-quick schemes off late night infomercials. I doubt XML would have gotten out of the gate as is, had the community demanded these kinds of features from the get-go.
Re:Overwhelming feeling... (Score:2)
what's wrong with GZip? (Score:2)
Why not re-examine http? (Score:3, Interesting)
We need to look towards http 2.0. What I would want:
- pipelining that works, so that it could be enabled for use on any server that supports http 2.0
- gzip and 7zip [7-zip.org] support.
- All data is compressed by default (a few excludes such as
- Option to initiate a persistent connection (remove the stateless protocol concept), via an HTTP header on connect. This would allow for a whole new level for web applications via SOAP/XML.
There are tons of other things that could be enhanced for today's uses.
HTTP is the problem, not XML.
Re:Why not re-examine http? (Score:3, Insightful)
Please remember that not all XML data is transmitted by HTTP however (thank god).
It's a markup language (Score:2)
It's a markup language, it's not supposed to be ideal for general purpose data transfer.
People should stop trying to optimize it for a task it wasn't designed for. Focus on making XML better for markup, and for pity's sake come up with something else that's concise and simple and efficient for general purpose use.
Binary not needed - better table format needed. (Score:3, Insightful)
This is what made us balk at using XML for storing NMR spectroscopy data, even though it is already in a textual form to begin with. The current textual form is whitespace-separated, little short numbers less than 5 digits long, for hundreds of thousands of rows. That isn't really that big in ascii form. But turn it into XML, and a 1 meg ascii file turns into a 150 meg XML file because of the extra repetitive tag stuff.
In another bit of irony, we can't find an in-memory representation of the data as a table which is more compact than the ascii file is. The original ascii file is even more compact than a 2-D array in RAM. (because it takes 4 bytes to store an int even when that int is typically just one digit and is only larger on rare occasions.)
The article doesn't go far enough... (Score:5, Insightful)
From experience, I can state that using XML in any high performance situation is easy to screw up. But once you get past the basic mistakes at that level, what other inherent problems are there?
Oh, and just stating "well, the format is obviously wasteful" just because it's human readable (one of its primary, most useful, features) is NOT an answer.
I get the feeling that this perception of XML is being perpetuated by vendors who do not really want to open up their data formats. Allowing them to successfully propagate this impression would be a very real step backwards for all IT professionals.
Anecdotal example (Score:3, Interesting)
The client attempted to open it in a DOM-based application which I suspect used recursion to parse the data (recursion is easy to code). Needless to say, it brought their server to its knees.
We switched to flat files shortly thereafter.
In my problem domain, where 20MB is a small data set, XML is useless. XML does not seem to scale well at all (though using a SA
XML doesn't need to be non-ascii to be small (Score:4, Informative)
Stop using bad DTDs. There seems to be a DTD style in which you avoid using attributes and instead add a whole lot of tags containing text. Any element with a content type of CDATA should be an attribute on its parent, which improves the readability of documents and lets you use ID/IDREF to automatically check stuff. Once you get rid of the complete cruft, it's not nearly so bad.
Now that everything other than HTML is generally valid XML, it's possible to get rid of a lot of the verbosity of XML, too. A new XML could make all close tags "</", since the name of the element you're closing is predetermined and there's nothing permitted after a slash other than a >. The > could be dropped from empty tags, too. If you know that your DTD will be available and not change during the life of the document, you could use numeric references in open tags to refer to the indexed child element type of the type of the element you're in, and numeric references for the indexed attribute of the element it's on. If you then drop the spaces after close quotes, you've basically removed all of the superfluous size of XML without using a binary format, as well as making string comparisons unnecessary in the parser.
Of course, you could document it as if it were binary. An open tag is indicated with an 0x3C, followed by the index of the element type plus 0x30 (for indices under 0xA). A close tag is (big-endian) 0x3C2F. A non-close tag is an open tag if it ends with an 0x3E and an empty tag if it ends with an 0x2F. Attribute indices are followed with an 0x3D. And so forth.
Wrong Problem (Score:3, Insightful)
Re:Wrong Problem (Score:3, Insightful)
Use XML in places where it makes sense: Interfaces between different companies/business partners/departments etc, interfaces between mutually hostile vendors, really long time data storage.
Using XML as the data format between two tightly coupled Java programs, standing next to each other and exchanging massive amounts of data, is insane.
This is of course a simplified example BUT the point is ALWAYS beware of the trade-offs you do when you make a technology choice. Same things go for algorithms
XML not useful for xferring copious binary data (Score:3, Insightful)
A good binary XML specification could be an extremely good fit for us.
And, don't suggest that we just compress XML and send that. Here's why: first we have to expand all that digitized data into some sort of ASCII encoding, which is then compressed. End result: no gain and a possible loss of precision in the data.
A real, live, useful binary XML spec could help us immensely. I say BRING IT ON!!!!
BTW, wasn't DIME [wikipedia.org] supposed to address these problems? What happened to DIME, anyway?
Possibly I'm a cynic (Score:3, Funny)
We should rejoice, buy more CPUs, and move the problem from XML, to languages with poor concurrency support.
Re:Binary = Proprietary (Score:2)
Re:Binary = Proprietary (Score:3, Insightful)
Of course binary doesn't equal proprietary. Those are two completely different concepts.
PNG is a binary format. It isn't proprietary, though. And although I can't immediately find a text-based proprietary format, such formats are not impossible (although arguably easier to reverse-engineer than binary proprietary formats).
But if the XML is really such a problem, I suggest the simple solution. Compressing XML with a simple and open algorithm like gzip or bzip2 is the way to go. XML usually compresses very
Re:Binary = Proprietary (Score:3, Insightful)
As long as it's standardized, the standard is freely available to anyone who wants it, it does not depend on an external library, and it is unencumbered by any sort of patent, it isn't proprietary.
I hate XML right now because of all the string processing and parsing. Text is a sloppy way of defining something, and it begets lots of big processing libraries. It's OK for big PC memory hog apps, but I can't build a small enough one that is still robust enough to w
Microsoft XML (Score:3, Interesting)
If Microsoft doesn't respect text-only XML, what do you think will happen when^H^H^H^Hif binary XML is out?
Re:Binary = Proprietary ... I disagree (Score:3, Insightful)
It far outweighs it, huh? I guess you have never heard of a large segment of the computing world referred to as embedded systems.
If you can develop a good parser (not that hard), the cost difference is negligible, if any.
This is simply untrue; development of a good parser is easy, but it's added bloat that isn't negligible for many computing devices outside of the PC/Server realm. Not to mention the added network tra
Re:Binary = Proprietary ... I disagree (Score:3, Insightful)
Re:ZIP ?! (Score:2)
Because smaller file sizes is only one of the reasons for Binary XML.
Simply compressing it makes it smaller, but does nothing to simplify handling. Parsing XML is the big hairy deal in this case. Things like XML include a lot of ambiguities and complex things, parsing/representing the trees can be a challenge. Think processing of name-spaces and all of the myriad things in XML.
I suspect the purp
Because it's freaking slow (Score:2)
As it happens, most SOAP requests are NOT human readable. Sure, I can sit and figure one out, but unless it's a trivial example, trying to decipher it isn't easy.
A standard binary xml format would allow a standard binary soap variant. Debuggers could hand bsoap->soap tran
#include <itemcopyjob.h>
Detailed Description
Job that copies a set of items to a target collection in the Akonadi storage.
The job can be used to copy one or several Item objects to another collection.
Example:
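A minimal usage sketch, assuming a valid Akonadi::Item::List and target Akonadi::Collection obtained elsewhere; this is illustrative, not the library's own example, and error handling follows the usual KJob completion pattern:

```cpp
#include <itemcopyjob.h>

// Copy a set of items into a target collection and report failures.
// 'items', 'targetCollection', and 'this' are assumed to come from the
// surrounding application code.
auto *job = new Akonadi::ItemCopyJob(items, targetCollection, this);
connect(job, &KJob::result, this, [](KJob *job) {
    if (job->error()) {
        qWarning() << "Item copy failed:" << job->errorString();
    }
});
```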
Definition at line 62 of file itemcopyjob.h.
Constructor & Destructor Documentation
Creates a new item copy job.
- Parameters
-
Definition at line 55 of file itemcopyjob.cpp.
Creates a new item copy job.
- Parameters
-
Definition at line 64 of file itemcopyjob.cpp.
Destroys the item copy job.
Definition at line 73 of file itemcopyjob.cpp.
Member Function Documentation
This method should be reimplemented in the concrete jobs in case you want to handle incoming data.
It will be called on received data from the backend. The default implementation does nothing.
- Parameters
-
- Returns
- Implementations should return true if the last response was processed and the job can emit result. Return false if more responses from server are expected.
Reimplemented from Akonadi::Job.
Definition at line 91 of file itemcopyjob.cpp.
This method must be reimplemented in the concrete jobs.
It will be called after the job has been started and a connection to the Akonadi backend has been established.
Implements Akonadi::Job.
Definition at line 77 of file itemcopyjob.cpp.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2019 The KDE developers.
Generated on Thu Dec 5 2019 04:14:05 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online.
VENTURE CAPITAL RETURNS AND PUBLIC MARKET PERFORMANCE

By MELISSA CANNON GUZY

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA, 2010

© 2010 Melissa Cannon Guzy

To my family and friends, who have nurtured my intellectual curiosity throughout my lifetime

ACKNOWLEDGMENTS

I would like to thank my family for all of their support and encouragement over the years, as well as the University of Florida for allowing me to continue my education. Thank you to my committee for their input and advice, including Dr. Joel Houston, and a special thanks to my committee chair, Dr. David Brown. I would also like to thank Kelly Herring, Dean Kraft, David Day and Christopher Needles of the University of Florida for their support.

TABLE OF CONTENTS

1 VENTURE CAPITAL
   Venture Capital Returns
   Literature Review
2 DATA DESCRIPTION
3 SEMICONDUCTOR INDUSTRY
   Venture Capital Returns in the Semiconductor Industry
   Semiconductors Industry Public vs. Private
4 CLEANTECH INDUSTRY
   Venture Capital Returns in Cleantech Industry
   Cleantech Industry Public vs. Private Investment Returns
5 INTERNET INDUSTRY
6 WINDOW OF CONVERGENCE
7 CHINA EMERGING VENTURE CAPITAL SECTOR
8 ROLLING BETA
9 CONCLUSION
LIST OF REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

1-1 Market Return and Standard Deviation
1-2 Venture Returns Compared to the Nasdaq Composite and the Russell 2000
3-1 Semiconductor Returns Summary
3-2 Semiconductor Correlation Coefficient Matrix 1990-1995
3-3 Semiconductor Correlation Coefficient Matrix 1995-2000
3-4 Semiconductor Correlation Coefficient Matrix 1998-2000
3-5 Semiconductor Correlation Coefficient Matrix 2000-2009
3-6 Semiconductor Correlation Coefficient Matrix 2007-2009
4-1 Mean Returns and Standard Deviation for the Cleantech Sector
4-2 CleanTech Correlation Coefficient Matrix 2000-2009
4-3 CleanTech Correlation Coefficient Matrix 2007-2009
5-1 Mean Returns and Standard Deviation for the Internet Sector
5-2 Internet Correlation Coefficient Matrix 1995-2000
5-3 Internet Correlation Coefficient Matrix 1998-2000
5-4 Internet Correlation Coefficient Matrix 2000-2009
5-5 Internet Correlation Coefficient Matrix 2007-2009

LIST OF FIGURES

1-1 Rolling Cash Draw Downs vs. Distributions
7-1 China Exit Multiples

LIST OF ABBREVIATIONS

A-Mean: Average Mean
AATI: Advanced Analogic Technologies Incorporated
AMAT: Applied Materials, Inc.
CAGR: Compound Annual Growth Rate
CAPM: Capital Asset Pricing Model
CTIUS: The Cleantech Index
DARPA: The US Defense Research Agency
DOE: Department of Energy
ECO: The WilderHill Index
EDA: Electronic Design Automation
ENOC: EnerNOC, Inc.
EV: Electric Vehicles
FSLR: First Solar, Inc.
INTC: Intel Corporation
IPO: Initial Public Offering
IRR: Internal Rate of Return
IXIC: Nasdaq Composite
Kleiner Perkins: Kleiner Perkins Caufield & Byers
LP: Limited Partner
LSI: LSI Corporation
M&A: Merger and Acquisition
MOX: Morgan Stanley Internet Index
MW: Megawatts
NASDAQ: National Association of Securities Dealers Automated Quotations
NVCA: National Venture Capital Association
NYMEX: New York Mercantile Exchange
PC: Personal Computer
PLXT: PLX Technology
PV: Photovoltaic
R&D: Research and Development
SD: Standard Deviation
Sequoia: Sequoia Capital
SOX: Philadelphia Semiconductor Index
SPGTCLNT: S&P Global Clean Energy Index
STP: Suntech Power Holdings
SUZ: Suzlon Energy
T-Mean: Geometric Mean
TAM: Total Available Market
TE: Thomson VentureXpert Energy/Industrial Returns
TI: Thomson VentureXpert Internet Returns
TS: Thomson VentureXpert Semiconductor Venture Returns
TSMC: Taiwan Semiconductor Manufacturing Corporation
TXN: Texas Instruments Incorporated
UMC: United Microelectronics Corporation
VantagePoint: VantagePoint Venture Partners
VC: Venture Capital
XLNX: Xilinx, Inc.
YGE: Yingli Green Energy Holding Company Limited
ZOLT: Zoltek Companies, Inc.

ABSTRACT

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

VENTURE CAPITAL RETURNS AND PUBLIC MARKET PERFORMANCE

By Melissa Cannon Guzy

May 2010
Chair: David Brown
Major: Finance
The Venture Capital industry, at times, produced spectacular financial returns from a variety of industries, including semiconductors, biotechnology, the Internet and optical communications, among others. During the late 1990s, after several years of high returns on prior venture capital investments, investor interest in the asset class increased. This interest in venture capital soared after 1995, with new commitments from limited partners (LPs) rising 24.2 percent in 1996 to nearly $10.5 billion and then rising an additional 45.0% the following year. By 2000, new LP commitments reached $93.4 billion, more than 10 times the amount available in 1995,1 chasing returns above the public markets. However, the technology bust of 2000 began a decade of challenging times for the VC community: the number of IPOs declined, IPO thresholds rose, investment holding periods lengthened, companies became more capital intensive and merger and acquisition (M&A) values declined.

The analysis was designed to study venture capital returns during the time periods of 1990-1999 and 2000-2009 and then compare the VC returns to U.S. public market benchmarks and individual stocks in three industries: semiconductors, CleanTech and the Internet. These sectors have historically accounted for at least 20% of the annual investment by VCs at one time or another. For each sector, the venture return data was collected from the Thomson VentureXpert database and Cambridge Associates LLC and then compared to the broader market performance of the Nasdaq Composite and the Russell 2000 Index. Additionally, in each sector, the industry-specific venture returns were also compared to the public market performance of an industry-specific index and at least five public companies. Other assets considered as industry growth catalysts were included to determine any impact of market conditions that could influence returns.

1 NVCA
Each sector analysis includes a returns correlation matrix, an R² matrix, a covariance matrix and a beta matrix to determine the relationships among assets. Additionally, the data from the analysis was used to identify whether there was a window of time for an optimal VC exit, when recent initial public offering (IPO) returns converge to either the industry benchmarks, the Nasdaq Composite or the Russell 2000 Index.

The Venture Capital industry was historically at the forefront of the development of many new markets and companies, creating clusters of entrepreneurs, utilizing industry contacts, adding management expertise and providing capital. By focusing on emerging sectors where new companies could become industry giants, the VC industry generated extraordinary returns for its investors. In March 2000, the NASDAQ market peaked at 5,132, which was 500% above its level in August 1995, the time of the Netscape IPO. By 2002, the NASDAQ market had fallen to 1,185, and Silicon Valley was left with the remains of failed companies. Venture capital returns have been struggling to recover ever since.

Venture Capital Returns

Venture capitalists and their Limited Partners are still feeling the effects of the technology crash a decade ago, as well as the confluence of events that occurred in late 2008. VCs have been choked by a near stoppage of IPOs and declining M&A valuations. For the first time since the National Venture Capital Association (NVCA) began tracking this data, in Q4 2008 and Q1 2009 there were no IPOs of VC-backed companies. For the calendar years of 2008 and 2009, there were only 7 and 9 venture-backed IPOs, totaling $600 million and $1.2 billion in proceeds respectively. There were 392 IPOs of venture-backed companies between 2001-2008, significantly fewer than the 1,776 between 1992 and 2000. VCs are now far more dependent upon M&A for liquidity.
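The relationship matrices named above (correlation, R², covariance and beta) can all be derived from the same return series. The sketch below is an illustrative Python/NumPy reconstruction, not the thesis's actual computation; the asset names are placeholders.

```python
import numpy as np

def relationship_matrices(returns):
    """Compute the pairwise covariance, correlation, R-squared, and beta
    matrices for a dict of {asset_name: quarterly return series}.
    beta[i, j] = cov(i, j) / var(j): sensitivity of asset i to asset j."""
    names = list(returns)
    data = np.vstack([returns[n] for n in names])  # rows = assets
    cov = np.cov(data)            # covariance matrix
    corr = np.corrcoef(data)      # correlation coefficient matrix
    r2 = corr ** 2                # R-squared matrix
    beta = cov / np.diag(cov)[np.newaxis, :]
    return names, cov, corr, r2, beta
```

Each diagonal beta is 1 by construction, and each off-diagonal entry reproduces the usual regression beta of one asset's returns against another's.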
During the period of 2002-2008, the median M&A exit ratio for the U.S. was just 1.7x, and by 2009 the ratio had dropped to 0.9x, which means that these investments are returning less than invested capital.2 However, the M&A exit ratio for Chinese companies during the same period was 8.8x of invested capital, significantly higher than the U.S., Europe or Israel.

According to press releases by Cambridge Associates LLC, venture returns have declined significantly over the last decade. The rolling ten-year return, reported quarterly, was 8.4% as of September 30, 2009, down from 14.3% as of June 30, 2009 and down from 40.2% one year earlier. The decline in the 10-year rolling return data was not unexpected, as the lucrative 1999 dot-com exits are no longer included in the 10-year computation. According to the National Venture Capital Association (NVCA) and Cambridge Associates LLC,3 whose sample is comprised of 1,287 venture capital funds, including fully liquidated partnerships formed between 1981 and 2009, the one-year return for venture capital was -12.44%, compared to the Russell 2000 Index, which was -9.55%, and the Nasdaq Composite, which posted a gain of 1.46%. Cambridge Associates reports quarterly results based upon the vintage year of a fund. For this analysis, reported quarterly net IRR returns were used from the Thomson VentureXpert database. Table 1-1 summarizes the returns for the Nasdaq Composite, the Russell 2000 Index, top-quartile venture capital funds and overall venture returns for the periods between 1990-2000 and 2001-2008.

2 E&Y Venture Insights Q3 2009
3 Cambridge Associates LLC notes in the report that since venture-backed companies usually require 5-8 years to mature, these funds are really too young to have produced meaningful returns, so returns relative to the benchmark statistic may be irrelevant but could be indicative of a trend.
Table 1-1: Market Return and Standard Deviation 4

                                       1990-2000    2001-2008
Nasdaq Composite geometric mean        14.36%       1.52%
Nasdaq Composite std. deviation        .306         .370
Russell 2000 geometric mean            7.74%        5.47%
Russell 2000 std. deviation            .207         .300
Top Quartile Venture geometric mean    53.86%       6.16%
Top Quartile Venture std. deviation    .501         .232
Overall Venture geometric mean         33.74%       1.55%
Overall Venture std. deviation         .316         .168

During the period of 1990-2000, venture firms, regardless of whether they were in the top quartile, generated very high returns, with a standard deviation similar to the Nasdaq Composite. Since 2001, the top quartile of VC firms has continued to outperform the Nasdaq Composite as well as the Russell 2000 Index, although by a much narrower margin. The overall venture returns during this period underperformed both the Russell 2000 Index and the Nasdaq Composite, prior to any adjustment for liquidity. The volatility of VC returns dropped significantly during this period. Between 1990 and 2000, venture capital funds generated stellar average returns of 33.74%, compared to 14.36% for the Nasdaq Composite and 7.74% for the Russell 2000; net returns in excess of the benchmark indices were 19.38% and 26.00% respectively.

4 The Nasdaq Composite and the Russell 2000 Index return calculations are the time-weighted averages over the specified time periods, calculated using the annualized point-to-point quarterly returns. The venture capital returns were derived from the Thomson VentureXpert database and represent the time-weighted annualized pooled average over the specified time period. The Thomson database uses periodic IRRs based upon a defined sample set. According to Cambridge Associates LLC, the equal-weighted pooled mean venture return for a 2000 vintage fund was -2.76%. The results may vary due to disparity of sample size.
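The geometric (time-weighted) means and standard deviations of the kind reported in Table 1-1 can be annualized from quarterly returns as sketched below. This is a minimal illustration of the standard formulas, not the exact Thomson VentureXpert or index-vendor methodology.

```python
import numpy as np

def annualized_geometric_mean(quarterly_returns):
    """Annualized geometric (time-weighted) mean return from quarterly
    returns expressed as decimals (0.05 means 5%)."""
    q = np.asarray(quarterly_returns, dtype=float)
    growth = np.prod(1.0 + q)          # cumulative growth factor
    years = len(q) / 4.0               # quarters -> years
    return growth ** (1.0 / years) - 1.0

def annualized_std(quarterly_returns):
    """Annualized standard deviation, scaling quarterly volatility
    by the square root of 4 periods per year."""
    q = np.asarray(quarterly_returns, dtype=float)
    return np.std(q, ddof=1) * np.sqrt(4.0)
```

Note that a volatile series has a geometric mean below its arithmetic mean, which is why the thesis reports both (its "T-Mean" and "A-Mean").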
Table 1-2 summarizes the return spread during the analyzed time periods.

Table 1-2: Venture Returns Compared to the Nasdaq Composite and the Russell 2000 5

                                                  1990-2000    2001-2008
Venture Top 20 spread over the Nasdaq Composite   39.50%       4.63%
Venture Top 20 spread over the Russell 2000       46.12%       .69%
Venture spread over the Nasdaq Composite          19.38%       3.07%
Venture spread over the Russell 2000              26.00%       7.02%

The overall venture returns are not only underperforming the broader market indices, but in some cases they are failing to return the invested capital. Vintage-year venture funds raised in 1999 have paid out just 0.63 times the amount of capital paid in by Limited Partners, while the 2000 vintage funds have returned just 0.38 times. Another measure of actual performance is cash and stock distributions compared to capital drawn down by VCs, as summarized in Table 1-3.

Table 1-3: Venture Capital Cash Flow Summary 6

             Capital Drawn Down   Cash Distributions   Stock Distributions   Total Distributions   Percentage of Capital Returned
1990-2000    $52,865.83           $35,013.13           $37,974.99            $79,988.12            38.063%
2001-2008    $85,022.55           $58,428.07           $17,440.51            $75,868.58            10.767%

It is evident that there has been a performance shift in VC returns post-2000, which has persisted for the last 8 years. The market conditions that existed during the positive return period need to be examined further. The optimal time for a VC to invest is during a period of rapid and transformational industry innovation, in emerging geographical markets or in emerging market segments. It is also extremely important that the environment is receptive to start-up companies; customers must be willing to purchase goods and services from a new market entrant.

5 See footnote 4.
6 Data was derived from Thomson VentureXpert. The sample size was $225,381.30 million. Average sample size was 897.
Historically, VCs that identified nascent market segments ignored or dismissed by the large companies, segments that then underwent rapid innovation, generated substantial financial returns. It is evident that during the period from the mid-1990s to mid-2000, Venture Capital cumulative distributions to the LPs far exceeded draw downs by VC firms (Figure 1-1).

Figure 1-1. Rolling Cash Draw Downs vs. Distributions

The rapid innovation characteristic of successful VC industries was evident in the semiconductor and Internet markets and is apparent in the emerging CleanTech industry. In order for start-ups to succeed in the CleanTech sector, government policy and economic support, including financial subsidies to make the economics attractive for the ultimate customer, must continue. Venture-backed companies face difficulties in competing in industries that favor the large incumbent, characterized by expensive and long testing cycles, limited distribution channels and capital intensity. Additionally, industries that require only sustaining innovation are also unattractive: incumbents can easily implement the next-generation product, and they have significantly more knowledge regarding the industry's development and the needs of customers. In transformational markets, the existing market leaders may not feel it is necessary to allocate resources, since the perception is that market risk is high, timing is uncertain and the future return is unknown. Once an industry segment begins to mature, with growth and profitability, the more established companies will enter the market and displace the start-ups, unless a strong company exists with the resources to ward off competition. Managing the exit strategy and timing is an important element of VC returns; increased competition may compromise realized returns.
This thesis summarizes the observed correlation coefficients, geometric mean and average mean returns, as well as the abnormal return observations, for three venture-backed industries: semiconductors, CleanTech and the Internet, which were then compared to benchmark indices and particular public stocks in each market segment. Additionally, the rolling beta of the overall venture industry, the Nasdaq Composite and the Emerging Market Index was compared to the S&P 500. The thesis includes the findings from the observed results and the potential implications for venture capital returns. The convergence analysis identified the window of time for the correlations of the public companies to converge to the broader benchmark indices, evidenced by an observed correlation coefficient in excess of .50 for three consecutive quarters.

Over the past 30 years, VCs have pursued industries that have the potential to generate above-market returns, from time to time adjusting their allocations accordingly. Identifying which future market will provide an accelerated growth opportunity for VCs has not been easily replicated, especially over the last 10 years. Since the peak in 2000, when VCs invested $100.5 billion in 7,913 deals, VC investment totals have dropped, reaching $17.7 billion in 2,795 deals in 2009, the lowest level of dollar investment since 1997.7 Later-stage and expansion capital represented 64.50% of the capital invested in 2009, compared to 60.11% in 2000. The challenge for VCs is to identify market opportunities and find qualified management teams that can commercialize potentially disruptive technology or create an emerging market segment; this is a necessary condition for start-ups to succeed and for VCs to generate above-public-market returns.

Literature Review

Many academic papers have been written regarding the VC industry, but few have examined the correlation and relationship, by sector, between the VC industry, specific industry indices and public market comparables.
Very few research papers address the market and technology environment that existed pre-2000 and enabled VCs, under favorable market conditions, to generate attractive returns.

In examining venture capital behavior, several papers have addressed the timing of the influx of capital. In Venture Capital Investment Cycles: The Impact of Public Markets (Gompers, Lerner and Scharfstein, 2005), the authors find that venture capitalists with the most industry experience increase their investments the most when public signals become more favorable. The authors believe that venture capitalists rationally respond to attractive investment opportunities signaled by public market shifts. This is consistent with why VCs flock to industries simultaneously, with the general belief that a rising tide, defined by a high P/E rating and market capitalization, lifts all boats. The PricewaterhouseCoopers MoneyTree data showing that VC investment peaked in 2000 supports this theory. However, the flow of funds into venture capital investments may in fact be negatively correlated with the returns generated by venture capital firms. According to the PricewaterhouseCoopers MoneyTree Report, a quarterly study of venture capital investment activity in the U.S. produced in collaboration between PricewaterhouseCoopers and the National Venture Capital Association, based upon data from Thomson Reuters, the flow of funds into the semiconductor, Internet and telecommunication markets crested at the peak of the Nasdaq.

The behavior of venture capital returns by sector should be examined to fully understand the impact of an industry life cycle. The volatility of venture returns, the flow of funds and the types of investments made in any given year in the venture capital industry are also well documented; however, the variation in returns is often ignored as sectors are abandoned.

7 PricewaterhouseCoopers/National Venture Capital Association MoneyTree Report, based on data from Thomson Reuters.
What happens over time to the returns of sectors that fall out of favor or lack IPOs should be examined to understand the full life cycle of an industry segment.

The flow of funds into the industry is highly correlated with an increase in IPO valuations, which leads additional venture capital firms to raise additional funds (Gompers and Lerner, 1998b; Jeng and Wells, 2000). Moreover, returns of venture capital funds tend to be highly correlated with the returns of the market as a whole (Cochrane, 2005; Kaplan and Schoar, 2005; Ljungqvist and Richardson, 2003). However, the correlation to the public markets varies over time and by industry segment, which reflects the maturation of a particular industry segment over its life cycle. The growth of the venture capital industry in the early 1980s and the unprecedented growth in venture capital fundraising in the 1990s were matched by a rise and fall in IPO market activity. This suggests that both venture capital firms and LPs use public market and IPO activity as a signal of the potential for returns, rather than the actual returns or cash distributions on a risk-adjusted basis.

The volatility of the number of investments made in a specific industry is often the subject of academic inquiry (e.g., Scharfstein and Stein, 1990), which addresses the fluctuation in venture capital investment activity. An influx of capital into a specific sector may reflect venture capitalists feeling compelled to follow the "herd" because of the reputational consequences of being an industry contrarian or being left out. VCs also find that participation by a co-investor is preferred, with the expectation that the other firm will reciprocate in the future (Lerner, 1994a). Second, by sharing the due diligence, VCs can correlate market signals and thereby may select better investment opportunities in situations of uncertainty and unknown return potential (Wilson, 1968; Sah and Stiglitz, 1986).
Finally, VCs tend to have expertise that is both sector-specific and location-specific, and syndication can spread information across industry boundaries, allowing VCs to diversify their portfolios (Stuart and Sorenson, 2001). The number of firms pursuing an investment opportunity is often perceived as a quality indicator. When a VC invests alongside another reputable firm with prior experience and expertise in a particular sector, the co-investor taking the same risk often justifies the investment decision. Additionally, building a syndicate often reduces the risk that capital will be unavailable for follow-on financings.

Academic papers that have attempted to identify the drivers behind the venture capital industry have examined the number of patents, levels of R&D spending and venture capital funding (Hellmann and Puri, 2000). Kortum and Lerner (2000) focused on the surge of venture capital funds after 1978, when the U.S. Department of Labor freed pensions to invest in venture capital, as well as on the ratio of patents to R&D spending. Kortum and Lerner (2000) concluded that patent filing patterns across industries over a three-decade period suggest that the impact of venture capital on technological innovation is positive and significant. They suggest that venture capital accounted for 8% of industrial innovations in the decade ending in 1992. A common conclusion among academics is that venture capital spurs the growth and innovation of new firms. Another view is that when new opportunities arise for new firms to innovate, these companies require funds from venture capitalists, and as a consequence venture capital investments in a sector increase.
Although prior research attempts to answer the question of the underlying drivers of venture returns and the impact of technological innovation opportunities, the market conditions that must exist for venture returns to exceed market returns and justify the risk need additional research. The recent collapse in venture activity and innovation was the subject of Lerner and Schiff (2002) as well as Gompers and Lerner (2001). They concluded that while venture capital has a powerful impact on innovation, it is far from uniform: boom periods lead to overfunding of particular sectors, which can then lead to a sharp decline in venture fund effectiveness. Additionally, Hirukawa and Ueda revisited the relationship between venture capital and innovation.

Over the last 30 years, public and private pension funds as well as sovereign wealth funds have increased their allocations to venture capital, with the belief that venture returns can increase their overall returns above the public market benchmarks. The robustness of the venture capital market depends upon a vibrant public market that allows venture-backed companies to IPO. Researchers point out that few firms went public in the 1970s and very little venture capital was raised, which was true, but other factors, such as the stage of the industry's development and the restrictions on pensions investing in the asset class, are also relevant.

The quality of venture capital firms and their impact on the success of a start-up has also been studied. Hochberg, Ljungqvist and Lu (2005) found that VC funds with more influential networks have significantly better performance, as measured by the proportion of portfolio company investments that are successfully exited through an IPO or a sale to another company. Other papers discuss measuring risk for venture capital (Woodward, Sand Hill Econometrics, 2009), addressing how to measure risk when stale valuations are present.
Venture capital funds are organized as limited partnerships, and these investment funds are carve-outs of the 1940 Investment Company Act; therefore, prior to 2008, they were exempt from marking to market or reporting valuations in a standard format, except as required by the limited partners. As part of the larger move toward fair value accounting, venture capital firms are now required, under Financial Accounting Standards Board Accounting Standards Codification (ASC) 820, formerly FAS 157, to mark their portfolios to market on a quarterly basis. Even with the revised accounting standards, marking a venture capital investment to market is a challenging exercise, given the illiquid nature of the assets in the portfolio, the lack of a standard set of criteria such as which market comparables should be used, the discount for revenue projections given the level of uncertainty in the business plan and the nascent state of the industry, dilution from future rounds of financing as well as IPO dilution, and the impact of the liquidation preference for preferred shares and of a control position in an IPO or in the event of M&A. The assessment is highly judgmental and based mainly upon market comparables and analogous events. Since VC firms are ranked by their reported internal rate of return (IRR) to the LPs, few will take the risk of potentially under-valuing their portfolio for fear of being ranked lower than their peers.

Pre-2008, most venture capital firms valued their portfolio holdings annually at either the price of the last round or a significant event such as an IPO, a new round of financing, an acquisition or a shutdown. Therefore, valuations reflected the view of the venture capital market rather than asset appreciation. Flag Venture Management (2002) stated that start-up companies simply have too many moving parts, such as delayed product development or a newly awarded patent, to permit a meaningful valuation for even the briefest period of time.
8 Beyond the J Curve: Managing a Portfolio of Venture Capital and Private Equity Funds, written by Thomas Meyer and Pierre-Yves Mathonet (September 2008).

In the late 1990s, valuations for venture-backed companies, especially in the Internet sector, increased substantially regardless of actual growth or profits. The valuations of venture-backed companies were based upon market perception and fierce competition among VCs for deals. The valuation of a venture portfolio reported to the Limited Partners, even under FAS 157, is difficult to establish, especially for an early stage company. The only true measurement of value is the market clearing price, which generally reflects the potential future value. VC follow-on financings are often based upon non-quantitative measurements, and late-stage or pre-IPO round valuations can often surpass the public market comparables. Venture capitalists often choose the most attractive set of comparables, the winners in their respective market segments, to justify valuations. Additionally, there are times when venture capitalists have made investments in early stage companies that do not require substantial rounds of follow-on investment, and those may prove to be more valuable than their carrying cost. On the other hand, companies that have created a new market segment may be undervalued, and current market comparables may not reflect their high growth rate. Because the reported returns may not reflect the actual return, determining the relationship to public market securities or industry benchmarks is challenging. However, over time, a comparison of the returns should prove valuable as an evaluation of the maturity of a market segment and its potential linkage to the public markets. Attempts to overcome the portfolio valuation issue were examined in a paper by Emery (2003), where returns over longer time periods were used to overcome stale pricing problems. A related study is Gompers et al.
(2006), which indicates that a large component of success in entrepreneurship and venture capital can be attributed to skill rather than luck. They show that entrepreneurs with a track record of success are more likely to succeed than novice entrepreneurs and those who have previously failed. They also find that funding by more experienced VCs enhances the chance of success, but only for entrepreneurs without a track record of success.

During the period between 1970 and 1999, technology innovation proceeded at full speed in the U.S. in areas such as semiconductors, telecommunications, wireless and broadband, and led to the Internet revolution. The growing demand for computers was driven by advancement in science, industry and government, and created an opportunity to commercialize technology development in semiconductors as well as personal computers. As highlighted in the book Making Silicon Valley: Innovation and the Growth of High Tech, 1930-1970 by Christophe Lecuyer, the rise and growth of technology innovation in Silicon Valley was not an accident but was made possible by 40 years of accumulated skills and competencies, with expertise in manufacturing, product engineering, sales and marketing. The companies in Silicon Valley were able to capitalize on the demand for high-performance electronic components during World War II and the Cold War, which drove technology innovation in reliability and commercial production.

CHAPTER 2
DATA DESCRIPTION

The data set consists of venture returns specific to each industry from the Thomson VentureXpert database, using the cap-weighted periodic IRRs, as well as the annualized point-to-point quarterly returns for the Nasdaq Composite and the Russell 2000 Index. Additionally, for each sector, a minimum of four public stocks as well as an industry-specific benchmark index, again using annualized point-to-point quarterly returns, was included. The correlation coefficients among the public stocks are based upon daily returns.
The correlation coefficients of the Thomson venture returns are based upon quarterly periodic IRRs compared to the quarterly point-to-point returns of the public stocks. The correlation to PC sales and solar module sales was based on the annual growth of those assets relative to the annual average growth of the other assets.

The semiconductor segment included more public stocks and only one industry benchmark, the SOX Index. The public semiconductor stocks include a sampling of small as well as more established companies. The broader definition of the Energy/Industrial sector used by Thomson VentureXpert to calculate CleanTech industry performance is often debated as to its applicability to the revitalized CleanTech industry, which is a broad term encompassing not only energy generation and energy storage but also EV transportation. The solar segment was emphasized due to the amount of data available compared to other CleanTech sectors and its inclusion in the venture database as well as the industry index funds. For the Internet market, the mix of public stocks included companies in search as well as e-commerce, and one industry index.
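The convergence criterion used in the analysis, a correlation coefficient above .50 sustained for three consecutive quarters, can be sketched as a rolling-window test. The four-quarter window below is an assumption for illustration; the thesis does not specify its window length here.

```python
import numpy as np

def convergence_quarter(asset_q, index_q, window=4, threshold=0.50, run=3):
    """Return the zero-based index of the first quarter at which the
    rolling correlation between an asset's quarterly returns and a
    benchmark's has exceeded `threshold` for `run` consecutive
    quarters, or None if convergence never occurs."""
    asset_q = np.asarray(asset_q, dtype=float)
    index_q = np.asarray(index_q, dtype=float)
    consecutive = 0
    for t in range(window, len(asset_q) + 1):
        c = np.corrcoef(asset_q[t - window:t], index_q[t - window:t])[0, 1]
        consecutive = consecutive + 1 if c > threshold else 0
        if consecutive >= run:
            return t - 1
    return None
```

Applied to a stock's post-IPO quarters, the returned quarter marks the point at which the stock has stopped behaving independently of the benchmark under this rule.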
The objectives of the analysis were as follows:

(1) Analyze industry-specific venture returns compared to the Nasdaq Composite and the Russell 2000 for a variety of time periods, including 1990-1995, 1995-2000, 1998-2000, 2000-2008 and 2007-2009;

(2) Analyze the performance of the Thomson Semiconductor (TS) annualized quarterly cap-weighted periodic IRR returns relative to the time-weighted annualized point-to-point quarterly returns for the public stocks and indices, including the Nasdaq Composite (IXIC), the Russell 2000 Index (RUT), the Philadelphia Semiconductor Index (SOX), Intel (INTC), Applied Materials (AMAT), Texas Instruments (TXN), Xilinx (XLNX), PLX Technology (PLXT) and Advanced Analogic Technologies (AATI), as well as for PC sales (PC);

(3) Analyze the performance of the Thomson Energy/Industrial (TE) returns relative to the Nasdaq Composite (IXIC), the Russell 2000 Index, the WilderHill Index (ECO), the Cleantech Index (CTIUS), the NYMEX Crude Oil Futures (CIc1) and the S&P Global Clean Energy Index (SPGTCLNT). Stocks also included in the analysis were Zoltek (ZOLT), Suntech Power (STP), First Solar (FSLR), Yingli (YGE) and EnerNOC (ENOC), using the same methodology as for the semiconductor market. The time periods prior to 2000 were eliminated due to insufficient data;

(4) Analyze the Internet sector, including the performance of the annualized quarterly Thomson Internet (TI) returns relative to the Nasdaq Composite (IXIC), the Russell 2000 Index (RUT), the Morgan Stanley Internet Index (MOX), Yahoo (YHOO), Google (GOOG), Amazon (AMZN), Sina (SINA), Priceline (PCLN) and PC sales (PC), again using the same methodology;

(5) Summarize the convergence observations and their potential impact on liquidity timing.

One of the benchmark indices chosen was the Russell 2000 Index, which measures the performance of the small-cap segment of the U.S.
equity universe. It is a comprehensive barometer and is completely reconstituted to ensure that larger stocks do not distort the performance and characteristics of the small-cap universe, making it a relevant benchmark for venture capital returns. The Russell 2000 Index has included previously venture-backed companies such as AMD, Amazon, AMAT, Broadcom, Cisco, First Solar, Google and Intel. The other market index chosen was the Nasdaq Composite, as a measure of broader public market performance; its composition includes approximately 3,000 companies that trade on the exchange.

For the semiconductor market, the SOX Index, introduced on December 1, 1993, is the most widely recognized index that investors use to track the performance of semiconductor makers and equipment manufacturers. Because it tracks the cyclical semiconductor industry, it has been very volatile over the years. Since the SOX is composed of 18 stocks, of which 14 manufacture semiconductors and 4 produce semiconductor equipment, it is an indicator for the overall industry rather than a barometer of the growth companies in the sector. Additionally, it is a price-weighted index, meaning that firms with higher stock prices
The Cleantech Index (CTIUS) claims to be the first and only index to reflect the surging demand for clean technology products and services. The Cleantech Index is comprised of 78 companies from the alternative energy, energy efficiency, advanced materials as well as power transmission sectors and includes companies such as Zoltek, Suntech, Suzlon, and First Solar among others. The third industry analyzed was the Internet sector, and the industry benchmark index chosen was the Morgan Stanley Internet Index SM (MOX), which is one of the oldest indices in the sector with data available pre 2000. The MOX Index includes companies from 9 Internet subsectors: infrastructure, infrastructure services, consulting/services, portals, vertical portals, commerce, Internet/B2B software, B2B commerce, and multi-sector. The MOX Index also includes companies such as Amazon, AOL, Priceline.com, Yahoo and Microsoft. 9 Wilderhill Index website. PAGE 32 32 Each section has a brief industry overview, summary of returns for each asset and correlation coefficients between and among assets in each industry as well as the abnormal returns for each period using the CAPM. The convergence analysis examines the dates at which the returns of the public companies begin to converge with the returns on the various benchmark indices to determine the amount of time after an IPO has an asset perform independently of the market benchmark index. The convergence analysis used a correlation coefficient of >.50 for a least 3 consecutive quarters. PAGE 33 33 CHAPTER 3 SEMICONDUCTOR INDUST RY The rise of Silicon Valley from the 1930s to the 1990s was a confluence of events influencing the technology innovation process, which was then shaped by successive waves of innovation and entrepreneurship. 
The formal beginning of the venture capital era can easily be linked with the development of the semiconductor industry and the advancement of a wide array of electronic devices, from the personal computer (PC) to the mobile phone, as well as other related industries which developed contemporaneously in Silicon Valley. When consumers required applications for the PC, the software industry responded to market needs and flourished, developing new applications that increased the utility of the home computer. Other peripherals, such as the hard disc drive, developed in response to the need for data storage driven by the new applications. The development of the telecommunications industry and its link to the PC industry, creating a connected environment, spawned the network equipment industry and a new way for enterprises to conduct business. The wireless market developed as consumers demanded mobility. The commercial development of the Internet linked everyone together, all part of the new technology ecosystem. Semiconductor industry growth accelerated in the late 1960s, when entrepreneurs left established companies (predominantly Fairchild and RCA) and started more than 30 start-ups in Silicon Valley, including Intel, Intersil, National Semiconductor and Advanced Micro Devices. As a result of these and other innovations, the semiconductor industry accelerated in the 1970s with funding from early venture capitalists, angel investors and corporate development funds. The innovations developed by these new companies opened up a variety of commercial sectors including the watch industry, consumer electronics, instrumentation, telecommunications as well as the auto industry, all made possible by a significant cost reduction of electronic functions. In the early days, the semiconductor entrepreneurs had little choice but to focus on nascent and niche markets such as pocket radios, hearing aids and military applications.
However, over the next few decades, the technical innovations that would improve speed, capacity and cost eventually enabled the rise of the PC market revolution and a new group of start-ups such as Apple, Inc. (1977), Tandy Computer (1977) and Atari, Inc. (1972). 10 The first DRAM was developed in 1970 and the first microprocessor in 1971. Further PC adoption was enabled by the floppy disk for storage and software for applications. 11 The entrepreneurs in the semiconductor market created new markets for their products by engineering new end products and promoting these reference designs. The home computer became the personal computer. With the introduction of word processing, every PC became a typewriter, and with the arrival of the modem, which permitted e-mail and Internet access, new consumer uses were discovered. The scope of PC applications ranged from games and spreadsheets to graphics, but an important impetus to sales growth was educational applications. It is not surprising that the correlation of PC sales to the Thomson Semiconductor reported IRR from 1990-2000 was .9750, which supports the view that the rate of innovation in the PC ecosystem had a significant impact on VC returns.
10 Incorporation dates are from Wikipedia.
11 Computer History Museum website.
The relationship between computers and semiconductors emerged in the late 1960s. In 1968, military applications accounted for 50% of semiconductor output, 30% of computer manufacturing and 20% of industrial goods. By 1979, military use accounted for only 10%, computers 30% and industrial and consumer goods 6%. 12 The semiconductor industry grew rapidly, from $41 million in 1964 to $120 million in 1966; by 1970 the market had reached $420 million. 13 In 1970, the Total Available Market (TAM) was $2.4 billion and the cost to build a foundry was estimated at $6 million.
By 2005, the worldwide TAM for semiconductors had grown to $245 billion and the cost to build a new foundry had risen to approximately $3 billion. By the early 1970s, Silicon Valley was occupied by semiconductor companies, computer firms using their devices, and programming and service companies serving both. Between 1968 and 1975, 30 venture capital firms were formed, including Mayfield Fund (1968), Arthur Rock and Associates (1969), and Kleiner Perkins (1972). 14 The lure of new technologies and markets as well as the availability of venture financing led to the proliferation and growth of start-ups in the semiconductor industry. By 1980, there were 89 firms with $4 billion under management, and by 2004 there were 1,068 venture firms with $261 billion of capital under management. 15 Many believe that the availability of venture capital exploded after the successful IPO of Apple Computer, which debuted with a $1.3 billion market capitalization in 1980.
12 U.S. Department of Commerce, report on the U.S. semiconductor industry, page 8; "Semiconductor productivity gains linked to multiple innovations," 1988, Mark Scott Sieling.
13 Making of Silicon Valley, Innovation and the Growth of High Tech, 1930-1970, page 255.
14 Making of Silicon Valley, Innovation and the Growth of High Tech, 1930-1970, page 258.
15 NVCA Yearbook 2004.
The venture capital industry continued to back new companies addressing new market sub-segments in the semiconductor ecosystem. The availability of third-party design software gave rise to the Electronic Design Automation (EDA) market. Before the EDA market was established, semiconductors were designed by hand and manually laid out. ECAD (1982) was one of the early vendors of EDA software and was the foundation for Cadence, 16 which became a dominant market player. The expert knowledge that was once required of, and limited to, a few highly talented engineers had now been simplified.
The formation of the semiconductor equipment industry was a transitional point in the market development. Applied Materials (AMAT) was founded in 1967; its equipment and process technology innovation helped reduce the cost per transistor by 20 million times over the last 40 years. 17 Applied Materials' business model initiated a change in industry dynamics by encouraging semiconductor vendors to shift responsibility for the development of manufacturing technology to equipment suppliers, third-party sources of technology, which allowed semiconductor companies to focus on product development and applications rather than process and manufacturing expertise. This evolution led to the rise of a wafer foundry industry and the development of companies such as TSMC in 1987, UMC in 1980 and LSI Logic in the 1980s and 1990s.
16 Wikipedia overview of Cadence.
17 Applied Materials website.

Venture Capital Returns in the Semiconductor Industry

In 2002-2003, VantagePoint Venture Partners developed a proprietary database of private venture-backed semiconductor companies, which included 1,467 private semiconductor companies, of which the investment team met with 30% (approximately 390) over a 1-year period. From the VantagePoint research, it was concluded that the cost of development of a single product had risen from $3 million in the late 1980s to $15 million to $20 million in 2003, depending upon the technology utilized. The data collected showed that approximately 3% of the companies surveyed had achieved sustainable revenue in excess of $10 million annually. Approximately 30% of those companies experienced flat revenue growth and had stalled at the $10 million level. It was concluded that reaching the $10 million revenue level did not guarantee success or an eventual IPO. A sampling of semiconductor venture-backed companies that went public revealed that the VCs who invested in those companies realized an average of 5.65x cash on cash return. The median return was 2.85x.
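A cash on cash multiple is simply total proceeds divided by total capital invested. A minimal sketch of how the average and median multiples are derived from a sample; the per-company proceeds and invested amounts below are illustrative placeholders, not the VantagePoint data:

```python
from statistics import mean, median

def cash_on_cash(proceeds: float, invested: float) -> float:
    """Cash-on-cash multiple: total proceeds divided by total capital invested."""
    return proceeds / invested

# Illustrative (proceeds, invested) pairs in $M -- placeholders, not the study's sample.
deals = [(113, 20), (57, 20), (42, 15), (30, 12)]
multiples = [cash_on_cash(p, i) for p, i in deals]

avg = mean(multiples)     # average multiple across the sample
med = median(multiples)   # median is less sensitive to a single outsized winner
```

Note that the average exceeding the median, as in the 5.65x average versus the 2.85x median above, reflects a few outsized winners skewing the mean upward.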
The companies included in the analysis, followed by their incorporation dates, were: AnalogicTech (1997), Atheros (1998), Hittite Microwave (1985), Ikanos (1999), PowerDsine (1999), Saifun (1996), Silicon Image (1995), SIRF (1995), Marvell (1995) and Volterra (1996). The investments date from 1997 to 2003. The average capital invested by the VCs in each company exceeded $90 million, which had become the benchmark of required capital for a Fabless semiconductor company using leading-edge technology. The returns varied significantly depending upon investment timing, pre-IPO valuation, company gross margins and the amount of capital raised. Only three of the above-mentioned companies had generated returns in excess of 10x: Leadis, Marvell and PowerDsine. Excluding these three companies, the average cash on cash return of the remaining companies was 4.21x and the median return was 2.70x. The cash on cash return numbers are based upon the IPO valuation, not the exit date, which is unknown. In 2004 and 2005, there were several venture-backed semiconductor IPOs. The group of companies that went public included AnalogicTech AATI (1997), PowerDsine, Volterra (1993), Ikanos (1999), PortalPlayer (1999), Saifun (1998), Leadis (2000) and Netlogic (1995). The average return to investors at the time of the IPO was calculated as 5.4x based upon data collected by VantagePoint Venture Partners. Of this group of companies, PowerDsine, PortalPlayer and Saifun were acquired post-IPO; AATI and Ikanos were trading below their IPO price. As of February 2010, Netlogic was trading at 2.46x and Volterra at 1.47x of their respective market capitalizations at the time of their IPOs. The analysis by VantagePoint Venture Partners showed that the semiconductor market from 1980 to 2007 had a Compound Annual Growth Rate (CAGR) of 11.2%, based upon data collected from the Gartner Group.
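A CAGR such as the 11.2% figure above is the geometric growth rate implied by a start value, an end value and the number of years. A small sketch; the market-size endpoints used here are hypothetical, not Gartner's figures:

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate between two values over a number of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Illustrative check: a market growing from $10B to $176.9B over 27 years
# compounds at roughly 11.2% per year (these endpoint values are hypothetical).
growth = cagr(10.0, 176.9, 27)
```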
The last six cycles had ranged from 3 to 7 years, with an average time from peak to trough of 2.7 years. The cycle that ended in 2007 had an extended trough of approximately 2.5 years. It was also determined that growth from 2003 to 2007 had slowed to approximately 7.7%. In 2007, the global semiconductor market was $268.9 billion, a 3.3% increase over 2006; in 2008 the size of the market declined to $258.3 billion, and it fell again to $226.3 billion in 2009, representing a -12.4% change. 18
18 iSuppli Corporation annual reports 2006, 2007, 2008 and 2009. The figures exclude foundry sales.
It was clear that by 2007, the industry had become extremely challenging for venture-backed companies, as the market leaders had more resources, more capital and more experience in developing complete applications. The venture-backed companies were also challenged to negotiate favorable foundry pricing with companies such as TSMC, which were focused on serving their largest customers, especially during periods of high foundry utilization. U.S. companies were also faced with market competition from Taiwan and China, where Fabless start-ups were catching up on design knowledge and were able to develop products at a lower cost and closer to the end customer. New entrants into the market such as Taiwan-based Mediatek (1997) posed difficult competition for many Silicon Valley based companies in the consumer semiconductor market. As the market continued to mature, start-up companies faced other challenges, such as the need for constant innovation to keep pace with Moore's Law. 19 To adjust to the rapid pace of change in the market and the competition, start-ups faced the challenge of refreshing their products every 6 months, requiring constant innovation and enhancements even as the cost of development increased substantially.
At the same time, the rate of constant price-performance improvement in the semiconductor industry had been staggering and, as a consequence, changes in the semiconductor market occurred extremely rapidly.
19 Moore's Law describes a long-term trend in the history of computing hardware, in which the number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years. The trend also applies to memory capacity, sensors and even the number and size of pixels in digital cameras; all of these are improving at (roughly) exponential rates as well. This has dramatically increased the usefulness of digital electronics and has been a driving force of technological and social change in the late 20th and early 21st centuries. The trend has continued for more than half a century and is not expected to stop until 2015 or later. The law is named after Intel co-founder Gordon E. Moore, who introduced the concept in a 1965 paper. It has since been used in the semiconductor industry to guide long-term planning and to set targets for research and development.

Semiconductor Industry Public vs. Private

Since the 1980s, the semiconductor industry has been considered the biggest driver of the technology economy, driven by the pervasiveness of semiconductor devices across all major end-markets. Two decades ago, the preponderance of semiconductor devices was targeted toward the PC industry. Today, semiconductors are ubiquitous in applications ranging from mobile phones and routers to heart monitors, automobiles, bar code readers and even children's toys. To understand the nature of venture returns in the semiconductor market, it is illuminating to compare the performance of the Nasdaq Composite (IXIC), Russell 2000 Index (RUT) and the Philadelphia Semiconductor Index (SOX), as well as specific public market stocks, to venture capital returns.
The analysis includes Texas Instruments (TXN), Applied Materials (AMAT), Intel Corporation (INTC), Xilinx (XLNX), AnalogicTech (AATI) and PLX Technology (PLXT). These companies were chosen for their market leadership in innovation in a specific sub-segment of the semiconductor industry. The PHLX Semiconductor Sector (SOX) Index was introduced on December 1, 1993, with an initial value of 100, and reached a high of 1,266.39 in July 2000. As of 12/29/2009, the SOX index closed at 356.18, substantially below the 2000 levels. The average value since inception was 404.24. In the summary table below, the time-weighted mean return or geometric mean (T-Mean), the arithmetic mean (A-Mean) return, and the standard deviation (SD) were calculated for all of the assets.

Table 3-1: Semiconductor Returns Summary

                 TXN             AMAT            INTC            XLNX            PLXT            AATI
Time Period   T-Mean    SD    T-Mean    SD    T-Mean    SD    T-Mean    SD    T-Mean    SD    T-Mean    SD
1990-1995     44.08%  0.8575  57.17%  0.9442  35.52%  0.3585  78.14%  1.1740
1995-2000     40.06%  0.8806  29.20%  1.0316  24.10%  0.4248  40.66%  1.1215  37.67%
1998-2000     20.96%  0.4572  22.41%  0.7297  15.29%  0.4854  29.96%  0.2891  37.67%
1990-2000     31.85%  0.6774  32.11%  0.8448  25.32%  0.3534  43.13%  0.9444  37.67%
2000-2008      7.95%  0.2401   2.28%  0.8404   5.08%  0.5347   2.16%  0.7422  11.50%  1.3659  40.93%  0.9206
2007-2009     16.69%  0.4848   5.63%  0.2534  10.58%  0.3414   7.00%  0.3036  26.65%  0.5956  14.53%  0.8778

                 IXIC            RUT             SOX          Thomson Semi        PCS
Time Period   T-Mean    SD    T-Mean    SD    T-Mean    SD    T-Mean    SD    T-Mean    SD
1990-1995     35.52%  0.3003  11.46%  0.2641  32.87%          27.55%  0.2415  18.63%  0.1254
1995-2000     24.10%  0.3207   6.31%  0.1259  25.20%  0.0021  51.55%  0.5861  41.81%  0.0292
1998-2000     15.29%  0.4862   2.26%  0.0519  25.32%  0.0021  51.06%  0.8114  40.60%  0.0193
1990-2000     25.32%  0.3067   7.74%  0.2073  13.10%  0.0021  33.61%  0.4214  28.83%  0.1510
2000-2008      5.08%  0.3704   5.47%  0.3001   9.64%  0.0102   2.20%  0.5415  16.01%  0.1188
2007-2009     10.58%  0.3465   8.11%  0.2841  32.87%  0.0035  10.34%  0.4446  18.25%  0.0073

During 1995-2000, the Thomson Semi returns outperformed the public market semiconductor stocks, the SOX index, as well as the broader market indices, reflecting a period of rapid growth and significant changes in the industry. However, post-2007, semiconductor venture returns turned negative and under-performed the larger established companies. During 1998-2000, with the proliferation of start-up Fabless companies and increased equipment and EDA software sales, all of the assets in the semiconductor industry performed exceedingly well. The flow of funds into semiconductor venture investments also rose substantially during the 1990s. From 1995 until 2000, the dollars invested by VCs increased by 17.32x, and the number of investments increased from 60 annually to 265 companies. The next section examines the correlation coefficients of returns among assets. During 1990-1995, a high growth period for semiconductor companies, the correlation coefficients of the stocks included ranged from .5132 to .5749 against the IXIC and from .3990 to .4370 against the RUT. The Thomson Venture correlation was within a range of .1211 to .4131 to all of the public stock assets.
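The pairwise figures in the correlation matrices that follow are standard Pearson correlation coefficients of period returns. A minimal sketch, using hypothetical quarterly return series rather than the study's data:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length return series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical quarterly returns for two assets (not the study's data).
a = [0.05, 0.12, -0.03, 0.08, 0.01]
b = [0.04, 0.10, -0.01, 0.07, 0.02]
r = pearson(a, b)
r_squared = r ** 2  # for a single regressor, R2 is the squared correlation
```

For a single explanatory variable, R2 is just the squared correlation, which is why a .9750 correlation corresponds to an R2 of roughly .9507.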
Between 1990-1995, the correlation of the TS to PC sales was .9750 and the R2 was .9507, demonstrating the strong relationship between venture returns and the innovation in the PC market that drove new purchases. The public companies do not demonstrate the same behavior, although they benefitted from the increase in PC sales directly or indirectly.

Table 3-2: Semiconductor Correlation Coefficient Matrix 1990-1995

Correlation    TXN     AMAT    INTC    XLNX    IXIC    RUT     TS      PC Sales
TXN          1.0000  0.4349  0.4954  0.4025  0.5254  0.4008  0.4131  0.5065
AMAT         0.4349  1.0000  0.3939  0.3625  0.5132  0.4040  0.2897  0.2743
INTC         0.4954  0.3939  1.0000  0.3385  0.5749  0.3990  0.2870  0.3555
XLNX         0.4025  0.3625  0.3385  1.0000  0.5185  0.4370  0.1211  0.2212
IXIC         0.5254  0.5132  0.5749  0.5185  1.0000  0.9169  0.1273  0.1298
RUT          0.4008  0.4040  0.3990  0.4370  0.9169  1.0000  0.1371  0.5075
TS           0.2750  0.3997  0.3062  0.3208  0.3932  0.3197  1.0000  0.9750
PC Sales     0.5065  0.2743  0.3555  0.2212  0.1298  0.5075  0.9750  1.0000

The SOX Index, which began trading in 1994, is included in the analysis from 1995 onward. The correlation coefficients for TXN, AMAT, INTC and XLNX increased from the 1990-1995 levels as their stock returns began to converge to the broader market indices. Their correlations to the IXIC ranged from .6122 to .6724; they were less correlated to the RUT, from .4630 to .5324. However, they are negatively correlated to the SOX index, a reflection of the composition of the SOX index, which was heavily weighted toward manufacturing companies. The Thomson venture correlation remains relatively low to all. The semiconductor market peaked in 1995, hit a trough in 1996 and then began its upturn in 1997 during the Asian financial crisis, which impacted the market.

Table 3-3: Semiconductor Correlation Coefficient Matrix 1995-2000

Correlation    TXN     AMAT    INTC    XLNX    IXIC    RUT     SOX     TS      PC Sales
TXN          1.0000  0.6341  0.5525  0.5937  0.6346  0.5324  0.3411  0.2547  0.3015
AMAT         0.6341  1.0000  0.5741  0.5873  0.6321  0.5093  0.6776  0.3181  0.3974
INTC         0.5525  0.5741  1.0000  0.5462  0.6724  0.4630  0.8516  0.2824  0.6714
XLNX         0.5937  0.5873  0.5462  1.0000  0.6122  0.4952  0.9597  0.2596  0.0663
IXIC         0.6346  0.6321  0.6724  0.6122  1.0000  0.8806  0.9122  0.4735  0.1062
RUT          0.5324  0.5093  0.4630  0.4952  0.8806  1.0000  0.7494  0.3538  0.3693
SOX          0.3411  0.6776  0.8516  0.9597  0.9122  0.7494  1.0000  0.3189
TS           0.2399  0.2509  0.2650  0.3143  0.4684  0.3544  0.3189  1.0000  0.1922
PC Sales     0.3015  0.3974  0.6714  0.0663  0.1062  0.3693          0.1922  1.0000

In examining the specific period between 1998-2000, the bubble period, there was an increase in the correlation coefficients of all assets, excluding the SOX, to the Nasdaq Composite and the Russell 2000. PLXT, a small Fabless company founded with less than $5 million of venture capital, was added to the analysis to determine if the behavior of a recent, smaller IPO company was different from that of the more mature companies; PLXT was negatively correlated to the TS. During the bubble years, the correlation coefficients and R2 of the semiconductor companies and the TS to PC sales were in the range of .8595 to 1.0000. There were strong PC sales in preparation for the Y2000 transition. The TS correlation to PC sales was .9523. The R2 for all of the assets to PC sales ranged from .7387 to 1.0000.

Table 3-4: Semiconductor Correlation Coefficient Matrix 1998-2000

Correlation    TXN     AMAT    INTC    XLNX    PLXT    IXIC    RUT     SOX     TS      PC Sales
TXN          1.0000  0.6494  0.5400  0.6153  0.2975  0.6642  0.5792  0.3411  0.2172  1.0000
AMAT         0.6494  1.0000  0.5834  0.6322  0.3226  0.6641  0.5475  0.6776  0.5133  0.9431
INTC         0.5400  0.5834  1.0000  0.5611  0.2317  0.6790  0.4765  0.8516  0.2123  0.8595
XLNX         0.6153  0.6322  0.5611  1.0000  0.3083  0.6629  0.5643  0.9597  0.4057  0.9621
PLXT         0.2975  0.3226  0.2317  0.3083  1.0000  0.4243  0.4107  0.7652  0.2201
IXIC         0.6642  0.6641  0.6790  0.6629  0.4243  1.0000  0.8805  0.9122  0.5125  0.9831
RUT          0.5792  0.5475  0.4765  0.5643  0.4107  0.8805  1.0000  0.7494  0.3029  0.2649
SOX          0.3411  0.6776  0.8516  0.9597  0.7652  0.9122  0.7494  1.0000  0.4158
TS           0.4344  0.5053  0.3186  0.5334  0.2436  0.5115  0.2896  0.4158  1.0000  0.9523
PC Sales     1.0000  0.9431  0.8595  0.9621          0.9831  0.2649          0.9523  1.0000

In summary, during the decade of 1990-2000, the returns for an investor buying public market semiconductor securities in 1990, as the market for semiconductor applications was growing rapidly, were impressive. Between 1990 and 2000, TXN returned 4,931%, AMAT 8,765%, INTC 5,426% and XLNX 6,093%. During this same period, a $100 investment in the Thomson Semi would have yielded $673 in 2000. Between 2000-2009, $100 invested in TS would have generated a negative return of 25%; TXN was down 53.26%, AMAT 62.34%, INTC 56.32%, XLNX 47.90%, and PLXT 85.65%. The period of 2000-2009 was difficult for the semiconductor market as it encountered a long cyclical downturn after the technology crash. The correlation coefficients of the more mature companies such as TXN, AMAT, and INTC increased further to the broader markets as they matured and earnings growth slowed. The Thomson Semi continued to demonstrate a low correlation to the industry, to PC sales, as well as to the broader markets. Between 2000-2008, the average return for VC-backed semiconductor investments was -.11%. From 2002 to 2008, venture capital investments into private semiconductor companies remained relatively constant at approximately 50% of the capital invested in 2000, reflecting the relatively poor returns and the diminishing opportunities for start-ups to differentiate themselves against the market leaders. In 2009, as a result of poor returns over the last decade, venture capital investment into the semiconductor market was down to $1.6 billion, and only 47 companies received funding globally. The market has matured and technology innovation has slowed, impacting returns in both the public market and new investments.
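The growth-of-$100 comparisons above are direct applications of cumulative percentage returns; a minimal sketch using the TS figures quoted in the text:

```python
def ending_value(initial: float, total_return_pct: float) -> float:
    """Value of an investment after a cumulative percentage return."""
    return initial * (1.0 + total_return_pct / 100.0)

# $100 in the Thomson Semi in 1990 grew to $673 by 2000, a 573% cumulative return.
ts_1990s = ending_value(100.0, 573.0)

# A -25% cumulative return over 2000-2009 leaves $75 of each $100 invested.
ts_2000s = ending_value(100.0, -25.0)
```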
In the analysis between 2000-2008, AATI, a venture-backed company, was added to the list of assets.

Table 3-5: Semiconductor Correlation Coefficient Matrix 2000-2008

Correlation    TXN     AMAT    INTC    XLNX    AATI    PLXT    IXIC    RUT     SOX     TS      PC Sales
TXN          1.0000  0.6956  0.6476  0.6871  0.3567  0.3980  0.6968  0.5694  0.0658  0.3389  0.8436
AMAT         0.6956  1.0000  0.7052  0.7170  0.3967  0.4054  0.7427  0.5913  0.0781  0.2044  0.8654
INTC         0.6476  0.7052  1.0000  0.6807  0.4259  0.3864  0.7671  0.6020  0.0487  0.2769  0.8237
XLNX         0.6871  0.7170  0.6807  1.0000  0.4222  0.3992  0.7447  0.5897  0.0812  0.2032  0.8699
AATI         0.3567  0.3967  0.4259  0.4222  1.0000  0.3171  0.5318  0.5307  0.4994  0.1143  0.2804
PLXT         0.3980  0.4054  0.3864  0.3992  0.3171  1.0000  0.4914  0.4452  0.1170  0.2201  0.2638
IXIC         0.6968  0.7427  0.7671  0.7447  0.5318  0.4914  1.0000  0.8718  0.1642  0.2684  0.7822
RUT          0.5694  0.5913  0.6020  0.5897  0.5307  0.4452  0.8718  1.0000  0.0119  0.1265  0.4431
SOX          0.0658  0.0781  0.0487  0.0812  0.4994  0.2494  0.1642  0.0119  1.0000  0.0955  0.2690
TS           0.2402  0.2462  0.1004  0.0539  0.3279  0.1081  0.1813  0.2576  0.0955  1.0000  0.0434
PC Sales     0.8436  0.8654  0.8237  0.8699  0.2804  0.2638  0.7822  0.4431  0.2690  0.0434  1.0000

The large semiconductor companies are highly correlated to the broader markets. The Thomson Semi correlation to the public companies turns negative despite 15 successful exits for VC-backed companies between 2005 and 2009. In September 2009, only 1 of those venture-backed companies was trading above its opening share price. The last semiconductor IPO was in December 2007. The 800+ remaining private companies must rely upon M&A for a positive outcome, and exit valuations (Teknovus, Dune in 2010) are in the $100 million range, compared to $400 million prior to 2000. AATI and PLXT, with market capitalizations below $500 million, are less correlated than the larger, more mature companies. All assets show a low correlation to PC sales.
By 2007-2008, the public market semiconductor companies are significantly more correlated to the broader market indices and to the industry leaders such as INTC, TXN and AMAT, signaling the maturity of the industry segment. The Thomson Semi remains either negatively correlated or only slightly correlated to the public comparables. The correlation to PC sales has diminished, except for the TS, which remains highly correlated.

Table 3-6: Semiconductor Correlation Coefficient Matrix 2007-2008

Correlation    TXN     AMAT    INTC    XLNX    AATI    PLXT    IXIC    RUT     SOX     TS      PC Sales
TXN          1.0000  0.6597  0.7210  0.6853  0.4086  0.4221  0.6952  0.6396  0.9411  0.3418  0.1305
AMAT         0.6597  1.0000  0.6989  0.6798  0.4226  0.4386  0.7250  0.6819  0.6468  0.1771  0.4424
INTC         0.7210  0.6989  1.0000  0.7159  0.4763  0.4520  0.8254  0.7407  0.0658  0.1187  0.4930
XLNX         0.6853  0.6798  0.7159  1.0000  0.4855  0.4405  0.7449  0.6862  0.9033  0.1548  0.1155
AATI         0.4086  0.4226  0.4763  0.4855  1.0000  0.3688  0.5669  0.5665  0.4442  0.1939  0.2785
PLXT         0.4221  0.4386  0.4520  0.4405  0.3688  1.0000  0.5347  0.5666  0.1617  0.3457  0.4263
IXIC         0.6952  0.7250  0.8254  0.7449  0.5669  0.5347  1.0000  0.9467  0.1403  0.1118  0.4646
RUT          0.6396  0.6819  0.7407  0.6862  0.5665  0.5666  0.9467  1.0000  0.6723  0.0098  0.2740
SOX          0.9411  0.6468  0.0658  0.9033  0.4442  0.1617  0.1403  0.6723  1.0000  0.4630  0.8360
TS           0.6905  0.6864  0.4728  0.3083  0.4442  0.6885  0.5464  0.4650  0.4630  1.0000  0.7828
PC Sales     0.1305  0.4424  0.4930  0.1155  0.2785  0.4263  0.4646  0.2740  0.8360  0.7828  1.0000

The semiconductor industry has matured and market growth has slowed. The opportunity to identify a start-up that could create a new market segment will be quite difficult unless new applications emerge related to a new industry such as CleanTech, since there is substantial overlap in skills. Venture activity in the sector has declined dramatically since 2000. In 2009, venture investment in the semiconductor space totaled $1.16 billion.
The number of investments paled in comparison to other categories such as biotech, Internet, and CleanTech. Capital efficiency, industry cyclicality and the relatively low exit multiples are three obvious reasons. It can cost perhaps as much as $100 million to get a leading-edge digital chip to market, in what becomes a bet-the-company proposition.

CHAPTER 4 CLEANTECH INDUSTRY

Over the past 30 years, the VC industry has shifted its focus from time to time to find the dynamic, fast growing, high-potential-return industries for investment. Computer hardware, computer software, medical devices and Internet-specific companies have each, for a relatively brief period of time, been the single largest recipient of venture capital investment. The focus has now shifted to the CleanTech industry. In the early 1980s, in the immediate aftermath of the energy crises of the 1970s and early 1980s, energy-related investments were the largest recipient of U.S. venture capital. Since 1999, oil prices have increased from a low of $17 per barrel to over $50 per barrel in 2005, reaching a peak of $147.27 per barrel in July 2008, which attracted significant amounts of investment and attention. Between 2005 and 2008, the VC CleanTech sector attracted $14.8 billion of capital. 20 Between 2004 and 2008, VCs increased the dollars invested in CleanTech at a 53% CAGR. 21 However, in 2009, investment in the global CleanTech sector was approximately $5.613 billion, down 33% from $8.4 billion in 2008. 22 Since 2005, as a reaction to the increase in oil prices that began in 2003, capital has flowed back into the sector. The media attention and government focus surrounding the industry have been a significant driver of investment activity. Financial incentives in the form of grants, tax credits and subsidies from governments around the world are critical for market development. Favorable renewable energy policies are part of the attraction for VCs, along with
20 Cleantech Group LLC.
21 Cleantech Group LLC.
22 Cleantech Group LLC.
rising global demand for energy, resource constraints, increasing environmental pressure, high oil prices and urbanization. The challenge for VCs is that the CleanTech sector, although undergoing rapid transformation, is in many facets different and more challenging: the capital efficiency, the time and scale necessary to develop projects, as well as the involvement of government in setting favorable policies to support the economics are unparalleled. In emerging countries such as China, the government is focused on increasing renewable energy generation to 20% by 2020 to reduce its dependency upon traditional energy sources, which is necessary for continued economic growth. The government is therefore a participant in the market development, fostering new company development and coordination. Over the last decade, China has experienced a significant surge in electricity and oil usage fueled by urbanization and the growth of the middle class; renewable energy therefore has positive economic implications for its future and for the development of the sector in China. VC returns in the U.S. are dependent upon three conditions: first, oil prices rising to a level sufficient to inflict economic pain on consumers and enterprises; second, continued growth in oil usage by economies such as China and India; and third, government intervention in the economic equation with incentives to adopt renewable energy and new energy bills mandating certain objectives. By the early 1990s, energy investments were attracting less than 3% of all U.S. venture capital, and by 2000 these investments accounted for only 1% of the $119 billion invested that year by the U.S. venture capital community. 23 The significant increase in oil prices from 2002-2004 correlates with a revived interest in the sector by venture capital firms. In 2002, VCs invested approximately $883 million, and by 2008 the dollars invested increased to $8.4 billion.
24 The significant increase was between 2005 and 2006, when the amount invested increased from $2.07 billion to $4.5 billion. However, in 2009, almost 20 years later, the CleanTech sector represented 27%, approximately $5.9 billion, of all venture investments in the U.S. 25 The number of firms investing and building dedicated resources has also increased. The number of firms that invested in CleanTech between 2004 and 2008 increased from 30 to 117. 26 In 2004, there were 9 dedicated CleanTech funds, rising to 39 by the end of 2008. 27 The CleanTech sector has once again attracted investment by corporate venture funds including BP, Chevron, Applied Materials, GE and DuPont. Corporate VCs reemerged in the CleanTech sector as large companies seek to diversify their products and services and offer venture-backed companies access to the market. In 2007, for instance, Chevron Texaco Technology Ventures invested in three CleanTech companies: BrightSource Energy, a developer of utility-scale solar plants; Konarka Technologies, a developer of photovoltaic materials; and Southwest Windpower, a producer of small wind turbines. Semiconductor equipment manufacturer AMAT diversified into the solar industry through a series of acquisitions coupled with internal development. The acquisitions include Italian microelectronics maker Baccini, for $334 million in early 2008; Switzerland-based HCT Shaping Systems SA, a producer of thin-film silicon wafer technology, for $438 million in 2007; and Applied Films Corp, a Colorado-based producer of thin-film technology, for $464 million in 2006.
23 PricewaterhouseCoopers, whose data is used in the analysis, defines the Energy/Industrial sector to include environmental, agricultural, transportation, manufacturing, and construction of utility-related products and services.
24 The Cleantech Group LLC.
25 The Cleantech Group LLC.
26 2009 Preqin Private Equity Cleantech Review.
27 2009 Preqin Private Equity Cleantech Review.
Venture Capital Returns in the Cleantech Industry

In 2009, venture investments in CleanTech were down 33% from the record level of 2008. However, investment in the CleanTech sector declined less than in other venture-backed sectors. The top CleanTech sectors for investment included solar (21%), transportation including electric vehicles and batteries (20%) and energy efficiency (18%); biofuels, Smart Grid and water were each less than 10% of the overall amount.

A study conducted by the CleanTech Venture Network found that between 1999 and 2005, more than $7.3 billion of venture capital was invested in CleanTech companies in 1,085 investment rounds going to 628 companies in the U.S. and Canada. The average investment amount per round was $6.7 million, and by 2008 the amount had risen to $16.2 million, which can be misleading given the amount of capital invested in some of the more high-profile companies.

The development of market leaders in the CleanTech industry has been unlike other venture-backed industries. The CTIUS index includes 78 CleanTech companies, of which fewer than five were U.S. venture-backed companies. The majority of the companies that are heavily weighted in the industry indices have been in business for more than 20 years or were incubated in large companies before being divested. First Solar, founded in 1999, had a history of technology development going back decades. Harold McMaster, who made his first fortune in the late 1940s with Permaglass and was one of the world's experts on tempered glass, founded First Solar; he began working on solar cell development in the 1980s. The primary and early investor in First Solar remained with the company through the difficult early years, when the company did not generate any significant revenue and the market was still nascent.
In contrast to First Solar, Yingli Solar and Suntech (discussed below) both started with financial support from various local Chinese governments before their initial VC rounds. The Baoding government originally backed Yingli, founded in 1998. One year before it was listed on Nasdaq, Yingli issued $17 million of Series A Preferred Shares to a local Chinese VC firm at a price of US$2.10 per share. In 2007, Yingli issued Series B Preferred Shares at $4.835 per share. At the IPO price, the mezzanine investors in Yingli recorded cash-on-cash returns of 5.0x for the Series A and 2.17x for the Series B; the stock later reached a high of $31.83.

Suntech, the leading Chinese solar company, was founded in 2001 and received a seed investment of $5.0 million from the Wuxi government. Suntech then raised an additional $80 million from an investor group including Goldman Sachs, Actis China and Dragontech Energy. The Series A investment round was completed at $2.30 per share, and at the IPO price the Series A investors generated a cash-on-cash return of 6.52x.

In order to determine whether there is consistency in the return spectrum across the various segments of the CleanTech market, and to highlight any differences in returns between China and the U.S., a few other companies were examined. EnerNOC was founded in 2001 with a seed investment of less than $100,000. The company then issued an additional $17.75 million in preferred shares; at the IPO price in 2007, those VCs generated a 5.96x cash-on-cash return. Foundation, a later-stage investor, invested in EnerNOC in 2005 and generated a 2.62x return at the IPO price. A123 is a venture-backed company in the battery market that raised more than $350 million from more than 15 investors over 11 rounds of investment since 2001. The largest A123 venture investor, North Bridge Venture Partners, holds 8.86 million shares, or 9% of the total, valued at about $146 million as of February 23, 2010.
North Bridge invested an estimated $41.3 million over seven rounds, according to Thomson Reuters.29 The A123 investor group combined held 32.9 million shares at the IPO, which were worth $541 million based on the price as of February 25, 2010. That same group invested at least $161 million in the company, according to Thomson Reuters, which means the group saw an unrealized return multiple of 3.36x on its investment.

Over the past two years, the CleanTech sector has seen some very large venture investments: $550 million for Project Better Place, a start-up designed to electrify large segments of the transportation sector by having consumers subscribe to electric transportation services; $100 million for Serious Materials LLC, which is developing a process …; and $286 million for U.S. solar panel maker Solyndra, which enabled the company to secure a $535 million loan backed by the Energy Department to build a second manufacturing plant. In 2009, there were 5 venture-backed companies that raised $100 million or more in additional funding.

In 2009, CleanTech public offering proceeds totaled an estimated $4.7 billion in 32 IPOs. This constituted a year-on-year increase of 11% in volume and 2% in the amount raised. China accounted for 72% of the global IPO proceeds, according to data tracked by the CleanTech Group. Between 2005 and 2007, 14 renewable energy companies went public, including solar companies LDK, Trina Solar, JA Solar, Suntech, Sunpower, Yingli, First Solar, Canadian Solar, Akeena Solar, Renesolar, Spire and Solarfun Power Holdings, as well as several others related to the solar industry ecosystem. Several solar companies accessed the IPO markets between 2006 and 2007, including Canadian Solar (2006), LDK (2007), Renesolar (2007) and Trina Solar (2006).

29. Data is from the Thomson Reuters publication PeHUB.
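The cash-on-cash multiples quoted in this section are simply proceeds per share divided by cost per share (or, for the A123 group, total exit value divided by total invested). A minimal sketch; the per-share exit prices below are inferred from the multiples in the text rather than stated directly:

```python
def cash_on_cash_multiple(exit_price: float, purchase_price: float) -> float:
    """Cash-on-cash return multiple: proceeds per share / cost per share."""
    return exit_price / purchase_price

# Yingli: Series A at $2.10, Series B at $4.835. An implied exit of
# $10.50 per share (an inference, not stated in the text) reproduces
# the 5.0x and 2.17x multiples quoted above.
print(round(cash_on_cash_multiple(10.50, 2.10), 2))   # Series A
print(round(cash_on_cash_multiple(10.50, 4.835), 2))  # Series B

# Suntech: Series A at $2.30 with an implied $15.00 exit gives ~6.52x.
print(round(cash_on_cash_multiple(15.00, 2.30), 2))

# A123 investor group: $541M of IPO value on $161M invested, ~3.36x.
print(round(541.0 / 161.0, 2))
```

The same arithmetic underlies the EnerNOC (5.96x and 2.62x) figures; only the entry and exit prices differ.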
In 2008, solar stocks performed very well: oil prices were high, carbon caps that would favor clean energy companies were under discussion, and investors pursued green companies in anticipation. Governments globally provided financing and subsidies to make these technologies cost competitive. In 2009, oil prices dropped, reflecting the global recession. Government subsidies were cut and, worst of all, solar companies that had ramped up production to meet anticipated demand were seeing sales dry up amid industry overcapacity. In 2009, solar panels experienced a steep price drop and rising inventory levels, while the cost of production and materials fell, despite an increase in installed capacity.

Today there are still over 1,500 venture-backed companies addressing the solar industry, and at least 50 of those companies are direct competitors to the public companies. Between 2002 and 2009, $4.8 billion was invested by VCs into the solar market,30 with emphasis on cost reduction and improvement in cell efficiency. The VCs in these companies expect to generate the multiples that the early entrants realized when prospective growth rates were high, fueled by subsidies and financial projections.

According to a KPMG report,31 the value of CleanTech M&A increased from $1.4 billion in 2004 and 2005 to $19.1 billion in 2008. Interestingly, more than 45% of the acquirers were adding capabilities to their existing businesses, while 70% of the M&A activity was in energy generation. Valuations were approximately 1.2x enterprise value/revenue. In 2009, there were an estimated 505 clean-technology M&A transactions globally, totaling $31.8 billion. However, this number could be misleading as an indication that VCs are directly benefiting from global M&A activity, because many of the reported transactions have been large divestitures of assets. For example, Bord Gais, the Irish energy provider, acquired SWS Natural Resources (Ireland), a large wind operator, for $720 million.
Another example is Panasonic, which acquired a majority stake in Sanyo Electric, the world's largest rechargeable-battery maker, for $4.6 billion. The takeover makes Panasonic a dominant player in the fast-growing market for hybrid car batteries.

30. Estimated investment amount determined by VantagePoint Venture Partners.
31. KPMG webinar, April 2, 2009.

Cleantech Industry Public vs. Private Investment Returns

The CleanTech analysis compares the Thomson Energy/Industrial (TI) quarterly returns to the Nasdaq Composite (IXIC), the Russell 2000 Index (RUT), Zoltek (ZOLT), Yingli (YGE), First Solar (FSLR), Suntech Power (STP), the Crude Oil Index (OIX), the S&P Global Clean Energy Index (SPGTCLNT), EnerNOC (ENOC) and the CleanTech Index (CTIUS). In order to understand the lifecycle of the venture opportunity in the CleanTech sector, it was important to compare and contrast the venture returns with industry benchmarks and public market securities. The time periods analyzed included 2000-2009 and 2007-2008; there was insufficient data prior to 2000. Table 4-1 summarizes the T-Mean, A-Mean and SD for the components included in the sector analysis. The SD confirms the market volatility in the sector, as the companies went public in an enthusiastic market for CleanTech and are now trading at multiples reflecting their financial performance. The most volatile have been in the areas of renewable energy generation and energy efficiency.
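The statistics reported in these tables can be reproduced from a quarterly return series. A minimal sketch, assuming T-Mean is the time-weighted (geometric) mean and A-Mean the arithmetic mean, since the text does not define the abbreviations explicitly; the quarterly figures below are illustrative, not the actual series:

```python
import statistics

def arithmetic_mean(returns):
    """A-Mean: simple average of the periodic returns."""
    return sum(returns) / len(returns)

def geometric_mean(returns):
    """T-Mean (assumed time-weighted): compound growth rate per period."""
    growth = 1.0
    for r in returns:
        growth *= (1.0 + r)
    return growth ** (1.0 / len(returns)) - 1.0

quarterly = [0.10, -0.05, 0.20, -0.08]  # illustrative quarterly returns
print(round(arithmetic_mean(quarterly), 4))      # 0.0425
print(round(geometric_mean(quarterly), 4))       # 0.0364
print(round(statistics.stdev(quarterly), 4))     # 0.1310 (sample SD)
```

The geometric mean is always at or below the arithmetic mean, and the gap widens with volatility, which is why the two are reported separately for high-SD series.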
Table 4-1: Mean Returns and Standard Deviation for the CleanTech Sector (n/a: value missing)

              ZOLT              YGE               FSLR              STP
Time Period   T-Mean   SD       T-Mean   SD       T-Mean    SD      T-Mean   SD
2000-2008     0.84%    0.7813   70.47%   n/a      176.71%   4.7451  19.07%   1.1137
2007-2009     9.17%    0.7286   19.79%   1.3320   95.24%    3.8806  28.71%   1.0567

              ENOC              IXIC              RUT               OIX
Time Period   T-Mean   SD       T-Mean   SD       T-Mean   SD       T-Mean   SD
2000-2008     52.89%   n/a      1.52%    0.3704   5.47%    0.3001   5.78%    0.1867
2007-2009     36.96%   2.4822   2.16%    0.3465   8.11%    0.2841   21.88%   0.1277

              ECO               CTIUS             TE                M_Cell
Time Period   T-Mean   SD       T-Mean   SD       T-Mean   SD       T-Mean   SD
2000-2008     4.95%    0.4275   2.70%    0.4166   16.25%   0.8014   0.89%    0.0865
2007-2009     1.04%    0.5099   11.34%   0.3256   16.50%   0.3293   3.71%    n/a

The CleanTech stocks underperform the market, similar to the semiconductor companies during this period. Both industries are extremely capital intensive and are today dominated by large companies with revenues in excess of $1 billion. Stock returns for the CleanTech companies and the industry have dropped since 2007 and are significantly below the early gains after the sector's IPOs.

The equity return calculation matrix summarizes the returns that a public market investor would realize depending upon the purchase date and exit date. Assuming that the investor purchased the shares at the time of the IPO, the returns peaked between Q3 2007 and Q2 2008. The returns from the IPO date to the peak were as follows: STP (172.88%), YGE (169.58%) and FSLR (854.42%). Between 2000 and 2009, the Thomson Energy index (TE) is negatively correlated with the majority of stocks included in the analysis.

In the early days of venture funding, the expectation was that the CleanTech industry would achieve rapid expansion at Internet speeds of adoption. The early solar and wind companies did achieve rapid sales growth, supported by government policy and subsidies.
However, the expectations for public companies were extremely high, and as actual results were reported, valuations began to decline significantly further than the private company valuations, which were still based upon frothy expectations. According to Cambridge Research Associates, the Energy Sector vintage funds of 2002, 2003, 2005, 2006 and 2007 have significant dollar-weighted internal rates of return of 45.29%, 48.14%, 38.48%, 37.51% and 26.87%, respectively.

In analyzing the returns of the solar stocks, they are more interrelated with the ECO than the CTIUS due to the composition of each index. The CTIUS has a 30.9% sector weighting to energy generation and includes FSLR (2.77%) and STP (2.05%) but does not include YGE. The ECO assigns a 30% weighting to renewable energy; the other stocks' weightings in the CTIUS are less than 1%, and they are not a component of the ECO.

Table 4-2: CleanTech Correlation Coefficient Matrix, 2000-2008 (n/a: value missing)

         ZOLT     YGE      FSLR     STP      ENOC     IXIC     RUT      TE
ZOLT     1.0000   0.4600   0.4454   0.4361   0.3964   0.2683   0.3612   0.2552
YGE      0.4600   1.0000   0.6104   0.7278   0.3620   0.5689   0.5457   0.5799
FSLR     0.4454   0.6104   1.0000   0.5898   0.3156   0.5167   0.4923   0.2319
STP      0.4361   0.7278   0.5898   1.0000   0.3253   0.5753   0.5543   0.1053
ENOC     0.3964   0.3620   0.3156   0.3253   1.0000   0.4805   0.4886   0.4179
IXIC     0.2683   0.5689   0.5167   0.5753   0.4805   1.0000   0.8718   0.0327
RUT      0.3612   0.5457   0.4923   0.5543   0.4886   0.8718   1.0000   0.0976
OIX      0.2475   0.5959   0.5023   0.3986   0.8968   0.8968   0.4487   0.1134
ECO      0.6177   0.7477   0.6794   0.7358   0.5157   0.8441   0.8543   0.1756
CTIUS    0.3386   0.6710   0.6239   0.6730   0.4866   0.8510   0.8602   0.2467

         OIX      ECO      CTIUS    TE       P_Modules  P_Cells
ZOLT     0.2475   0.6177   0.3386   0.0175   n/a        n/a
YGE      0.5959   0.7477   0.6710   0.1899   n/a        n/a
FSLR     0.5023   0.6794   0.6239   0.2841   n/a        n/a
STP      0.3986   0.7358   0.6730   0.0134   1.0000     1.0000
ENOC     0.8968   0.5157   0.4866   0.3997   n/a        n/a
IXIC     0.8968   0.8441   0.8510   0.0292   n/a        n/a
RUT      0.4487   0.8543   0.8602   0.0091   0.1935     0.3490
OIX      1.0000   0.2951   0.1599   0.1134   0.1134     0.3458
ECO      0.6291   1.0000   0.9119   0.0896   0.7773     0.9467
CTIUS    0.8133   0.9119   1.0000   0.0379   0.3438     0.4796

Solar PV cell shipments had a perfect correlation with STP, but module shipments were perfectly negatively correlated; additional analysis is needed to further understand that relationship. The periods between 2005 and 2008 and between 2007 and 2008 showed very similar results. The Thomson Energy index is negatively correlated with the other components. The three solar companies become slightly more correlated with the industry benchmarks, in the range of 0.6280 to 0.7664, with FSLR on the low end.

Table 4-3: CleanTech Correlation Coefficient Matrix, 2007-2008

         IXIC     RUT      OIX      ECO      CTIUS    TE
ZOLT     0.6661   0.6788   0.5874   0.7001   0.6773   0.0932
YGE      0.5689   0.5457   0.5969   0.7477   0.6710   0.5799
FSLR     0.5202   0.4959   0.5023   0.6834   0.6280   0.2319
STP      0.6101   0.5819   0.4707   0.7664   0.7063   0.0977
ENOC     0.4805   0.4886   0.8968   0.5157   0.4866   0.4799
IXIC     1.0000   0.9467   0.8761   0.8684   0.8623   0.2674
RUT      0.9467   1.0000   0.7670   0.8712   0.8445   0.3177
OIX      0.2513   0.2419   1.0000   0.3347   0.3678   0.0703
ECO      0.8684   0.8823   0.3347   1.0000   0.9153   0.2423
CTIUS    0.8623   0.9275   0.3678   0.9153   1.0000   0.2251

Favorable, sustainable economics are ultimately the driving force behind innovation and development in the CleanTech industry. Various governments have implemented policies to support the renewable energy industry, including feed-in tariffs to support adoption during the formative development period, which is necessary to compete with conventional energy suppliers that are among the most subsidized industries globally. The solar public companies' valuations have declined, reflecting tempered expectations for future growth; fewer companies have been able to access the IPO market, and adoption cycles are longer than originally expected.
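The dollar-weighted internal rates of return cited for the Energy Sector vintage funds solve for the discount rate at which the fund's cash flows have zero net present value. A minimal sketch using bisection; the $10M/$45M cash-flow pattern is hypothetical, chosen only to illustrate the calculation:

```python
def npv(rate: float, cashflows: list[float]) -> float:
    """Net present value; cashflows[t] occurs at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows: list[float], lo: float = -0.99, hi: float = 10.0) -> float:
    """Dollar-weighted IRR via bisection (assumes a single sign change)."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        # NPV falls as the rate rises, so narrow the bracket toward the root.
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical fund: $10M invested at t=0, $45M returned after 3 years.
flows = [-10.0, 0.0, 0.0, 45.0]
print(round(irr(flows) * 100, 2))  # annualized IRR, in percent
```

Because IRR is dollar-weighted, it is sensitive to the timing of drawdowns and distributions, which is why vintage-year comparisons are the standard way to benchmark venture funds.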
Venture capitalists must find solutions to the capital intensity required by these companies, as well as demonstrate the patience to scale manufacturing in order to achieve the pricing targets associated with economies of scale.

CHAPTER 6 is preceded here by the Internet analysis.

CHAPTER 5
INTERNET INDUSTRY

The greatest influx of capital into the venture industry occurred during the Internet boom. The Internet revolution was not a single invention but a series of developments of interlinked technologies, protocols and standards for networking between computers. The appearance that the Internet was rapidly commercialized is misleading. It was only after decades of technology development by DARPA (the U.S. Defense Advanced Research Projects Agency) and the release of the restriction on commercial use that the Internet was widely commercialized, enabling new business models, services and communication applications for businesses and consumers alike. The development of TCP/IP technology and its adoption as a standard enabled every PC to become an Internet portal, which was pivotal in the market's development. The non-proprietary, open nature of Internet protocols encouraged interoperability, making it feasible and easy for start-ups to enter the market. By the time the word Internet became mainstream to public and private investors, the enabling technologies associated with the Internet were already 20 years old. The opening of the network truly began in 1988, when the U.S. Federal Networking Council approved interconnection of NSFNET to the commercial MCI Mail system. Companies such as AOL, Prodigy, CompuServe and Netscape were formed, and this was the beginning of the client-server revolution. During the 1990s, the Internet grew by 100% per year, with explosive growth between 1996 and 1997.
Venture-backed companies identified new business models and applications, including e-commerce, online communities, email, search, gaming, news and online collaborative software among many others, and by the late 1990s demand for Internet services was growing rapidly. Investors responded by investing heavily in Internet infrastructure. Several hundred billion dollars were spent on installing network capacity to meet not only current demand but also optimistic expectations of potential future demand. While Internet usage continued to grow, it did so much less quickly than originally anticipated. As a result, network capacity greatly exceeded demand, and many companies suffered financial problems or went out of business. From 1990 to 2000, world Internet usage grew 380.3%.32

During this period of enormous growth, businesses entering the Internet market scrambled to find economic models that worked. Free services supported by advertising shifted some of the direct costs away from the consumer, but only temporarily. Services such as free web pages, chat rooms and message boards for community building were available but not monetized. Online sales grew rapidly for books, music CDs and computers, but the profit margins were slim.

The arrival of Internet start-ups had a dramatic impact on venture capital investments. The industry went through a period of unprecedented growth throughout the 1990s and a virtual explosion during the two years between 1998 and 2000, with a fivefold increase in investments in those two years alone. In 2000, more than 45% of all venture capital investments had been made in Internet-related companies. According to Thomson Financial and the NVCA, Internet investment totaled $720 million in 1999 and exploded to $7.9 billion in 2000, but dropped to $2.5 billion in 2001 and down yet again to $1.1 billion in 2002. Internet-specific companies attracted $2.9 billion going into 629 deals in 2009, which represented a drop-off of 39% in dollars and a 30% decline in deals from 2008, when $4.8 billion went into 902 companies.33

There were many start-ups that pioneered new Internet applications. Among the early companies in the Internet space, … (1994) and AltaVista (1995) were the respective industry leaders. By August 2001, the directory model had begun to give way to more sophisticated search engines, giving rise to the formation of Google (1998), which had developed new approaches to relevancy ranking. Suddenly, the low cost of reaching millions worldwide, coupled with the possibility of selling to those people at the same moment they were reached, offered an opportunity to overturn established businesses in advertising, mail-order sales, customer relationship management and many more areas. The web was the new killer application, and it could bring together unrelated buyers and sellers in seamless and low-cost ways. Entrepreneurs from around the world developed new business models and ran to the nearest venture capitalist for funding. While some of these entrepreneurs had business experience, the majority were simply people with ideas who did not manage the capital influx prudently. Many Internet companies at this time were solely dependent upon advertising revenue that never materialized and simply raised capital based upon the number of …. Additionally, many dot-com business plans were predicated on the assumption that by using the Internet, they would bypass and displace the distribution channels of existing businesses and therefore not have to compete with them. When the established businesses with strong existing brands developed their own Internet presence, these hopes were shattered.

32. www.internetworldstats.com
33. According to the MoneyTree Report by PricewaterhouseCoopers and the National Venture Capital Association (NVCA), based on data from Thomson Reuters.
Since 2000, the start-up survivors have refined their business models such that they can demonstrate earnings and profitability. Companies such as Amazon expanded aggressively during the late 1990s to offer books, CDs, videos, DVDs, electronics, toys, tools, home furnishings, apparel and kitchen gadgets. Amazon entered into agreements with brand-name retailers including Target Corporation, Circuit City Stores, Inc. and the Borders Group, and made them partners. The success of Google shows that a dominant market position in the Internet era was often contestable. In the case of the market for web searches, one could argue that switching costs for a single end-user are low relative to other market segments such as Internet auctions, where a coordinated move among most, or at least some, of the agents linked to the network would be required to justify the decision by a single customer to switch to a competitor, giving eBay a defensible market position. Google's search quality, by contrast, had a huge impact on consumer usage and justified switching from leading portals such as MSN to a new company.

Venture capital returns in the Internet sector for vintage funds 1996, 1997 and 1998 were stellar, according to Cambridge Research Associates LLC. In the Internet e-commerce segment, the IRR for 1996, 1997 and 1998 was 225.68%, 702.99% and 270.75%, respectively. In Internet e-business, the IRR for the same periods was 82.21%, 138.84% and 107.76%, respectively. The vintage years of 2000 and 2001 generated low returns, but since 2002 the returns have exceeded the overall venture industry, reflecting the liquidity available for Internet companies. Between 2005 and 2009, there were 88 M&A exits, and between 2004 and 2008, there were approximately 53 Internet IPOs. In 1999, Kleiner Perkins and Sequoia had the foresight to each invest $12.5 million in Google for a 10% stake.
In the summer of 2004, these investments by Kleiner Perkins and Sequoia were worth $2.03 billion at Google's IPO price of $85 per share. While they originally intended to sell about 10% of their stakes in the IPO, both Kleiner Perkins and Sequoia did not sell at the IPO. Kleiner Perkins made its first large distribution of shares to limited partners on November 17, 2004, when it distributed about 5.7 million shares, about a quarter of its total stake, at $172.50 per share, which equates to about $983 million. It made another distribution of 11.4 million shares, about 54% of its stake, at $203.66 per share, for $2.3 billion. In May 2004, it made another distribution of 1.1 million shares worth $247 million. In total, to date it has distributed shares worth $3.549 billion.34

eBay also produced significant returns for Benchmark Capital. In 1997, Benchmark invested $6.7 million, and by the spring of 1999, its stake was worth $5 billion.35

Investors that bought public stocks of the Internet companies at the time of the IPO also generated impressive returns. If you had purchased the following companies at the time of their respective IPOs and sold the stocks on December 31, 2009, the returns were as follows: Yahoo (1348.33%), Sina (59.81%), eBay (1145.92%), Amazon (8435.96%) and Google (399.81%). Priceline was the only stock in the analysis that declined in value, dropping 60.43%. Investors that purchased shares of the companies listed above in the IPO and sold prior to the technology decline in 2000 (except for Priceline) generated returns ranging from 1000% to 7800% on their money. Investors that purchased shares post-crash also generated very good returns. Table 5-1 summarizes the T-Mean, A-Mean and SD for the components included in the sector analysis.

34. The information on the Sequoia Capital and Kleiner Perkins investment in Google is from the website (6/24/2005).
35. Benchmark Capital, Wikipedia.com.
Also included were PC Sales (PCS), Domain Names Registered (DN) and Households with Internet Connections (Inet).

Table 5-1: Mean Returns and Standard Deviation for the Internet Sector (n/a: value missing)

              GOOG              YHOO              SINA              EBAY              AMZN
Time Period   T-Mean   SD       T-Mean   SD       T-Mean   SD       T-Mean   SD       T-Mean   SD
1998-2000     n/a      n/a      52.29%   3.5266   9.82%    0.5198   68.85%   4.9752   n/a      n/a
2000-2009     24.73%   0.8420   4.54%    0.9811   47.49%   1.5582   31.00%   1.3148   21.97%   1.4581
2007-2009     25.92%   0.6106   2.61%    0.3222   27.17%   0.5649   4.54%    0.4650   46.91%   1.1216

              PCLN              MOX               IXIC              RUT
Time Period   T-Mean   SD       T-Mean   SD       T-Mean   SD       T-Mean   SD
1998-2000     96.75%   n/a      13.97%   0.4862   2.26%    0.0519   n/a      n/a
2000-2009     22.49%   7.1408   18.07%   0.7184   1.52%    0.3704   5.47%    0.3001
2007-2009     135.16%  0.5155   21.37%   0.5899   2.16%    0.3465   8.11%    0.2841

              TI                PCS               DN                Inet
Time Period   T-Mean   SD       T-Mean   SD       T-Mean   SD       T-Mean   SD
1998-2000     35.96%   0.4533   40.60%   0.0194   n/a      n/a      n/a      n/a
2000-2009     10.02%   0.1618   16.01%   0.1188   14.30%   0.8659   130.56%  0.5256
2007-2009     7.71%    0.1135   18.25%   0.0012   15.50%   0.1134   12.07%   0.0103

In the period 2000-2001, a drastic correction occurred that hit all dot-com and Internet-based stocks alike. This adjustment expressed newfound skepticism that the hoped-for effects that had driven those high valuations would materialize. Starting in late 2002, rehabilitation occurred for a selective group of surviving Internet stocks. In the 2002-2004 period, Internet companies that were clear market leaders, managed to convincingly assert their leadership position and, above all, started to generate profits outperformed the general market. The correlation summaries were assembled to determine the relationship between VC returns, the benchmark indices and a sample of public companies. Even though the Internet started to be commercialized in the early 1990s, there were already several public companies by the mid and late 1990s.
The initial period examined was 1995-2000, and since this period covers the technology bubble, the returns are highly correlated (>0.50) with the Nasdaq Composite and the MOX Index, except for China-based Sina and for Amazon. Similar to the semiconductor and CleanTech venture returns, the Thomson Internet index has a low correlation with the public companies, the Nasdaq Composite and the Russell 2000.

Table 5-2: Internet Correlation Coefficient Matrix, 1995-2000 (n/a: value missing)

        YHOO     SINA     EBAY     AMZN     PCLN     MOX      IXIC     RUT      TI       PCS
YHOO    1.0000   0.3469   0.5383   0.5927   0.3956   0.7399   0.5373   0.4543   0.2607   0.4053
AMZN    0.5927   0.2965   0.5121   1.0000   0.4001   0.6527   0.4721   0.4104   0.1857   0.2866
PCLN    0.3956   0.2307   0.4501   0.4001   1.0000   0.5588   0.4020   0.3867   n/a      n/a
MOX     0.7399   0.4164   0.6853   0.6527   0.5588   1.0000   0.8559   0.7543   n/a      n/a
IXIC    0.5373   0.3976   0.5369   0.4721   0.4020   0.8559   1.0000   0.8806   0.3663   0.9831
RUT     0.4543   0.3915   0.4673   0.4104   0.3867   0.7543   0.8806   1.0000   0.2921   0.2649
TI      0.5508   0.2360   0.2893   0.7208   0.8857   0.7346   0.5443   n/a      1.0000   0.7769
PCS     0.4053   n/a      n/a      0.2866   0.1062   0.3693   n/a      n/a      0.7769   1.0000

For the period between 1998 and 2000, the height of the technology bubble, the correlations are similar to the period between 1995 and 2000.

Table 5-3: Internet Correlation Coefficient Matrix, 1998-2000 (n/a: value missing)

        YHOO     SINA     EBAY     AMZN     PCLN     MOX      IXIC     RUT      TI       PCS
YHOO    1.0000   0.3469   0.5383   0.6431   0.3956   0.7399   0.6014   0.4987   0.2607   0.3359
AMZN    0.6431   0.2965   0.5121   1.0000   0.4001   0.6527   0.4897   0.4174   0.1857   0.2866
PCLN    0.3956   0.2307   0.4501   0.4001   1.0000   0.5588   0.4020   0.3867   n/a      n/a
MOX     0.7399   0.4164   0.6853   0.6527   0.5588   1.0000   0.8559   0.7543   n/a      n/a
IXIC    0.6014   0.3976   0.5369   0.4897   0.4020   0.8559   1.0000   0.8805   0.3663   0.9831
RUT     0.4987   0.3915   0.4673   0.4174   0.3867   0.7543   0.8805   1.0000   0.2921   0.2649
TI      0.5558   0.2360   0.2679   0.7208   0.8857   0.7737   0.5537   n/a      1.0000   0.7769
PCS     0.3359   n/a      n/a      0.2866   n/a      n/a      0.9831   0.2649   0.7769   1.0000

The period between 2000 and 2008 shows that the stocks and the benchmark indices are all highly correlated.
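The correlation matrices in these tables are pairwise Pearson coefficients of the periodic return series. A minimal sketch of how such a matrix is produced with NumPy; the return series below are hypothetical stand-ins, not the data behind the tables:

```python
import numpy as np

# Hypothetical quarterly return series (illustrative only).
returns = {
    "YHOO": [0.30, -0.10, 0.25, -0.20, 0.15],
    "EBAY": [0.28, -0.12, 0.20, -0.18, 0.10],
    "IXIC": [0.05, -0.02, 0.04, -0.06, 0.03],
}
names = list(returns)
# Each row of the input is one variable; corrcoef returns the Pearson
# correlation matrix, symmetric with 1.0 on the diagonal.
corr = np.corrcoef([returns[n] for n in names])
for name, row in zip(names, corr):
    print(name, [round(float(v), 4) for v in row])
```

Because the matrix is symmetric, each table above only needs its upper or lower triangle to convey all of the pairwise relationships.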
Thomson Internet remains only weakly correlated with the public comparables. Priceline, due to the performance of the business, begins to diverge; Sina remains less correlated with the other public stocks; and the overall correlation among the assets drops from previous levels.

Table 5-4: Internet Correlation Coefficient Matrix, 2000-2008

        GOOG     YHOO     SINA     EBAY     AMZN     PCLN     MOX      IXIC     RUT      TI       PCS
GOOG    1.0000   0.3516   0.3930   0.4440   0.4060   0.3220   0.6184   0.6065   0.5250   0.3857   0.5007
YHOO    0.3516   1.0000   0.3331   0.5327   0.4656   0.3111   0.6810   0.6066   0.4885   0.1803   0.3562
SINA    0.3930   0.3331   1.0000   0.3179   0.2663   0.2417   0.4359   0.4022   0.3718   0.0036   0.3401
EBAY    0.4440   0.5327   0.3179   1.0000   0.4783   0.3630   0.6973   0.6585   0.5440   0.0035   0.0518
AMZN    0.4060   0.4656   0.2663   0.4783   1.0000   0.3077   0.6088   0.5439   0.4562   0.0986   0.0236
PCLN    0.3220   0.3111   0.2417   0.3630   0.3077   1.0000   0.4846   0.4338   0.3927   0.0320   0.0467
MOX     0.6184   0.6810   0.4359   0.6973   0.6088   0.4846   1.0000   0.9250   0.7758   0.0708   0.8110
IXIC    0.6065   0.6066   0.4022   0.6585   0.5439   0.4338   0.9250   1.0000   0.8718   0.2188   0.7990
RUT     0.5250   0.4885   0.3718   0.5440   0.4562   0.3927   0.7758   0.8718   1.0000   0.2122   0.5356
TI      0.5909   0.3239   0.1566   0.2842   0.2597   0.2505   0.6316   0.6370   0.4595   1.0000   0.3136
PCS     0.5007   0.3562   0.3401   0.0518   0.0236   0.0467   0.8110   0.7990   0.5356   0.3136   1.0000

The Thomson Internet correlation coefficients for 2007-2008 changed dramatically as the returns began to converge with the indices and the public stocks.

Table 5-5: Internet Correlation Coefficient Matrix, 2007-2008

        GOOG     YHOO     SINA     EBAY     AMZN     PCLN     MOX      IXIC     RUT      TI       PCS
GOOG    1.0000   0.3464   0.5032   0.5469   0.5340   0.4073   0.7422   0.7419   0.6449   0.3846   0.7160
YHOO    0.3464   1.0000   0.4370   0.4334   0.2913   0.2669   0.5643   0.5158   0.4846   0.0573   0.4032
SINA    0.5032   0.4370   1.0000   0.4752   0.4151   0.3716   0.6903   0.6105   0.5471   0.6547   0.3372
EBAY    0.5469   0.4334   0.4752   1.0000   0.5120   0.4124   0.7288   0.7313   0.6917   0.4312   0.6401
AMZN    0.5340   0.2913   0.4151   0.5120   1.0000   0.3903   0.6814   0.6374   0.5586   0.8599   0.7341
PCLN    0.4073   0.2669   0.3716   0.4124   0.3903   1.0000   0.5831   0.5372   0.5177   0.0111   0.4534
MOX     0.7422   0.5643   0.6903   0.7288   0.6814   0.5831   1.0000   0.9477   0.8886   0.6566   0.7678
IXIC    0.7419   0.5158   0.6105   0.7313   0.6374   0.5372   0.9477   1.0000   0.9467   0.6626   0.5437
RUT     0.6449   0.4846   0.5471   0.6917   0.5586   0.5177   0.8886   0.9467   1.0000   0.6990   0.3819
TI      0.5618   0.4254   0.5701   0.4670   0.8447   0.6794   0.5968   0.5834   0.4687   1.0000   0.5216
PCS     0.7160   0.4032   0.3372   0.6401   0.7341   0.4534   0.7678   0.5437   0.3819   0.5216   1.0000

The Internet has become an integral part of everyday life, and it is likely that it will continue to transform, although at a much slower rate, over the next decade, with new applications transforming traditional industries and further linking consumers and businesses globally. The required capital for Internet start-ups has historically been low compared to other sectors, making multiple returns more achievable in an environment with lower exit multiples.

CHAPTER 6
WINDOW OF CONVERGENCE

This chapter summarizes the behavior of the public stocks compared to the indices, based upon daily returns grossed up quarterly. Exit timing can dramatically impact venture capital returns, and since the VC usually has a lock-up period of 180 days and cannot sell or distribute shares to the LPs during this period, the firm is exposed to ongoing market risk. The Time to Convergence, for the purpose of this analysis, was defined as the period between the IPO date (for more recent IPOs) or the analysis commencement date and the point when the returns are correlated >0.50 with the benchmark indices or other assets for more than three quarters. It is the period when investors can still generate returns uncorrelated with the overall market. This analysis was completed for the three sectors, and the findings were consistent: within approximately 2-3 years after the IPO, the returns of the new issuances begin to converge with the respective indices.

This finding is important in determining an exit strategy post-IPO, since the VC will essentially have a positive exit window of 18-24 months, depending upon its ownership level. In the semiconductor market, given the level of maturity, the group of stocks was broken down into two sections: the more mature companies and the more recent IPOs. Texas Instruments, Intel, Xilinx and Applied Materials were compared with the SOX index, with each other and then with the broader market benchmark indices. PLX and AnalogicTech were compared separately due to their more recent IPO dates. The convergence observations for the semiconductor industry were as follows:

Texas Instruments (TXN): publicly traded in 1974
This finding is important in determining an exit strategy post-IPO, since the VC will essentially have a positive exit window for 18-24 months depending upon their ownership level. In the semiconductor market, given the level of maturity, the group of stocks was broken down into two sections: the more mature companies and the more recent IPOs. Texas Instruments, Intel, Xilinx and Applied Materials were compared with the SOX index, with each other and then with the broader market benchmark indices. PLX and AnalogicTech were compared separately due to their more recent IPO dates. The convergence observations were as follows for the semiconductor industry:

Texas Instruments (TXN) - Publicly traded in 1974
- Converges with Nasdaq Composite in Q2 1992
- Converges with Russell 2000 in Q2 2001
- No signs of convergence with SOX
- Converges with AMAT in Q1 1994
- Converges with INTC in Q2 1994
- Converges with XLNX in Q2 1995

Applied Materials (AMAT) - Publicly traded in 1972
- Converges with Nasdaq Composite in Q4 1992
- Converges with Russell 2000 in Q4 1994
- No signs of convergence with SOX
- Converges with TXN in Q1 1994
- Converges with INTC in Q2 1995
- Converges with XLNX in Q2 1995

Intel (INTC) - Publicly traded in 1971
- Converges with Nasdaq Composite in Q1 1990
- Converges with Russell 2000 in Q4 2000
- No signs of convergence with SOX
- Converges with XLNX in Q1 1995

Xilinx (XLNX) - Publicly traded on 6/12/1990
- Converges with Nasdaq Composite in Q1 1993
- Converges with Russell 2000 in Q4 1999
- Converges with SOX in Q2 2000

AnalogicTech (AATI) - Publicly traded on 8/4/2005
- Converges with Nasdaq Composite in Q2 2007
- Converges with Russell 2000 in Q3 2007
- No signs of convergence with SOX
- Converges to PLXT in Q2 2007
- Converges to INTC in Q4 2007
- Converges to XLNX and TXN in Q1 2008
- Converges to AMAT in Q3 2008

PLX Technology (PLXT) - Publicly traded on 4/6/1999
- Converges with Nasdaq Composite in Q1 2001
- Converges with Russell 2000 in Q1 2003
- No signs of convergence with SOX
- Converges to INTC, XLNX and TXN in Q3 2001
- Converges to AMAT in Q3 2003

The convergence of the large capitalized stocks occurred in 1994 and 1995. Interestingly, there are few signs of convergence with the SOX, which has been regarded as the industry benchmark index. XLNX, AATI and PLXT converged to the Nasdaq Composite in approximately 7-11 quarters. The next part of the analysis examined the convergence of assets in solar. Suntech, First Solar and Yingli were compared with the industry and broader market benchmark indices. The convergence observations were as follows for the solar industry:

Suntech Power (STP) - Publicly traded on 12/14/05
- Converges with Nasdaq Composite in the 12th quarter
- Converges with Russell 2000 in the 13th quarter
- No signs of convergence with crude oil prices
- Converges with WilderHill Clean Energy Index in the 6th quarter
- Converges with CleanTech Index in the 6th quarter
- Converges with S&P Global Clean Energy Index in the 1st quarter since its inception in Q4 2008

First Solar (FSLR) - Publicly traded on 11/17/06
- Converges with Nasdaq Composite in the 8th quarter
- Converges with Russell 2000 in the 8th quarter
- No signs of convergence with crude oil prices
- Converges with WilderHill Clean Energy Index in the 4th quarter
- Converges with CleanTech Index in the 4th quarter
- Converges with S&P Global Clean Energy Index in the 1st quarter (since its inception in Q4 2008)

Yingli (YGE) - Publicly traded on 6/8/07
- Converges with Nasdaq Composite in the 7th quarter
- Converges with Russell 2000 in the 6th quarter
- No signs of convergence with crude oil prices
- Converges with WilderHill Clean Energy Index in the 2nd quarter
- Converges with CleanTech Index in the 2nd quarter
- Converges with S&P Global Clean Energy Index in the 1st quarter, since its inception in Q4 2008
- Converges with STP in the 3rd quarter
- Converges with FSLR in the 4th quarter

The solar stocks' returns first converge within 2-6 quarters to the industry benchmark.
There appears to be a time convergence in 2008 for the solar stocks, signaling that the market began to evaluate the companies on a consistent methodology. The solar stocks then converge to the Nasdaq Composite and the Russell 2000 within 7-12 quarters. The Internet returns of Google, Yahoo, Sina, EBay, Amazon and Priceline were compared with the industry and broader market benchmark indices. The convergence observations were as follows for the Internet industry:

Google (GOOG) - Publicly traded on 8/19/2004
- Converges with Nasdaq Composite in Q1 2006
- Converges with Russell 2000 in Q1 2006
- No signs of convergence with MOX in Q1 2006
- Converges with YHOO in Q2 2005
- Converges with Sina in Q1 2006
- Converges with EBay in Q3 2006
- Converges with Amazon in Q2 2006
- No signs of convergence with PCLN

Yahoo (YHOO) - Publicly traded on 4/12/1996
- Converges with Nasdaq Composite in Q3 1996
- Converges with Russell 2000 in Q1 1998
- No signs of convergence with MOX in Q1 1999
- Converges with Sina in Q3 2000
- Converges with EBay in Q4 2000
- Converges with Amazon in Q1 1998
- Converges with PCLN in Q3 2000

Sina (SINA) - Publicly traded on 4/12/2000
- Converges with Nasdaq Composite in Q1 2001
- Converges with Russell 2000 in Q4 2002
- Converges with MOX in Q1 2001
- No signs of convergence with EBay
- Converges with Amazon in Q1 2001
- Converges with PCLN in Q2 2001

EBay (EBAY) - Publicly traded on 9/21/1998
- Converges with Nasdaq Composite in Q2 2001
- Converges with Russell 2000 in Q1 2001
- Converges with MOX in Q3 2000
- Converges with Amazon in Q1 2001
- Converges with PCLN in Q2 1999

Amazon (AMZN) - Publicly traded on 5/15/1997
- Converges with Nasdaq Composite in Q1 1998
- Converges with Russell 2000 in Q1 1998
- No signs of convergence with MOX in Q1 2001
- Converges with PCLN in Q2 1999

Priceline (PCLN) - Publicly traded on 3/30/1999
- Converges with Nasdaq Composite in Q2 2000
- Converges with Russell 2000 in Q2 2000
- No signs of convergence with MOX in Q2 2000

The convergence time period for the
Internet industry was more rapid in some cases than observed in CleanTech and Semiconductors. The time to convergence ranged from 1 quarter to 8 quarters, depending upon the sector. For VC investors that have substantial ownership, managing the exit strategy based upon the above convergence observations is important.

CHAPTER 7 CHINA EMERGING VENTURE CAPITAL SECTOR

China's modern history began six decades ago with a grand experiment aimed at national development through a planned economic system based on public ownership. It failed miserably, marked by inefficient state-owned enterprises and waste. The central government stopped tinkering with its centrally planned economic system and began a transformation to a market economy. The three fundamental elements of a market economy: property rights, an open market and private enterprise, are absent in a planned economy. However, in China over the last 30 years, the government implemented various economic reforms, including limited property rights and personal incentives, developing a market system while maintaining the system of public ownership and a planned economy. The Chinese economic liberalization, which culminated in the WTO accession in 2001, has produced miraculous growth and development. GDP has grown at an average rate of close to 10% per year for the past 30 years. The rapid pace of development has enabled China to become the world's third-largest economy. Real GDP growth in China is shifting from being investment-driven to consumption-driven. The period through 2009 produced 356 IPOs, which raised $114.96 billion.[36] In 2008, VantagePoint Venture Partners analyzed 2007 liquidity events in China. An IPO is the preferred exit path for most CEOs, so it is not surprising that in 2007 there were 242 IPOs compared to 29 M&A transactions, of which 45% returned less than 1x capital. Of the 272 IPOs, 44 IPOs were for manufacturing, 35 for IT, 29 for CleanTech and 6 for Internet companies. The exit distribution for the 2007 companies was as follows:

Figure 7-1. China Exit Multiples

[36] Zero2IPO Research.

In 2007, based upon the IPO price, Chinese VCs did very well on Internet investments, which generated on average a return of 9.3x on invested capital. The returns for retail companies were 7.9x, CleanTech was 3rd at 7.8x, while IT returned 5.0x on invested capital. The top 10 venture capital exits included Goldwind at 53.0x, China High Speed Transmission at 51.9x, and the Internet companies Perfect World and Alibaba at 31.2x and 30.3x respectively. Others included in the list were Cinsure, Western Mining, China Digital TV, Air Media, Belle International and E-House. The inflow of venture capital in China has grown from $1.298 billion in 2002 to $5.85 billion in 2009, representing a mere 0.012% of GDP. In the first 11 months of 2009, 133 Chinese companies went public on 3 domestic and 9 foreign exchanges, raising a combined US$40.70B, or an average of US$306.04M. Of the 133 Chinese enterprises listed between January and November 2009, 58, or 43.6% of the total, were backed by VC or private equity funds. The destinations of overseas-listed Chinese enterprises were diversified: there were 62 Chinese enterprises listed on nine overseas markets including the HKMB, NASDAQ and NYSE. In 2009, the domestic A-share market underwent tremendous changes such as the completion of IPO system reform and the resumption of domestic IPOs. The launch of ChiNext marks an important milestone for capital formation for small and mid-sized enterprises (SMEs) in China as well as for the venture capital community, which was dependent upon overseas IPOs early in the industry's development. The ChiNext exchange is favorable for driving economic growth and enhancing the development of the venture capital industry by stimulating enthusiasm for entrepreneurship and innovation.
Since the launch of ChiNext in October 2009, 61 companies have gone public, with a total market capitalization of 266,123,872,461 yuan. The current P/E ratio is 82.65, compared to 35.71 for the Shanghai Stock Exchange. Regardless of the high valuations, ChiNext has provided venture capitalists and entrepreneurs with a liquidity option for domestic companies. Of the initial 28 companies which began trading on October 30, research from Chinese private equity data provider Zero2IPO shows that 23 were venture capital or private equity backed. Four of these companies had received funding from foreign firms.

CHAPTER 8 ROLLING BETA

In this section, the rolling beta of the Thomson Venture Overall Periodic IRR returns from 1990 until 2008 was calculated on a 12-quarter rolling basis using the S&P 500 as the market benchmark. Additionally, the rolling beta was calculated for the IXIC as well as the iShares MSCI Emerging Markets Index Fund, again using the S&P 500 as the benchmark for the same interval. The VC rolling 12-quarter beta oscillates around 1, which is expected since venture capital returns are highly dependent upon the exit multiples driven by the overall public markets. The rolling beta was calculated using the rolling point-to-point quarterly returns of the S&P 500 from the past 12 quarters. The increased volatility of Thomson overall venture data relative to the S&P 500 post-2000 is related to the technology crash in 2000-2001, the devaluation of venture-backed companies and an increase in liquidations impacting overall venture returns during this period. Since 2005, the rolling beta has hovered around 1, reflecting the dependency on IPOs, instead of M&A events, to drive positive returns.

CHAPTER 9 CONCLUSION

Over the last 10 years, overall venture capital returns have been impacted by the technology crash in 2000, the events of 2008 and an overall slowdown in innovation relative to the period from the 1960s until the late 1990s.
Although the overall outcome for the industry has been disappointing for the LPs over the last 10 years, there have been some bright spots for venture capitalists globally. Venture returns are highly dependent upon the rate of innovation and the development of new industries, not just technology development. VCs have a window of opportunity to invest that is dependent upon the market development cycle and extraneous conditions. The semiconductor market matured over a 40-year period. As the industry developed, venture capitalists had an opportunity to fund innovation during the transitional phases. Today, the semiconductor market is no longer an attractive market for venture capital, as it is unlikely that a revolutionary change in technology or application will be discovered. In the CleanTech sector, which is still undergoing a transformational change, other challenges exist: government support, subsidies and capital efficiency. Some segments of the CleanTech industry have already matured, while others are in the early stages of deployment. It is always a possibility that a company will emerge that can change the cost equation in the solar market or have a significant technology breakthrough which will be rewarded by the public markets or via an acquisition. The CleanTech sector today requires government policy and financial intervention to support the current economic model until scale can be achieved. In order for VCs to generate the required returns, at least one of three conditions must be present for favorable market conditions: oil prices rising to a level sufficient to inflict economic pain on the consumer and governments, continued demand growth in emerging economies such as China and India, or government intervention in the economic equation to incentivize consumers to adopt renewable energy, such as feed-in tariffs.
The Internet sector is still undergoing market changes as companies and startups identify new revenue models and markets, driven by the communization of the end users and the consumer demand for mobility. The Internet sector has produced stellar returns both before and after the 2000 technology crash. The VC returns in the Internet space are more highly correlated to the public markets and the benchmark indices, which is a reflection of a more robust liquidity market and a shortened holding period from the date of initial investment. It is not easy to pick out a dominant theme for the future when so much has happened in so many sectors to so many companies over the past 40 years in the U.S. It is highly evident that the venture capital industry will need to go back to the basics: long-term plays, focus on creating value through developing new lasting markets, managing the amount of capital invested and not investing in hype or a bubble.

LIST OF REFERENCES

Cambridge Associates LLC. (2009). U.S. Venture Capital Index and Benchmark Statistics, Non-Marketable Alternative Assets, September 30, 2009.
Chen, P., Baierl, G.T., & Kaplan, P.D. (2002). Venture Capital and its Role in Strategic Asset Allocation. The Journal of Portfolio Management.
Cochrane, John H. (2001). The Risk and Return of Venture Capital. Working paper, University of Chicago.
Cochrane, John H. (2003). Graduate School of Business, University of Chicago.
Cringely, R.X. (1992). Accidental Empires. Reading, MA: Addison-Wesley.
Dooley, J.J. (2008). Trends in U.S. Venture Capital Investments Related to Energy: 1980-2007. U.S. Department of Energy.
Friedman, A., & Cornford, D. (1989). Computer Systems Development: History, Organization and Implementation. Chichester: Wiley.
Gilder, George. (1993). "Computer Industry." The Concise Encyclopedia of Economics. David R. Henderson, ed. Originally published as The Fortune Encyclopedia of Economics, Warner Books.
Library of Economics and Liberty [Online]; accessed 17 February 2010; Internet.
Gompers, P. (1995). Optimal Investment, Monitoring, and the Staging of Venture Capital. Journal of Finance, 50(5), 1461-1489.
Gompers, P., & Lerner, J. (1997, March 22). Conference on The Economic Foundations of Venture Capital.
Gompers, Paul, & Lerner, Josh. (2001). The Money of Invention. Massachusetts: Harvard.
Gompers, Paul, Kovner, A., Lerner, J., & Scharfstein, David. (2006). Specialization and Success: Evidence from Venture Capital. Unpublished working paper, Harvard University.
Gompers, Paul. (1996). Grandstanding in the Venture Capital Industry. Journal of Financial Economics, 42, 133-156.
Gompers, Paul A., & Lerner, Josh. (2000). Money Chasing Deals? The Impact of Fund Inflows on Private Equity Valuations. Journal of Financial Economics, 55, 281-325.
Gompers, Paul A., & Lerner, Josh. (1997). Risk and Reward in Private Equity Investments: The Challenge of Performance Assessment. Journal of Private Equity, 1, 5-12.
Gompers, Paul, Kovner, Anna, Lerner, Josh, & Scharfstein, David. (2005, December). Venture Capital Investment Cycles: The Impact of Public Markets.
Gompers, Paul, Kovner, Anna R., Lerner, Josh, & Scharfstein, David. (2006). Skill vs. Luck in Entrepreneurship and Venture Capital: Evidence from Serial Entrepreneurs. NBER Working Paper Series, No. 12592.
Greenstein, S. (2000). Building and Delivering the Virtual World: Commercializing Services for Internet Access. Journal of Industrial Economics, 48, 391-411.
Greenstein, S. (1998). Commercializing the Internet. IEEE Micro, 18, 67.
Hellmann, Thomas F., & Puri, Manju. (1999, May). The Interaction between Product Market and Financing Strategy: The Role of Venture Capital. Sauder School of Business Working Paper.
Hellmann, Thomas, & Puri, Manju. (2002). On the Fundamental Role of Venture Capital. Economic Review (published by the Atlanta Federal Reserve Bank), 87, No. 4.
Hochberg, Yael (Johnson School of Management, Cornell University), Ljungqvist, Alexander (Stern School of Business, New York University and CEPR), & Lu, Yang.
Kedrosky, Paul. (2009, June 10). Right-Sizing the U.S. Venture Capital Industry. Kauffman Foundation Small Research Projects Research.
Keung, M. (2003). A New Empirical Study of Grandstanding in the Venture Capital Industry.
Kortum, S., & Lerner, J. (2000). Assessing the Contribution of Venture Capital to Innovation. RAND Journal of Economics, 31, 674-692.
Lecuyer, Christophe. (2005). Making Silicon Valley: Innovation and the Growth of High Tech, 1930-1970. Cambridge, MA: MIT Press.
Lerner, J. (2002). Boom and Bust in the Venture Capital Industry and the Impact on Innovation. Economic Review.
Hirukawa, Masayuki (Northern Illinois University), & Ueda, Masako (University of Wisconsin, Madison and CEPR). Venture Capital and Innovation: Which Is First?
McCarthy, Jonathan. (2004, May). What Investment Patterns across Equipment and Industries Tell Us about the Recent Investment Boom and Bust. Current Issues in Economics and Finance, Vol. 10, No. 6. Available at SSRN.
Meyer, T., & Mathonet, Y.P. (2005). Beyond the J Curve: Managing a Portfolio of Venture Capital and Private Equity Funds. London: John Wiley & Sons.
Parker, N. China.
Quigley, John M. University of California, Berkeley. July.
Quigley, J., & Woodward, S. (2003). An Index for Venture Capital. Department of Economics, Working Paper Series 1059, Institute for Business and Economic Research, UC Berkeley.
Sagari, Silvia B., & Guidotti, Gabriela. (1991). Venture Capital Operations and their Potential Role in LDC Markets. World Bank Working Paper No. WPS540.
Sieling, Mark Scott. Semiconductor Productivity Gains Linked to Multiple Innovations.
Sorensen, Morten. (2004). How Smart is Smart Money? A Two-Sided Matching Model of Venture Capital. Unpublished working paper, University of Chicago.
Spremann, Klaus (s/bf-HSG), University of St. Gallen. Doctoral Seminar, International Finance WS2000/01: Valuation of dot.com.
The Economist. (1997, January 25). Venture Capitalists: A Really Big Adventure. The Economist.
The Industrial Era 1970-1971.
Winston, B. (1998). Media Technology and Society: A History: From the Telegraph to the Internet. Routledge.
Woodward, S., & Hall, R.E. (2004). Benchmarking the Returns to Venture. NBER Working Papers 10202, National Bureau of Economic Research, Inc.
Woodward, S. (2009, August). Measuring Risk for Venture Capital and Private Equity Portfolios. Sand Hill Econometrics.
Tian, X., & Wang, T. (2009). Tolerance for Failure and Corporate Innovation. Available at SSRN.

BIOGRAPHICAL SKETCH

Melissa Cannon Guzy was born in 1965. She grew up in Woodbridge, Connecticut and graduated from the Hopkins School in 1983. She attended Wellesley College and earned her Bachelor of Science in finance from the University of Florida in 1986. Upon graduating with her B.S. in finance, Melissa worked for Prudential Securities in the investment bank in a variety of positions. In 2001, Melissa joined VantagePoint Venture Partners and is the Group Leader of the Emerging Markets-Asia Investment Team. She has 19 years of experience within the semiconductor industry as an entrepreneur, CEO and investor. Before joining VantagePoint, she founded and served as CEO of a VantagePoint-backed semiconductor packaging company, where she oversaw the development of leading-edge 3-D packaging that set new industry standards for high-density backplanes. Melissa is a Hopkins Fellow and participated in the Women's Leadership Program at Harvard University. Upon completion of her M.S. in finance, Melissa continued as a Partner at VantagePoint Venture Partners.
Opened 9 years ago
Closed 9 years ago
Last modified 9 years ago
#10877 closed (worksforme)
search_fields raises TypeError if field names are given as unicode objects.
Description
If you specify field names in search_fields as unicode objects, when you search you get the following exception:
Exception Type: TypeError
Exception Value: __init__() keywords must be strings
Exception Location: /usr/lib/python2.4/site-packages/django/contrib/admin/views/main.py in get_query_set, line 230
For example, the following code raises this TypeError:

from django.contrib import admin

class MyAdmin(admin.ModelAdmin):
    search_fields = [u'title']
This can be worked around by converting such unicode-specified field names to strings:
search_fields = [str(field) for field in unicode_search_fields]
Other similar specifiers, such as 'list_filter' and 'exclude' handle unicode objects the same way they handle strings.
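The coercion workaround can be exercised outside of Django as well; the field names below are hypothetical. On Python 2, the unicode names eventually reach a function call as `**kwargs` keys, which must be `str`, hence the reported error; coercing with `str()` sidesteps it:

```python
# Hypothetical unicode-specified field names, as in the report above.
unicode_search_fields = [u'title', u'author__name']

# Workaround: coerce each name to a plain string before assigning it
# to ModelAdmin.search_fields.
search_fields = [str(field) for field in unicode_search_fields]

print(search_fields)  # ['title', 'author__name']
```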
Change History (3)
comment:1 Changed 9 years ago by
comment:2 Changed 9 years ago by
Closing as worksforme since no new information has come to light.
comment:3 Changed 9 years ago by
Just found this searching for my username. Somehow I never saw the updates. I was indeed using 1.0.2 when I found this bug, for the record.
What version of Django are you using? Specifically if you are using 1.0.2 or less this has been fixed since(r10510).
See Also edit
Description editMJ: In a recent (oct-2006) OO frenzy on the Tcl chatroom resulting in code gems like neo and eos, I decided to write a Self-like extension for Tcl.The resulting C extension can be downloaded from [1].MJ Link is down and I lost the source. If anyone has a copy and can share it here I would be much obliged.After loading the extension, one object (Object) is known and the three commands (self, next and super) are imported. Every other object in the system will be cloned from Object or its clones and will have slots defined containing values or methods.ObjectThe generic prototype Object can be used to create new objects with clone. The generic object contains the following slots:
- clone new: creates a new object with name $new
- slot name value: create a value slot that will return $value if called without arguments and set the value if called with one argument
- slot name args body: will create a method slot
- slots: returns a list of all the defined slots on the object
- destroy: will destroy the object
- parents*: contains a list with all the parents of the object. This information is used for method dispatch. Object initially has no parents. Note that the parents* slot can be a value slot (as it is by default) or (TODO) a method slot, which can allow dynamic inheritance hierarchies as long as a list of objects is returned.
- For a cloned object, parents* initially contains the receiver of the clone message.
- self slotname args: will call slot $slotname on the receiver of the initial message. Without arguments it will return the name of the receiver.
- next: will call the same slot with the same args as currently executing, but the dispatcher will start looking for the slot only in the parents of the implementor of the currently executing slot, which is not the same as self for inherited slots.
- super slotname args: will call a different slot $slotname on the parents of the implementor of the currently executing slot.
- Variable slots are not real variables, so it is not possible to add traces to them.
- Currently method dispatch in a very deep parent child chain is slow. Doing dispatch on a slot that is defined 999 objects higher in the inheritance tree takes approximately 1000 microseconds on my machine, where as in the TIP 257 [2] implementation it takes only 12. This can be resolved using slot/call chain caching.
- It would be nice to be able to delegate methods based on their signature (an unknown on steroids), which is very useful for building megawidgets. For instance:
a slot delegate* {}
a delegate* {{cget .t} {* {self unknown}}}
# the unknown slot is now a normal slot.
# delegates will be called with the slotname and args

Examples
package require self

# create a Point object.
Object clone Point

# add a to_s slot to display information of the object
Object slot to_s {} {
    return "[self]"
}

# add x and y slots for the point, notice that these slots cannot be called for now.
Point slot x {args} {error "abstract slot, override in clone"}
Point slot y {args} {error "abstract slot, override in clone"}

# extend default behavior from parent (Object)
Point slot to_s {} {
    return "id: [next] ([self x],[self y])"
    # Here next will search for a slot named to_s in the parents of the
    # implementor of the current method (Point), finding the Object slot
    # to_s, and then execute it in the context of the receiver (which
    # will be a clone of Point)
}

# define a point factory
Point slot create {name x y} {
    self clone $name
    $name slot x $x
    $name slot y $y
}

# clone a Point
Point clone p1

# to_s will fail because the x and y slots in Point are called
catch {p1 to_s} err
puts $err

# use the Point factory which will define x and y slots
Point create p1 0 0

# to_s will now work
puts [p1 to_s]
Object clone A
A slot test args {return}
A clone a
a test
A clone debug
debug slot test {args} {puts "called test with $args"; next}
a parents* {debug}
a test 1 2 3
# example demonstrating how to override a widget
package require self
package require Tk

proc debugtext name {
    text $name
    rename $name _$name
    Object clone $name
    $name slot unknown args {
        puts "[self] $args"
        _[self] {*}$args
    }
    $name slot destroy {} {
        destroy _[self]
        rename _[self] {}
        next
    }
    return $name
}

debugtext .t
pack .t -expand 1 -fill both
button .b -text "Make readonly" -command make_ro
pack .b

proc make_ro {} {
    # allows on the fly redefining of behaviour
    .t slot insert args {puts stderr "readonly"}
    .t slot delete args {puts stderr "readonly"}
}
With all of these [Self]-like extensions, is it possible to make singleton objects by removing the clone function? Does that even make sense for prototype-based object systems? -- escargo 20 Oct 2006NEM: A singleton would just be an object. Perhaps an example of what you would be using the singleton for would be useful? I tend to avoid singletons. About the only place I use them is when defining the base case of some structure (e.g., if you define a binary search tree as two cases: Branch(left,right,val) and Empty, then the Empty case can be a singleton).MJ: As NEM already mentions above, a singleton only makes sense in a class based OO system where you want to instantiate a class only once. In a prototype based OO system everything is a singleton (there are no classes). However if you just want to disallow cloning of a specific object you can use the fact that clone is just a slot and redefine it e.g.:
% Object clone a
% a slot clone {args} {error "cannot clone"}
% a clone b
cannot clone
NEM: I've not tried the implementation yet, but I very much like the specification of this extension. If I make a slot contain an object, what is the syntax for sending messages to that object? From your description, it sounds like it would be something like:
MyObj slot pos [Point create $x $y]
puts "pos = [[MyObj pos] to_s]"

Is that correct? Would it be possible to make it like the following?
puts "pos = [MyObj pos to_s]"MJ: In the point implementation from above that create call should actually be Point create pos $x $y (note that automatic clone naming is trivial to add in the clone slot). Apart from that, you are correct. I guess it would be possible to implement this, but I cannot see a clear way to add this in the current implementation and not break anything else. It will certainly make slot dispatch more complicated; it has to do number of arguments checking for instance. Even figuring out if a slot contains another object is not straightforward in the current implementation. However, one could implement it with the existing functionality something like this:
Object slot addChild {object} { self slot $object {slot args} "return \[$object \$slot \{*\}\$args \]" } Object clone a Object clone pos pos slot to_s {} {return "I am [self]"} a addChild pos puts [a pos to_s] # even nested Object clone b b slot to_s {} {return "I am [self]"} pos addChild b puts [a pos b to_s] Or more elaborate: package require self Object slot children* {} Object slot addChild {name object} { self slot $name {slot args} "return \[$object \$slot \{*\}\$args \]" self children* [lappend [self children*] $object] } Object slot delete {} { foreach child [self children*] { $child delete } self destroy } Object clone Point Object slot to_s {} { return "[self]" } namespace eval self { namespace eval objs { variable counter } } Point slot x {args} {error "abstract slot, override in clone"} Point slot y {args} {error "abstract slot, override in clone"} Point slot to_s {} { return "id: [next] ([self x],[self y])" } Point slot create {x y} { set obj [self clone ::self::objs::obj[incr ::self::objs::counter]] $obj slot x $x $obj slot y $y return $obj } Object clone MousePointer MousePointer addChild pos [Point create 130 140] MousePointer pos x MousePointer pos to_s MousePointer delete # child is gone info commands ::self::objs::*On a side note, implementing something like this will take away some of the simplicity of the design IMO and I have tried to make the extension as simple as possible while still offering enough flexibility.Zarutian 2006-10-26 15:35 UTC: I find this extension interesting but I haven't tried it out yet but plan to do just that.
Routing Slip and Load Balancer EIPs
Introduction
Today’s businesses are not run on single monolithic systems; most businesses have a full range of disparate systems. There is an ever-increasing demand for those systems to integrate with each other and with external business partners and government systems.
Let’s face it, integration is a hard problem. To help deal with the complexity of integration problems, the Enterprise Integration Patterns (EIP) have become the standard way to describe, document, and implement complex integration problems. Gregor Hohpe and Bobby Woolf’s book Enterprise Integration Patterns1 has become the bible in the integration space and an essential reading for any integration professional.
The EIP book distils 64 patterns in approximately 700 pages; Camel implements nearly all of those patterns, plus eight additional patterns of its own. This article is devoted to two of the most powerful and feature-rich patterns, Routing Slip and Load Balancer.
Routing Slip EIP
There are times when you need to route messages in a dynamic fashion. For example, you may have an architecture processing incoming messages that must undergo a sequence of processing steps and business rule validations. Since the nature of the steps and validations varies widely, we implement each type of step as a separate filter. Each filter applies its business rule(s) and validations when applicable.
This architecture could be implemented using Pipes and Filters together with the Filter EIP. However, as is often the case with the EIP patterns, there is a nicer way, known as the Routing Slip EIP. A Routing Slip acts as a dynamic route that dictates the next step a message should undergo. Figure 1 shows this principle.
The Camel Routing Slip EIP requires using a preexisting header as the attached slip; this means you must prepare and attach the header beforehand.
We start with a simple example that shows how to use the Routing Slip EIP, which uses the sequence outlined in figure 1. In Java DSL the route is as simple as follows:
from("direct:start").routingSlip("mySlip");

And, in Spring XML, it's easy as well:

<route>
  <from uri="direct:start"/>
  <routingSlip headerName="mySlip"/>
</route>
This example assumes the incoming message contains the attached slip in the header with the key “mySlip”. The following test method shows how you should fill out the key.
public void testRoutingSlip() throws Exception {
    getMockEndpoint("mock:a").expectedMessageCount(1);
    getMockEndpoint("mock:b").expectedMessageCount(0);
    getMockEndpoint("mock:c").expectedMessageCount(1);

    template.sendBodyAndHeader("direct:start", "Hello World",
                               "mySlip", "mock:a,mock:c");

    assertMockEndpointsSatisfied();
}
As you can see, the value of the key is simply the endpoint URIs separated by a comma. Comma is the default delimiter; however, the routing slip supports using custom delimiters. For example, to use a semicolon you do:
from("direct:start").routingSlip("mySlip", ";");
And, in Spring XML:
<routingSlip headerName="mySlip" uriDelimiter=";"/>
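At its core, slip-driven dispatch boils down to splitting the header value on the delimiter and sending the message to each endpoint in turn. The following stand-alone Python sketch illustrates that idea only; it is not Camel code, and the function names are made up for illustration:

```python
def process_routing_slip(message, slip, send, delimiter=","):
    """Send the message through each endpoint named in the slip, in order."""
    for endpoint in slip.split(delimiter):
        message = send(endpoint.strip(), message)
    return message

# A toy "send" that just records which endpoints saw the message.
visited = []

def record_send(endpoint, message):
    visited.append(endpoint)
    return message

process_routing_slip("Hello World", "mock:a,mock:c", record_send)
print(visited)  # ['mock:a', 'mock:c']
```

Note how the delimiter is a plain parameter here, mirroring the optional uriDelimiter in the Camel DSL above.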
NOTE
The Camel Routing Slip may be improved in the future to use an Expression to compute the slip. Currently, it requires the slip to preexist as a header. The Camel team will also implement more EIP patterns as annotations in future releases of Camel, which means a @RoutingSlip annotation is most likely to appear.
The example we just covered expected a preexisting header to contain the routing slip. What if the message does not contain such a header? Well, in those situations, you have to compute the header beforehand in any way you like. In the next example we show how to compute the header using a bean.
Using a bean to compute the routing slip header
To keep things simple, we have kept the logic to compute a header that either contains two or three steps:
public class ComputeSlip {

    public String compute(String body) {
        String answer = "mock:a";
        if (body.contains("Cool")) {
            answer += ",mock:b";
        }
        answer += ",mock:c";
        return answer;
    }
}
All we need to do now is leverage this bean to compute the routing slip header before the Routing Slip EIP runs. In Java DSL, we can use the method call expression to invoke the bean and set the header as highlighted below:
from("direct:start")
    .setHeader("mySlip").method(ComputeSlip.class)
    .routingSlip("mySlip");
And, in Spring XML, you can do it as follows:
<route>
  <from uri="direct:start"/>
  <setHeader headerName="mySlip">
    <method beanType="camelinaction.ComputeSlip"/>
  </setHeader>
  <routingSlip headerName="mySlip"/>
</route>
The source code for Camel in Action contains the examples we have covered in the chapter8/routingslip directory, which you can try using the following maven goals:
mvn test -Dtest=RoutingSlipSimpleTest
mvn test -Dtest=SpringRoutingSlipSimpleTest
mvn test -Dtest=RoutingSlipTest
mvn test -Dtest=SpringRoutingSlipTest
You have now seen the Routing Slip EIP in action which concludes the third of the four patterns from table 8.1 we will visit in this chapter.
Load Balancer was not distilled in the EIP book, which is not necessarily a bad thing. If the authors of the EIP book wrote a second edition, they most likely would add a pattern about load balancing. In the next section you will learn about Camel’s built-in Load Balancer EIP, which makes it easy for you to leverage load balancing in cases where a load balancing solution is not already in place.
Load Balancer EIP
You may already be familiar with the load balancing [2] concept in computing. Load balancing is a technique to distribute workload across computers or other resources, in order to optimize utilization, improve throughput, minimize response time, and avoid overload. This service can be provided either in the form of a hardware device or as a piece of software. Such a piece of software is provided in Camel as the Load Balancer EIP pattern.
In this section we will learn about the Load Balancer EIP pattern by walking through an example. Then, we’ll see the various types of load balancer Camel offers out of the box. After that, we’ll focus on the failover type. You can also build your own load balancer, which we cover last.
Introducing Camel Load Balancer EIP
The Camel Load Balancer EIP is a Processor that implements the interface org.apache.camel.processor.loadbalancer.LoadBalancer. The LoadBalancer offers methods to add and remove Processors that should participate. By using Processor instead of Endpoints, the load balancer is capable of balancing anything that you can define in your Camel routes. You can draw a parallel to the Content-Based Router EIP pattern, which is capable of letting messages choose different route paths.
However, most often you would use the Load Balancer EIP to balance across a number of external remote services. Such an example is illustrated in figure 2, where a Camel application needs to load-balance across two services.
When using the Load Balancer EIP, you have to select a strategy. A common and easy-to-understand strategy would be to let it take turns among the services, which is known as the round robin strategy. We will take a look at all the strategies Camel provides out of the box.
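The round robin idea itself is simple enough to sketch in a few lines of plain Python. This is an illustrative stand-in for the selection logic, not Camel's implementation, and the class name is made up:

```python
import itertools

class RoundRobin:
    """Minimal round robin selection: take turns among the processors."""

    def __init__(self, processors):
        self._cycle = itertools.cycle(processors)

    def choose(self):
        return next(self._cycle)

lb = RoundRobin(["seda:a", "seda:b"])
picks = [lb.choose() for _ in range(4)]
print(picks)  # ['seda:a', 'seda:b', 'seda:a', 'seda:b']
```

Each message simply goes to the next processor in the cycle, which is exactly the "take turns" behavior described above.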
We will now illustrate how to use the Load Balancer with the round robin strategy based on a small test example. First, we show the Java DSL with the load balancer highlighted:
from("direct:start")
    .loadBalance().roundRobin()
        .to("seda:a").to("seda:b")
    .end();

from("seda:a")
    .log("A received: ${body}")
    .to("mock:a");

from("seda:b")
    .log("B received: ${body}")
    .to("mock:b");
And, the equivalent route in Spring XML is as follows:
<route>
  <from uri="direct:start"/>
  <loadBalance>
    <roundRobin/>
    <to uri="seda:a"/>
    <to uri="seda:b"/>
  </loadBalance>
</route>

<route>
  <from uri="seda:a"/>
  <log message="A received: ${body}"/>
  <to uri="mock:a"/>
</route>

<route>
  <from uri="seda:b"/>
  <log message="B received: ${body}"/>
  <to uri="mock:b"/>
</route>
The routes will load balance across the two nodes that send the message to the external services:
.to("seda:a").to("seda:b")
Suppose we start sending messages to the route. The first message would be sent to the “seda:a” endpoint and the next would go to “seda:b”. Then, the third message would start over and be sent to “seda:a”, and so forth.
The source code for Camel in Action contains this example in the chapter8/loadbalancer directory, which you can try using the following maven goals:
mvn test -Dtest=LoadBalancerTest
mvn test -Dtest=SpringLoadBalancerTest
If you run the example, the console would output something like this:
[Camel Thread 0 - seda://a] INFO route2 - A received: Hello
[Camel Thread 1 - seda://b] INFO route3 - B received: Camel rocks
[Camel Thread 0 - seda://a] INFO route2 - A received: Cool
[Camel Thread 1 - seda://b] INFO route3 - B received: Bye
In the next section we will review the various load balancer strategies you can use with the Load Balancer EIP.
Load balancer strategies
A load balancer strategy dictates which Processor should process an incoming message, and it is entirely up to each strategy how it chooses the Processor. Camel provides six strategies, listed in table 1.
Table 1 Load balancer strategies provided by Camel
The first four strategies are easy to set up and use in Camel. For example, using the random strategy is just a matter of specifying in Java DSL:
from("direct:start")
    .loadBalance().random()
        .to("seda:a").to("seda:b")
    .end();
And, in Spring XML:
<route>
  <from uri="direct:start"/>
  <loadBalance>
    <random/>
    <to uri="seda:a"/>
    <to uri="seda:b"/>
  </loadBalance>
</route>
However, the sticky strategy requires that you provide a correlation expression, which is used to calculate a hashed value to indicate which processor to use. Suppose your messages contain a header indicating different levels. Using the sticky strategy you can have messages with the same level choose the same processor over and over again.
In Java DSL, you would provide the expression as highlighted:
from("direct:start")
    .loadBalance().sticky(header("type"))
        .to("seda:a").to("seda:b")
    .end();
And, in Spring XML:
<route>
  <from uri="direct:start"/>
  <loadBalance>
    <sticky>
      <correlationExpression>
        <header>type</header>
      </correlationExpression>
    </sticky>
    <to uri="seda:a"/>
    <to uri="seda:b"/>
  </loadBalance>
</route>
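The essence of the sticky strategy — hashing a correlation value so the same value always lands on the same processor — can be sketched in plain Python. This is a conceptual illustration only (the class name is invented, and Camel's actual hashing details may differ):

```python
class StickyBalancer:
    """Map a correlation value to a processor via a hash, so the same
    value is always routed to the same processor (within one run)."""

    def __init__(self, processors):
        self.processors = processors

    def choose(self, correlation_value):
        index = hash(correlation_value) % len(self.processors)
        return self.processors[index]

lb = StickyBalancer(["seda:a", "seda:b"])
# The same "type" header value always picks the same endpoint.
assert lb.choose("gold") == lb.choose("gold")
assert lb.choose("silver") == lb.choose("silver")
```

The point is that "stickiness" costs nothing beyond a stable hash of the chosen message field.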
The source code for Camel in Action contains examples for the strategies listed in table 1 in the chapter8/loadbalancer directory. For example, to try random, sticky, or topic you can use the following maven goals:
mvn test -Dtest=RandomLoadBalancerTest
mvn test -Dtest=SpringRandomLoadBalancerTest
mvn test -Dtest=StickyLoadBalancerTest
mvn test -Dtest=SpringStickyLoadBalancerTest
mvn test -Dtest=TopicLoadBalancerTest
mvn test -Dtest=SpringTopicLoadBalancerTest
The failover strategy is a more elaborate strategy, which we cover now.
Using the failover load balancer
Load balancing is often used to implement failover—the continuation of a service after a failure. The Camel failover load balancer detects the failure when an exception occurs and reacts by letting the next processor take over processing the message.
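The failover behavior — try a processor, and on a matching exception let the next one take over — can be sketched in a few lines of plain Python. This is illustrative only; FailoverBalancer is a made-up name, not a Camel class:

```python
class FailoverBalancer:
    """Try processors in order; if one raises a matching exception,
    let the next one take over."""

    def __init__(self, processors, exceptions=(Exception,)):
        self.processors = processors
        self.exceptions = exceptions

    def process(self, message):
        last_error = None
        for processor in self.processors:
            try:
                return processor(message)
            except self.exceptions as exc:
                last_error = exc
        raise last_error  # every processor failed

def service_a(msg):
    raise IOError("connection refused")  # simulate a broken service

def service_b(msg):
    return "b handled: " + msg

lb = FailoverBalancer([service_a, service_b], exceptions=(IOError,))
print(lb.process("Hello"))  # b handled: Hello
```

Restricting the exceptions tuple mirrors the ability, shown below, to tell Camel's failover load balancer to react only to specific exception types.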
Given the route snippet below, the failover will always start by sending the messages to the first processor (“direct:a”), and only in case of a failure will it let the next one (“direct:b”) take over, and so forth.
from("direct:start")
    .loadBalance().failover()
        .to("direct:a").to("direct:b")
    .end();
And, the equivalent snippet in Spring XML is as follows:
<route>
  <from uri="direct:start"/>
  <loadBalance>
    <failover/>
    <to uri="direct:a"/>
    <to uri="direct:b"/>
  </loadBalance>
</route>

If you run the example, it's constructed to send in four messages, the second of which will fail over and be processed by the “direct:b” processor. The other three messages will be processed successfully by “direct:a”.
In this example, the failover load balancer will react to any kind of exception. However, you can provide it with a number of exceptions that make it react.
Suppose we only want to failover if an IOException is thrown (which indicates communication errors with external services, such as no connection). This is very easy to configure, as shown in Java DSL:
from("direct:start")
    .loadBalance().failover(IOException.class)
        .to("direct:a").to("direct:b")
    .end();
And, in Spring XML:
<route>
  <from uri="direct:start"/>
  <loadBalance>
    <failover>
      <exception>java.io.IOException</exception>
    </failover>
    <to uri="direct:a"/>
    <to uri="direct:b"/>
  </loadBalance>
</route>
In this example, we only specified one exception; however, you can specify multiple exceptions.
You may have noticed in the failover examples that the load balancer works by always choosing the first processor and failing over to subsequent processors. In other words, the first processor is the master and the others are slaves.
The failover load balancer offers a strategy that combines round robin with failure support.
USING FAILOVER WITH ROUND ROBIN
The Camel failover load balancer in round robin mode gives you the best of both worlds. It allows you to distribute the load evenly between the services and have automatic failover as well.
In this scenario, you have three configuration options on the load balancer that dictate how it operates at runtime, as listed in table 2.
Table 2 Failover load balancer configuration options
To better understand the options from table 2 and how the round robin mode works, we start with a fairly simple example.
In Java DSL, you have to configure the failover with all of the options, as highlighted below:
from("direct:start")
    .loadBalance().failover(1, false, true)
        .to("direct:a").to("direct:b")
    .end();
In this example, we have set maximumFailoverAttempts to 1, which means we will try to fail over at most once. That means we will try at most two times: once for the initial request and once for the failover attempt. If both fail, Camel propagates the exception back to the caller. In this example, we have also disabled the Camel error handler; we will discuss this option in more detail in the next example. The last parameter indicates that we use the round robin mode.
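Combining round robin with a bounded number of failover attempts can be sketched in plain Python. This is a conceptual illustration of the semantics described above (made-up names, not Camel's actual implementation, which also handles threading and error-handler interplay):

```python
class RoundRobinFailover:
    """Round robin distribution with bounded failover: after a failure,
    move on to the next processor; give up once the configured number
    of failover attempts is exhausted."""

    def __init__(self, processors, maximum_failover_attempts=1):
        self.processors = processors
        self.maximum_failover_attempts = maximum_failover_attempts
        self._index = 0

    def process(self, message):
        attempts = 0
        while True:
            processor = self.processors[self._index]
            self._index = (self._index + 1) % len(self.processors)
            try:
                return processor(message)
            except Exception:
                attempts += 1
                if attempts > self.maximum_failover_attempts:
                    raise  # initial try plus all failover attempts failed

def failing(msg):
    raise IOError("down")

def healthy(msg):
    return "ok: " + msg

lb = RoundRobinFailover([failing, healthy], maximum_failover_attempts=1)
print(lb.process("Hello"))  # the initial attempt fails, the failover succeeds
```

With maximum_failover_attempts set to 1, each message gets at most two tries: the initial request and one failover attempt, matching the behavior of the route above.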
In Spring XML, you configure the options as attributes on the <failover/> EIP:
<route>
  <from uri="direct:start"/>
  <loadBalance>
    <failover roundRobin="true" maximumFailoverAttempts="1"/>
    <to uri="direct:a"/>
    <to uri="direct:b"/>
  </loadBalance>
</route>

If you are curious about the inheritErrorHandler configuration option, please look at the following examples in the source code for Camel in Action:
mvn test -Dtest=FailoverInheritErrorHandlerLoadBalancerTest
mvn test -Dtest=SpringFailoverInheritErrorHandlerLoadBalancerTest
This concludes our tour of the failover load balancer. The next section teaches you how to implement and use your own custom strategy, which you may want when you need special balancing logic.
Using a custom load balancer
When implementing a custom load balancer, you would often extend the LoadBalancerSupport class, which provides a good starting point. Listing 1 shows how we have implemented the custom load balancer.
Listing 1 Custom load balancer
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.processor.loadbalancer.LoadBalancerSupport;

public class MyCustomLoadBalancer extends LoadBalancerSupport {

    public void process(Exchange exchange) throws Exception {
        Processor target = chooseProcessor(exchange);
        target.process(exchange);
    }

    protected Processor chooseProcessor(Exchange exchange) {
        String type = exchange.getIn().getHeader("type", String.class);
        if ("gold".equals(type)) {
            return getProcessors().get(0);
        } else {
            return getProcessors().get(1);
        }
    }
}
As you can see, it doesn’t take much code. In the process() method we invoke the chooseProcessor() method, which essentially is the strategy that picks the processor to process the message. In this example, it picks the first processor if the message is of gold type, and the second processor if not.
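The same content-based choice can be sketched in a few lines of plain Python, which may help if the Java listing is unfamiliar. The class name and the dict-based "exchange" are invented for illustration:

```python
class GoldFirstBalancer:
    """Plain-Python sketch of the chooseProcessor() logic in listing 1:
    'gold' messages go to the first processor, everything else to the second."""

    def __init__(self, processors):
        self.processors = processors

    def process(self, exchange):
        target = self.choose_processor(exchange)
        return target(exchange)

    def choose_processor(self, exchange):
        if exchange.get("type") == "gold":
            return self.processors[0]
        return self.processors[1]

lb = GoldFirstBalancer([lambda ex: "seda:a", lambda ex: "seda:b"])
print(lb.process({"type": "gold"}))    # seda:a
print(lb.process({"type": "silver"}))  # seda:b
```

As in the Java version, the whole "strategy" is just the choose step; the process step merely delegates to whichever processor was chosen.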
In Java DSL, you use a custom load balancer as highlighted:
from("direct:start")
    .loadBalance(new MyCustomLoadBalancer())
        .to("seda:a").to("seda:b")
    .end();
And in Spring XML, you need to declare a Spring <bean/> tag:
<bean id="myCustom" class="camelinaction.MyCustomLoadBalancer"/>
Which you then refer to from the <loadBalance> EIP:
<route>
  <from uri="direct:start"/>
  <loadBalance ref="myCustom">
    <to uri="seda:a"/>
    <to uri="seda:b"/>
  </loadBalance>
</route>
The source code for Camel in Action contains this example in the chapter8/loadbalancer directory, which you can try using the following maven goals:
mvn test -Dtest=CustomLoadBalancerTest
mvn test -Dtest=SpringCustomLoadBalancerTest
We have now covered the Load Balancer EIP in Camel. You now know that, when you need a load balancing solution, you may be able to settle for the built-in Load Balancer EIP in Camel. In fact, you can even build your own custom strategy when the out-of-the-box strategies don’t meet your requirements. For example, you could build a strategy that acquires load statistics from the services and picks the service with the lowest load.
Summary
Thanks to the arrival of the EIP book on the scene, we now have a common vocabulary and set of concepts for architecting applications to tackle today’s integration challenges. This article reviewed two of the most complex and sophisticated patterns in great detail.
We covered the Routing Slip EIP and the Load Balancer EIP, the latter of which was not portrayed in the EIP book. We covered how it works and the six different strategies Camel offers out of the box.
You can visit the Camel website [4] for a list of the EIP patterns implemented in Camel and for additional information about those patterns.
On Thu, Nov 24, 2011 at 3:32 PM, PJ Eby <pje at telecommunity.com> wrote: > You're right; I didn't think of this because I haven't moved past Python 2.5 for production coding as yet. ;-) Yeah, there's absolutely no way we could have changed this in 2.x - with implicit relative imports in packages still allowed, there's too much code such a change in semantics could have broken. In Py3k though, most of that code is already going to break one way or another: if they don't change it, attempting to import it will fail (since implicit relative imports are gone), while if they *do* switch to explicit relative imports to make importing as a module work, then they're probably going to break direct invocation (since __name__ and __package__ will be wrong unless you use '-m' from the correct working directory). The idea behind PEP 395 is to make converting to explicit relative imports the right thing to do, *without* breaking dual-role modules for either use case. > However, if we're going on the basis of how many newbie errors can be solved > by Just Working, PEP 402 will help more newbies than PEP 395, since you must > first *have* a package in order for 395 to be meaningful. ;-) Nope, PEP 402 makes it worse, because it permanently entrenches the current broken sys.path[0] initialisation with no apparent way out. That first list in the current PEP of "these invocations currently break for modules inside packages"? They all *stay* broken forever under PEP 402, because the filesystem no longer fully specifies the package structure - you need an *already* initialised sys.path to figure out how to translate a given filesystem layout into the Python namespace. With the package structure underspecified, there's no way to reverse engineer what sys.path[0] *should* be and it becomes necessary to put the burden back on the developer. 
Consider this PEP 382 layout (based on the example __init__.py based layout I use in PEP 395):

    project/
        setup.py
        example.pyp/
            foo.py
            tests.pyp/
                test_foo.py

There's no ambiguity there: We have a top level project directory containing an "example" package fragment and an "example.tests" subpackage fragment. Given the full path to any of "setup.py", "foo.py" and "test_foo.py", we can figure out that the correct thing to place in sys.path[0] is the "project" directory. Under PEP 402, it would look like this:

    project/
        setup.py
        example/
            foo.py
            tests/
                test_foo.py

Depending on what you put on sys.path, that layout could be defining a "project" package, an "example" package or a "tests" package. The interpreter has no way of knowing, so it can't do anything sensible with sys.path[0] when the only information it has is the filename for "foo.py" or "test_foo.py". Your best bet would be the status quo: just use the directory containing that file, breaking any explicit relative imports in the process (since __name__ is correspondingly inaccurate). People already have to understand that Python modules have to be explicitly marked - while "foo" can be executed as a Python script, it cannot be imported as a Python module. Instead, the name needs to be "foo.py" so that the import process will recognise it as a source module. Explaining that importable package directories are similarly marked with either an "__init__.py" file or a ".pyp" extension is a fairly painless task - people can accept it and move on, even if they don't necessarily understand *why* it's useful to be explicit about package layouts. (Drawing that parallel is even more apt these days, given the ability to explicitly execute any directory containing a __main__.py file regardless of the directory name or other contents) The mismatch between __main__ imports and imports from everywhere else, though? That's hard to explain to *experienced* Python programmers, let alone beginners.
My theory is that if we can get package layouts to stop breaking most invocation methods for modules inside those packages, then beginners should be significantly less confused about how imports work because the question simply won't arise. Once the behaviour of imports from __main__ is made consistent with imports from other modules, then the time when people need to *care* about details like how sys.path[0] gets initialised can be postponed until much later in their development as a Python programmer. >> (which are *supposed* to be dead in 3.x, but linger in >> __main__ solely due to the way we initialise sys.path[0]). If a script >> is going to be legitimately shipped inside a package directory, it >> *must* be importable as part of that package namespace, and any script >> in Py3k that relies on implicit relative imports fails to qualify. > > Wait a minute... What would happen if there were no implicit relative > imports allowed in __main__? > Or are you just saying that you get the *appearance* of implicit relative > importing, due to aliasing? The latter - because the initialisation of sys.path[0] ignores package structure information in the filesystem, it's easy to get the interpreter to commit the cardinal aliasing sin of putting a package directory on sys.path. In a lot of cases, that kinda sorta works in 2.x because of implicit relative imports, but it's always going to cause problems in 3.x. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
Repeat a scipy csr sparse matrix along axis 0
I wanted to repeat the rows of a scipy csr sparse matrix, but when I tried to call numpy's repeat method, it simply treats the sparse matrix like an object, and would only repeat it as an object in an ndarray. I looked through the documentation, but I couldn't find any utility to repeats the rows of a scipy csr sparse matrix.
I wrote the following code that operates on the internal data, which seems to work
def csr_repeat(csr, repeats):
    if isinstance(repeats, int):
        repeats = np.repeat(repeats, csr.shape[0])
    repeats = np.asarray(repeats)
    rnnz = np.diff(csr.indptr)
    ndata = rnnz.dot(repeats)
    if ndata == 0:
        return sparse.csr_matrix((np.sum(repeats), csr.shape[1]),
                                 dtype=csr.dtype)
    indmap = np.ones(ndata, dtype=np.int)
    indmap[0] = 0
    rnnz_ = np.repeat(rnnz, repeats)
    indptr_ = rnnz_.cumsum()
    mask = indptr_ < ndata
    indmap -= np.int_(np.bincount(indptr_[mask],
                                  weights=rnnz_[mask],
                                  minlength=ndata))
    jumps = (rnnz * repeats).cumsum()
    mask = jumps < ndata
    indmap += np.int_(np.bincount(jumps[mask],
                                  weights=rnnz[mask],
                                  minlength=ndata))
    indmap = indmap.cumsum()
    return sparse.csr_matrix((csr.data[indmap],
                              csr.indices[indmap],
                              np.r_[0, indptr_]),
                             shape=(np.sum(repeats), csr.shape[1]))
and be reasonably efficient, but I'd rather not monkey patch the class. Is there a better way to do this?
Edit
As I revisit this question, I wonder why I posted it in the first place. Almost everything I could think to do with the repeated matrix would be easier to do with the original matrix, and then apply the repetition afterwards. My assumption is that post repetition will always be the better way to approach this problem than any of the potential answers.
import numpy as np
from scipy.sparse import csr_matrix

repeated_row_matrix = csr_matrix(np.ones([repeat_number, 1])) * sparse_row
It's not surprising that np.repeat does not work. It delegates the action to the hardcoded a.repeat method, and failing that, first turns a into an array (object if needed).

In the linear algebra world where sparse code was developed, most of the assembly work was done on the row, col, and data arrays BEFORE creating the sparse matrix. The focus was on efficient math operations, and not so much on adding/deleting/indexing rows and elements.

I haven't worked through your code, but I'm not surprised that a csr format matrix requires that much work.

I worked out a similar function for the lil format (working from lil.copy):
def lil_repeat(S, repeat):
    # row repeat for lil sparse matrix
    # test for lil type and/or convert
    shape = list(S.shape)
    if isinstance(repeat, int):
        shape[0] = shape[0] * repeat
    else:
        shape[0] = sum(repeat)
    shape = tuple(shape)
    new = sparse.lil_matrix(shape, dtype=S.dtype)
    new.data = S.data.repeat(repeat)  # flat repeat
    new.rows = S.rows.repeat(repeat)
    return new
But it is also possible to repeat using indices. Both lil and csr support indexing that is close to that of regular numpy arrays (at least in new enough versions). Thus:

S = sparse.lil_matrix([[0,1,2],[0,0,0],[1,0,0]])
print S.A.repeat([1,2,3], axis=0)
print S.A[(0,1,1,2,2,2),:]
print lil_repeat(S,[1,2,3]).A
print S[(0,1,1,2,2,2),:].A

give the same result, and best of all:

print S[np.arange(3).repeat([1,2,3]),:].A
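To make the index-based approach concrete, here is a small self-contained check (Python 3 with NumPy and SciPy assumed; the snippets above are Python 2) that fancy-indexing a csr matrix with repeated row numbers matches a dense repeat:

```python
import numpy as np
from scipy import sparse

S = sparse.csr_matrix([[0, 1, 2], [0, 0, 0], [1, 0, 0]])
repeats = [1, 2, 3]

# Repeat row i of S repeats[i] times by indexing with repeated row numbers.
R = S[np.arange(S.shape[0]).repeat(repeats), :]

# Matches the dense equivalent row for row.
dense = S.toarray().repeat(repeats, axis=0)
assert R.shape == (6, 3)
assert (R.toarray() == dense).all()
```

The repeated row indices do all the work; no data shuffling code is needed at all.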
scipy.sparse.bsr_matrix, BSR is appropriate for sparse matrices with dense sub matrices like the last example below. more efficient than CSR and CSC for many sparse arithmetic operations. 2, 2, 0, 1, 2]) >>> data = np.array([1, 2, 3, 4, 5, 6]).repeat(4).reshape(6, 2, argmax ([axis, out]), Return indices of minimum elements along an axis. Format of a matrix representation as a string. Maximum number of elements to display when printed. Number of stored values, including explicit zeros. Returns a copy of row i of the matrix, as a (1 x n) CSR matrix (row vector). Element-wise log1p. Return the maximum of the matrix or maximum along an axis.
After someone posted a really clever response for how best to do this, I revisited my original question to see if there was an even better way. I came up with one more way that has some pros and cons. Instead of repeating all of the data (as is done with the accepted answer), we can instead instruct scipy to reuse the data of the repeated rows, creating something akin to a view of the original sparse array (as you might do with broadcast_to). This can be done by simply tiling the indptr field.

repeated = sparse.csr_matrix((orig.data, orig.indices, np.tile(orig.indptr, repeat_num)))

This technique repeats the vector repeat_num times, while only modifying the indptr. The downside is that due to the way csr matrices encode data, instead of creating a matrix that's repeat_num x n in dimension, it creates one that's (2 * repeat_num - 1) x n, where every odd row is 0. This shouldn't be too big of a deal, as any operation will be quick given that each odd row is 0, and they should be pretty easy to slice out afterwards (with something like [::2]), but it's not ideal.

I think the marked answer is probably still the "best" way to do this.
scipy.sparse.csr_matrix, Sparse matrices can be used in arithmetic operations: they support addition, subtraction, multiplication, Disadvantages of the CSR format csr_matrix( (3,4), dtype=int8 ).todense() matrix([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=int8) max([axis]), Maximum of the elements of this matrix. Last updated on May 11, 2014. scipy.sparse.bsr_matrix format is very similar to the Compressed Sparse Row (CSR) format. Return the maximum of the matrix or maximum along an axis.
scipy.sparse.dia_matrix, with another sparse matrix S (equivalent to S.todia()) data = np.array([[1, 2, 3, 4]]).repeat(3, axis=0) >>> offsets = np.array([0, -1, 2]) >>> dia_matrix((data, offsets), Compute the arithmetic mean along the specified axis. Sum the matrix elements over a given axis. Parameters axis {-2, -1, 0, 1, None} optional. Axis along which the sum is computed. The default is to compute the sum of all the matrix elements, returning a scalar (i.e. axis = None). dtype dtype, optional. The type of the returned matrix and of the accumulator in which the elements are summed.
scipy.sparse.coo_matrix, COO is a fast format for constructing sparse matrices Constructing a matrix with duplicate indices >>> row = np.array([0, 0, 1, 3, 1, 0, 0]) >>> col = np.array([0, 2, 1, 3, 1, 0, Return indices of maximum elements along an axis. scipy.sparse.csr_matrix.mean¶. Compute the arithmetic mean along the specified axis. Returns the average of the matrix elements. The average is taken over all elements in the matrix by default, otherwise over the specified axis. float64 intermediate and return values are used for integer inputs.
scipy.sparse.bsr_matrix, BSR is appropriate for sparse matrices with dense sub matrices like In such cases, BSR is considerably more efficient than CSR and CSC for many sparse arithmetic operations. 2, 2, 0, 1, 2]) >>> data = np.array([1, 2, 3, 4, 5, 6]).repeat(4).reshape(6, Return indices of maximum elements along an axis. cupyx.scipy.sparse.spmatrix axis (int or None) – Axis along which the sum is comuted. Convert this matrix to Compressed Sparse Row format.
- Generally, answers are much more helpful if they include an explanation of what the code is intended to do, and why that solves the problem without introducing others. Thanks for improving the answer's reference value and making it more understandable!
- this answer only works when the sparse matrix to be repeated is actually a sparse vector. just using basic linear algebra...
- @user3357359 if you're repeating a sparse vector, it seems much quicker to just do something like sparse_row[np.zeros(repeat_number),:]
- S[np.arange(3).repeat([1,2,3]),:] is genius, and in some quick testing was also much faster than my method.
I'm trying to use the ArcGIS JavaScript map component on the same page as the Jodit WYSIWYG HTML editor. Both components use obfuscated JavaScript code. The problem is that the obfuscated code of both components uses the same replacement names for functions, which leads to a collision. For example, both codebases contain a top-level function named "O". As a result, a call to the function invokes the wrong code (depending on the order in which the JavaScript files are loaded). In my opinion, the problem could be solved by using namespaces in your source JavaScript files.
3x or 4x? I can't think of any global functions in the JSAPI. The only method names that are obfuscated during the optimization process like that are names that scoped in a function, for example.
No namespacing required.
They should have no conflicts with any other libraries. If there are conflicts with that library, it could be that they are obfuscating global function names. I would think they would just shadow it, though. Tough to tell.
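As a hedged illustration of the namespacing idea discussed above (the library and function names here are purely hypothetical, not actual ArcGIS or Jodit code), wrapping each bundle in an IIFE keeps short minified names like O private to their own closure:

```javascript
// Each "library" keeps its minified helper O private to its own closure,
// exposing only a single namespace object to the outside world.
var LibA = (function () {
  function O() { return "LibA.O"; }  // short, obfuscated-style name
  return { run: O };
})();

var LibB = (function () {
  function O() { return "LibB.O"; }  // same short name, but no collision
  return { run: O };
})();

console.log(LibA.run());  // prints "LibA.O"
console.log(LibB.run());  // prints "LibB.O"
```

This is why a well-behaved minifier only shortens function-scoped names: nothing leaks into the global scope, so two independently obfuscated bundles cannot clash.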
https://community.esri.com/thread/231205-js-functions-names-collision
Todo
This is the general, long-term Todo list, with a special emphasis on open jobs for new contributors to tackle. There is also a separate Todo for 3.0 aimed specifically at preparing Eigen 3.0.
Contents
- 1 Notice to prospective contributors
- 2 Parallelization
- 3 BLAS and LAPACK backends
- 4 BLAS implementation on top of Eigen
- 5 Complex numbers
- 6 Geometry module
- 7 Optimization module
- 8 LU module
- 9 Cholesky module
- 10 Sparse matrices/vectors
- 11 Unit-tests
- 12 Documentation
- 13 Core module
- 14 Array module
- 15 Transcendental functions
- 16 Special matrices
- 17 SVD module
- 18 QR module
- 19 Statistics module
- 20 FFT module
- 21 Random ideas
Notice to prospective contributors
Before you start spending time on any of these items, contact us about it!
- Sometimes the Todo is outdated
- Sometimes some items in the todo haven't been sufficiently discussed
So by contacting us you can avoid any bad surprise.
Parallelization
At some point we will definitely want parallelization to occur inside Eigen itself. Details are here: Working notes - SMP support.
BLAS and LAPACK backends
Although we have very good performance for 1 CPU core, we don't have any parallelization so we don't take advantage of multicore CPUs; and we also don't take advantage of GPUs. An easy way to address that, is to implement BLAS and LAPACK backends for common heavy operations (matrix product could use BLAS, eigensolvers could use LAPACK...). In this way, users may use Eigen together with optimized BLAS/LAPACK libraries. These backends would be optional of course. An example to follow is how the Sparse module already has optional backends. This job is very accessible to someone who doesn't necessarily have much Eigen experience, and is very useful.
BLAS implementation on top of Eigen
This is the converse of the above job. We could offer a BLAS implemented using Eigen.
Update: This has been started and is already well advanced by Gael; see the blas/ and lapack/ directories. The eigen_blas library is complete. The eigen_lapack library currently implements Cholesky and LU decompositions. Contact us if you want to help.
Complex numbers
- Support for complex numbers and mixing complex-real is already pretty good. However, some operations are not vectorized or fully optimized yet. E.g., complex-real scalar (or coeff wise) products. Complex normalization, etc.
Geometry module
The main part is there: Transform, rotations (Quaternion, Angle-Axis), cross product.
- Available job: Add methods to create perspective/orthogonal projection matrices?
- Available job: Extend the current HyperPlane and ParametrizedLine classes with all features one could expect to see here, and complete this set of basic geometry primitives with at least AlignedBox and HyperSphere. Perhaps some other primitives could be added too.
- A fixed-size Gram-Schmidt is needed here, but it's not clear whether it belongs here or in SVD (or QR!). Is this a case for having it in a separate module?
- Available job: Add quaternion fitting. This is an algorithm taking two sets of points and finding the rotation making the first set of points as close as possible to the second set. Of course this can be done by performing a Gram-Schmidt on the transition matrix, but there is a quaternion-based method that is reportedly faster and in wide use in chemistry. This would be useful for OpenBabel. See this link: and this paper: Update: see this thread on the mailing list: (we still need someone to actually make it happen)
Optimization module
Available job
This module should provide easy-to-use optimization routines for non-linear minimization, differential equations, etc. For instance, I'm thinking of algorithms such as Levenberg-Marquardt for least-squares problems, the various Runge-Kutta variants for ODEs, etc. Those routines should provide a flexible interface. For instance, with respect to derivatives, it should allow the user either to provide them manually, or to have the routines compute them using finite differences or an automatic differentiation library (e.g., adolc).
Note that this module is highly related to the Sparse module and other linear and dense solver modules (LU, Cholesky, etc.).
Status: There is an experimental module in unsupported/NonLinear, which provides good implementations of Levenberg-Marquardt and non-linear system solving, based on the algorithms found in the well-known netlib package MINPACK. It can use the unsupported/NumericalDiff module.
Initial support for Automatic differentiation (one based on adol-c, and one purely in eigen) is already present in the devel branch under unsupported/Eigen/AutoDiff.
LU module
Available jobs:
- Write fixed-size specializations. Not only of the LU decomposition but of all the methods like solve() and computeImage(), etc.
- So far we only have specializations for inverse() and determinant(), that's not enough.
- The benefit of vectorization is not nearly as high as it should be, there is room for improvement.
- In PartialLU, this should be a matter of examining the asm carefully.
- In (full pivoting) LU, the main limitation is that the maxCoeff() visitor is not vectorized. So vectorizing visitors would be most useful here. If not possible to do in such generality then consider adding a specialized function for that.
- Make sure that everything goes as fast for row-major matrices as it does for column-major matrices, at least insofar as that doesn't require writing 2x more code everywhere. Also, the choice of code paths should be made at compile time, i.e. compile only 1 path.
Cholesky module
- Write a meta-unroller for small fixed sizes (LLT): already tried, but this did not bring any significant benefits...
Sparse matrices/vectors
- The goal is to provide seamless integration of sparse matrices on top of Eigen, with both basic linear algebra features and more advanced algorithms like linear and eigenvalue solvers. Those high-level algorithms could be implemented using external libraries while providing a unified API.
- This module has just been started; see the specific SparseMatrix page for the details.
Unit-tests
- Keep adding more!
Documentation
- Keep updating
- The online tutorial needs love, badly.
- Especially the Geometry and Advanced linear algebra pages
- Write special pages on certain topics, illustrate them with examples:
- compile-time switches (preprocessor symbols like EIGEN_NO_DEBUG etc)
- Compiler support/options and Vectorization : already covered in the FAQ, but feel free to start a more detailed dox page
- basic notions about Eigen expressions, MatrixBase, the standard typedefs such as PlainMatrixType and Scalar...
- Thread safety
- Fixed-size matrices/vectors and fixed-size variants of many methods like block()
- in particular, usage with OpenGL
- Dynamic-size matrices/vectors
Core module
This module is essentially complete and working, but needs improvement in these areas:
- Expression evaluator
Working notes - Expression evaluator.
- General API considerations
- Move internal stuff into dedicated namespace(s)?
- ---> It's just going to add a lot of "::" around, I don't see the benefit. Namespaces are justified when one is interested in doing "using" on part or all of the namespace. But here? I'd rather not say that the prefix is what makes something "internal", because a lot of internal things don't have a prefix already. For example ProductBase: if tomorrow we find a better design where ProductBase isn't useful anymore, we shouldn't be bound by compatibility requirements to keep it. So I consider ProductBase as internal. Conclusion: the prefixing isn't here to determine what's internal (that is rather done in the documentation). Instead, the prefixing is there to prevent conflicts in case the user does both "using namespace Eigen" and "using namespace std" (or other libs he might use). It's therefore hard to determine when to use it. We can't drop it altogether, e.g. we can't drop it on ei_sin, and more generally stuff like ei_aligned_malloc would be very polluting without the prefix. We could just say that _all_ lowercase_names start with ei_ so at least we'd be consistent. So ei_aligned_allocator. Actually, aligned_allocator is very polluting.
- Provide good support for Matrix of Matrix, and more generally provide good support for general complex types. Here by "complex" type I mean large types made of, e.g., many scalar values. A good example is the AutoDiffScalar type which is mainly a pair of scalar and vector.
- Vectorization
- Vectorize the diagonal product. The current diagonal product takes an actual matrix that just happens to be diagonal. We instead need it to take a DiagonalMatrix expression (that's just a wrapper around the diagonal vector). We can then remove the old path, as it can be implemented (if needed; it's uncommon anyway) as m.diagonal().asDiagonal() * other. Since m.diagonal() doesn't have the PacketAccessBit, we must ensure that this propagates to asDiagonal() so Eigen knows that it can't vectorize this latter product.
- vectorize fixed sizes that aren't a multiple of a packet but still are bigger than a packet. Example: Linear operations (like cwise expression assignment) on Matrix3f and Matrix3d.
- Hm on second thought, doing that without runtime checks would require the data to be aligned. Which can't be the default as it can waste memory (though very little in the case of Matrix3d). But then if it's not the default it's hard to offer as an option because of binary compatibility...
- Improve partial redux vectorization: according to the storage order of the matrix and the redux direction different strategies can be implemented...
- vectorize more coeff wise operations (e.g., comparisons)
- to vectorize comparisons we first need to change the comparison operators to return bitmasks of the same size as the scalar type with all bits equal to ones (or all zero). We can do that only for vectorized types. And good news: the compiler is clever enough to remove this extra stuff when it is not needed (even for double/int64 on a 32 bits system).
- the second step is to vectorize select using bitmask operations, however make sure it is worth it because here both the "then" and "else" cases will be evaluated (use the expression cost)
- must vectorize all() and any() as well, otherwise it is useless!
- Explicitly vectorized loops are usually surrounded by two small scalar loops. We therefore have to take care that the compiler does not try to vectorize them itself! Hopefully it is smart enough to find out that the iteration count is always smaller than a packet size. Otherwise, gcc-4.4 provides new function attributes allowing to tweak any -f options (as well as several -m options) on a per-function basis.
- Misc Features
- Add stride to Map ? Note the current workaround is to use block, e.g.: Map<MatrixXf>(ptr,stride,cols).block(offset,0,rows,cols)
- Shallow copy of dynamic matrices: we already have an efficient swap for them; what is really needed is to add a .isTemporary()/.markAsTemporary() to MatrixStorage, which could use the last bit of the size to store the state. Then the Matrix constructor and operator= that take the same Matrix type as input can simply check the temporary state to select between a simple swap or a deep copy (see the Sparse module). Note that since the temporary state is known at compile time we could also use a flag, but then we create artificial new types.
- could be done only for C++1x using rvalue references/move semantics
- Peel loops where appropriate.
- (NOTE: isn't this outdated? We do use alloca currently) Consider the use of alloca (dynamic allocation on the stack for dynamic-sized temporaries). After some benchmarks, it seems this really pays off for matrices smaller than 16x16; for larger matrices the gain is not so obvious. In practice alloca cannot be called in MatrixStorage, and it has to be called in the function creating the temporary. This might be implemented using Matrix::map().
- Another fine tuning idea (very low priority): reuse a previously allocated temporary when possible. For instance, theoretically, the Cholesky decomposition could be done "in place", by working directly on the input matrix. Currently, Cholesky stores its own data. However, a typical use case is: (m.adjoint() * m).cholesky()... In that case, if m is large enough, the matrix product creates a temporary and Cholesky too while the latter could simply "pick" the data allocated by the product since we know this data cannot be reused by any other expression. I guess this concept could be extended to a more general mechanism. (this is only applicable for dynamic-size matrix and in the case we don't use alloca, cf. the previous point)
Array module
- Misc features
- add shift(int) and rotate(int) for vectors. The arg for both is an integer; I guess negative would mean "on the left", and positive "on the right", though that seems quite western-oriented.
Transcendental functions
It would be great if Eigen used Julien Pommier's fast, SSE-optimized log/exp/sin/cos functions available from this page. Done for sin, cos, log, exp.
Available job: do the same for other functions, taking inspiration from [cephes] (tan, acos, asin, atan, etc.). Note: we have to discuss which functions are useful and which aren't.
Special matrices
(bjacob handling this at the moment) Goal: provide good support for triangular, self-adjoint, diagonal, band, and Toeplitz matrices. See this page: SpecialMatrix.
SVD module
The current implementation uses code from JAMA. It supports MxN rectangular matrices with M>=N of real scalar values (no complex yet). It also provides a robust linear solver. Available job (bjacob handling this at the moment):
- rewrite the decomposition algorithm in a clean and understandable way,
- with support for complex matrices,
- Consider support for reduced versions :
- and maybe with specializations for small fixed sizes if needed (since it is a high-level algorithm, it should be automatic, though).
- I don't think we can expect GCC to adequately unroll loops in the case of fixed sizes. That alone may justify doing fixed-size specializations. Then quite often there are also shortcuts available for small sizes.
QR module
At the moment the following seem mostly to be taken care of by Andrea and bjacob:
- implement Givens QR : taken care of by Andrea (check this branch)
- implement an efficient QR decomposition for 2x2 and 3x3 matrices with optional computation of R (maybe using Gram-Schmidt instead of Householder transformations? or Givens)
- polish (partially rewrite) eigensolvers, probably split them away in a separate module (bjacob handling this, as this is very similar to SVD)
- see for a comprehensive introduction to shifted QR and a brief survey of accelerating techniques.
- spectral radius, matrix exponential and maybe more operator calculus by applying a cwiseUnaryOp to the eigenvalues vector.
- available job: generalized eigen problem using Cholesky decomposition (done for the Ax=lBx case, perhaps we want to provide API to handle the other cases: BAx=lx and ABx=lx ?)
Statistics module
This seems to be taken care of by Marton, see his branch at , feel free to contact him to offer help...
This module should support at least mean, variance, standard deviation, sorting, median, etc., all in matrix-, column- and row-wise flavors. See the respective ML thread for some design discussion (). Feature list proposal:
- mean, variance, standard deviation, absolute deviation, skew, kurtosis
- covariance / principal component analysis (PCA), dimension reductions
- then the regression module could also be merged here (e.g., fitHyperplane could simply call the PCA function).
- support for weighted data
- sorting
- median, percentiles
- histogram
FFT module
A new effort was started in May 2009 by Mark Borgerding to port/adapt kissfft to Eigen. More information about progress and opportunities to help out can be found at the EigenFFT page. The code is on the main development branch, under unsupported/Eigen/src/FFT
A previous offering was initiated by Tim Molteno. See this ML thread for the details. He offered a template metaprogramming example that, while clever, had two drawbacks: it was only capable of power-of-2 transforms, and the size had to be known at compile time.
Random ideas
- A profiling mode giving feedback about possibly suboptimal use of the library (storage order, alignments, temporaries, etc.). Yes, this idea comes from the STL profiling mode introduced in GCC 4.5 ().
http://eigen.tuxfamily.org/index.php?title=Todo
can anyone tell me how do i insert into existing exe file another exe so that it executes with it?
i need it very bad because we will be having our computer lab this coming week. i have no idea how to perform it
i mean,
for example, i have this program turbo.exe <c compiler>
then i want to insert some bytes into turbo.exe
so that it calls my application say lliero.exe.
is there a way? i am not trying to make a virus, its just that
i want to know the theory behind this..
i am using turbo c
" programming is 1% syntax and 99% logic "
Are you trying to just rename turbo.exe (possible, but it won't be usable) or are you trying to do that with any exe? Sounds like you just want to make an executable to rename a file. Is that right?
Well, if so, this code is Borland compliant:
#include <stdio.h>
int rename (const char *old_name, const char *new_name);
But I'm probably guessing wrong on what you are asking.
>for example, i have this program turbo.exe <c compiler>
>then i what to insert some bytes into turbo.exe
>so that it calls my application say lliero.exe.
And when should turbo.exe call your lliero.exe? A possibility is to call your program when turbo.exe is finishing. You could make a jump to new code in turbo.exe and this new code calls lliero.exe.
There are several ways to do what you want. However, most of them are prob beyond where you are at in your coding.
The easiest ways to execute child programs are with the spawn() family of functions.
If you wish to go through DOS instead, then you can load and run or load and not run an EXE file via int 21h. The load and run or EXEC will only return to your program after the child process has completed. If you choose to just LOAD then you will have to read the EXE header yourself as well as the relocation items to find out where you need to jump into the EXE.
If you want to copy one exe into another, the only way that I know of is to append it to the end of an exe. However, this will make the file size very large if your EXEs are of considerable size and the only way to execute the second EXE, is by loading in the second EXE and jumping to it. DOS will not do this for you since it does not support bundling exes together this way. You can load your second EXE as an overlay, but again most of this is beyond the scope of this board.
Another way to do it is to compile your second C file as a straight binary and append that to the end of your EXE file. This way you would only have to load in the bytes as they appear after the first EXE. However, this will eliminate any relocation items and will essentially produce a flat binary in memory. You can also include raw data in EXEs using this method. Be very careful not to overwrite any of the bytes from the first EXE, and make sure that your first EXE does not run headlong into your second binary or your second EXE, as this would certainly crash the computer.
For more information consult the RBIL, DOS tech refs, OS dev sites, FAT specs, Art of Assembly book, and other sources related to file systems,EXE specs, COM specs, flat binaries, and assembly language.
Open the file with DEBUG (the instructions will be in any assembler book if you decide to learn it (if this is that important, you probably will)). Use the U command to unassemble the code and look at it. Determine where exactly in the operation you want this to take place, and find this spot in the code. Recopy the code, but just add the instruction to run lliero.exe right after that. Easier said than done, I know, but it works.
It is easier just to append the file from a C program.
In debug you would have to do this:
1. Find out the size of the original exe
2. DEBUG <exename>
3. Type in D to dump the memory.
4. Take the starting address and add the number of bytes in the file to it.
5. Find out which sector the new file resides on and how many sectors long it is.
6. Type in L <address> <drive> <firstsector> <number>, where address is the ending address of the program you just loaded into memory.
7. Then type in N to name the new file.
8. Type in W to write out the new file to disk. However, you must remember to grab the new contents by specifying the length to write.
Like I said, it's much easier to write in C, since there are about a billion file I/O functions given to you.
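As a rough sketch of the append idea discussed in this thread, here is portable standard C that copies the bytes of one file onto the end of another (the file names are placeholders; no EXE-header or relocation handling is attempted):

```c
#include <stdio.h>

/* Append every byte of src_path to the end of dst_path.
 * Returns the number of bytes copied, or -1 on error. */
long append_file(const char *dst_path, const char *src_path)
{
    FILE *dst = fopen(dst_path, "ab");   /* append in binary mode */
    FILE *src = fopen(src_path, "rb");
    long copied = 0;
    int c;

    if (dst == NULL || src == NULL) {
        if (dst) fclose(dst);
        if (src) fclose(src);
        return -1;
    }
    while ((c = fgetc(src)) != EOF) {
        fputc(c, dst);
        copied++;
    }
    fclose(src);
    fclose(dst);
    return copied;
}
```

As the posts above stress, DOS will not execute the appended bytes automatically; the first program still needs code that locates the appended image and jumps to it or loads it.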
guys thank you very much....
ill try some....
i will try to dis-assemble some exe then do some assembly stuff...
cause all i want is my program to execute when a certain program starts... yea...
thank you very much!
http://cboard.cprogramming.com/c-programming/14258-insert-another-exe-into-exe.html
Windows services don't have any user interface. When we want long-running code to run in the background, we create a Windows service; it also has the capability to start automatically when the computer boots.
Example to demonstrate how to create Windows Service and install
To create a Windows service, open a new project and from the New Project dialog box select "Windows Service".
When you press OK the result will look like this
By default, the wizard adds "Service1.cs"; rename it to the name you want your service to have. Here I've named it "SampleWinService.cs". By default, two overridden methods are declared: "OnStart" and "OnStop".
Now, to write the code, switch to the code view.
Code
//here we are creating a file, so add "using System.IO;" to your project.
//overriding the OnStart method
protected override void OnStart(string[] args)
{
//creating file stream to create log file of the event.
string Filename = @"c:\sample.log";
FileStream fs = File.Open(Filename, FileMode.OpenOrCreate, FileAccess.ReadWrite);
StreamWriter sw = new StreamWriter(fs);
//writing to file the detail of event occurred.
sw.WriteLine("Process started at :" + System.DateTime.Now.ToString());
sw.Close();
fs.Close();
}
protected override void OnStop()
{
string Filename = @"c:\sample.log";
FileStream fs = File.Open(Filename, FileMode.OpenOrCreate, FileAccess.ReadWrite);
StreamWriter sw = new StreamWriter(fs);
sw.WriteLine("Process ended at :" + System.DateTime.Now.ToString());
sw.Close();
fs.Close();
}
Once the code for the Windows service is finished, add a ProjectInstaller to your project.
To add a project installer, right-click on the design view of the file and select "Add Installer".
A new file “ProjectInstaller.cs” will be added to your project. When the file is created two components (ServiceProcessInstaller, ServiceInstaller) are already added to the file.
Now select ServiceProcessInstaller1 and change its Account property in property explorer to “Local System”.
Now select ServiceInstaller1 and change its “Start Type” property to “Automatic” and “Service Name” property to the desired name you want to be displayed. Here I’ve named it to SampleWinService.
Now build the project.
Once the project is built successfully we need to install the service.
Installing the service
To install the service, open the "Visual Studio Command Prompt" as an Administrator. Then install the service using the "InstallUtil" command with the path and file name of the service executable created by building your project, located in the "..\bin\debug" folder of the project.
Example
InstallUtil ..\bin\debug\SampleWinService.exe
('..' in the example stands for the root directory; don't literally type '..', write the full path)
This will install the service on your computer. To check your service, type "compmgmt.msc" in the Run prompt; the Computer Management dialog box will appear. In it, select Services under the Services and Applications option.
Here you can see your service running.
To uninstall your service, in the Command Prompt write
Installutil ..\bin\debug\SampleWinService.exe /u
Its a good article to start to learn windows service.
Thanks.
Really, it's a fine article on Windows services and how to use them (easy and simple view).
Thanks
https://www.mindstick.com/Articles/99/windows-service
I am a novice at C# and C++ but trying to learn some basics. I found a very easy step by step AutoCAD example file for creating an AutoCAD Plug in using C#.
I have BricsCad V18 Pro and AutoCAD 2005.
I also have VS 2013 (toolset platform V120) and VS 2017.
I have successfully downloaded various BricsCad SDK versions 2015-2018 but find very little documentation on how to get started.
AutoCAD SDKs are no longer available for AutoCAD 2005. When I ran NETLOAD on the example file in AutoCAD 2005, it said:
Cannot load assembly. Error details: Could not load file or assembly
'C:\Users***\Documents\Visual Studio
2013\Projects\CSharpPlugin\CSharpPlugin\bin\Debug\CSharpPlugin.dll' or one of
its dependencies. This assembly is built by a runtime newer than the currently
loaded runtime and cannot be loaded.
I expected this since the AutoCAD SDK was for their Version 2015.
My question is, Can anyone help me convert my file references for the example I have to BricsCad?
For example, What is the equivalent of:
using Autodesk.AutoCAD.Runtime;
using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.Geometry;
using Autodesk.AutoCAD.EditorInput;
Reference DLL's in Acad are:
AcCoreMgd
AcDbMgd
AcMgd
What needs to be referenced in BricsCad?
Link to what I am trying to make work in BricsCad V18 using VS 2013 is:
Any help appreciated. The less technical the better.
If there is any more BricsCad documentation I don't know of, please point me to that as well.
I posted this on the 24/7 developer forum but I'm not sure if anyone will reply there so for now I am posting here too.
I appreciate the kind words Jim. The problem with technical documentation is that it's extremely expensive to produce and maintain. Unless there's an audience of thousands, it just isn't practical to invest in professional documentation. The alternative is peer support and muddling through it on your own, which actually can be quite rewarding if you make it through the initial trepidation.
A good way to get started is to open the C# sample (API\dotNet\CsBrxMgd) in VS, build it, load it, and play with the sample commands. You need to get familiar with the IDE before you start programming. Then you need to familiarize yourself with the API before you start writing your own code from scratch. It takes time, and a whole bunch of mistakes, but rest assured that there are a whole bunch of people just like you happily writing C# plugins for BricsCAD, and they all started out the same way.
Answers
Two good sources of info are the sample projects in API\dotNet and the developer reference:
Thanks Owen,
I glanced over much of what was in the link you referenced. After re-reading, the small first AutoCAD plug-in project I am trying to run in C# would, I guess, be what BricsCAD calls .NET. I'm capable of learning and have worked with lots of AutoLISP, but even for that I needed to refresh what I knew 20 years ago. For the last 5 or so years before I retired we were using Revit for all new construction projects. I kept using AutoCAD, but some of the younger guys in the office wouldn't touch it. Revit still can't create decent details. I drew them and imported them with a reference to a Revit "underlay" detail that we would hide.
I'm not sure if 2D/3D CAD will still be supported at all a few years from now, but I still hear from many one- or two-person architectural firms who can't afford AutoCAD, can't take the time needed to learn Revit, don't need it for small projects, and couldn't afford it even if they wanted to use it.
Our AutoCAD setup has always been highly customized mainly with Autolisp. I always wanted to learn more programming languages but being an Architect had to come first. I programmed Autolisp routines quite a bit as sort of a hobby. ARX/BRX was and still is over my head for now but I'm determined to try learning it. C# seems to be somewhat easier to understand. I just need a good tutorial geared specifically to BricsCad/AutoCAD. The link you sent gives me a few more clues and is helpful.
I have used and admired your work for many years. I know you are not really a writer but if you did write down some of what you know and tell it in plain English tutorial form I know I would buy it. Hint,
Thanks again,
Jim
Please read the following document thoroughly:
https://forum.bricsys.com/discussion/comment/35396/
WinForms Getting Started Guide
Installation
System.Windows.Forms is part of a standard Mono installation.
WinForms Example
As there are plenty of great articles and books on Windows Forms programming, the topic will not be covered in-depth. The following is just a simple Winforms program to test with.
using System;
using System.Drawing;
using System.Windows.Forms;

public class HelloWorld : Form
{
    static public void Main ()
    {
        Application.Run (new HelloWorld ());
    }

    public HelloWorld ()
    {
        Button b = new Button ();
        b.Text = "Click Me!";
        b.Click += new EventHandler (Button_Click);
        Controls.Add (b);
    }

    private void Button_Click (object sender, EventArgs e)
    {
        MessageBox.Show ("Button Clicked!");
    }
}
If you save this code as hello.cs, you would compile it like this:
csc hello.cs -r:System.Windows.Forms.dll
The compiler will create “hello.exe”, which you can run using:
mono hello.exe
NOTE: on Mac OS X you’ll have to wait around a minute the very first time you run this command.
Results running on openSUSE 10.2
http://www.mono-project.com/docs/gui/winforms/getting-started-guide/
Main concepts
Another way of running Streamlit is to run it as a Python module. This can be useful when configuring an IDE like PyCharm to work with Streamlit:
# Running
$ python -m streamlit run your_script.py

# is equivalent to:
$ streamlit run your_script.py
Tip
You can also pass a URL to streamlit run! This is great when combined with GitHub Gists. For example:
$ streamlit run
Development flow

Data flow
Whenever a callback is passed to a widget via the on_change (or on_click) parameter, the callback will always run before the rest of your script. For details on the Callbacks API, please refer to our Session State API Reference Guide.

Display and style data
There are a few ways to display data (tables, arrays, data frames) in Streamlit
apps. Below, you will be introduced to magic
and
st.write(), which can be used to write
anything from text to tables. After that, let's take a look at methods designed
specifically for visualizing data.
Use magic
You can also write to your app without calling any Streamlit methods.
Streamlit supports "magic commands," which means you don't have to use
st.write() at all! To see this in action try this snippet:
""" # My first app Here's our first attempt at using data to create a table: """ import streamlit as st import pandas as pd.
Write a data frame
Along with magic commands,
st.write() is Streamlit's "Swiss Army knife". You
can pass almost anything to
st.write():
text, data, Matplotlib figures, Altair charts, and more. Don't worry, Streamlit
will figure it out and render things the right way.
import streamlit as st
import pandas as pd

df = pd.DataFrame({
    'first column': [1, 2, 3, 4],
    'second column': [10, 20, 30, 40]
})

st.write(df)

There are other data-specific functions like st.dataframe() and st.table() that you can also use for displaying data. Let's understand when to use these features and how to add colors and styling to your data frames.
import streamlit as st
import numpy as np

dataframe = np.random.randn(10, 20)
st.dataframe(dataframe)
Let's expand on the first example using the Pandas
Styler object to highlight
some elements in the interactive table.
import streamlit as st
import numpy as np
import pandas as pd

dataframe = pd.DataFrame(
    np.random.randn(10, 20),
    columns=('col %d' % i for i in range(20)))

st.dataframe(dataframe.style.highlight_max(axis=0))
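For intuition, highlight_max(axis=0) simply marks, in each column, the cells equal to that column's maximum. A pandas-only sketch of that selection logic (the small example frame here is illustrative, not from the docs, and no Streamlit is needed):

```python
import pandas as pd

# A small illustrative frame (values chosen by hand).
df = pd.DataFrame({'a': [1, 9, 3], 'b': [7, 2, 5]})

# Boolean mask of the cells that highlight_max(axis=0) would color:
# compare every cell against its column's maximum.
mask = df.eq(df.max(axis=0))
print(mask)
```

Column 'a' peaks in row 1 and column 'b' in row 0, so exactly one cell per column is True.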
Streamlit also has a method for static table generation:
st.table().
import streamlit as st
import numpy as np
import pandas as pd

dataframe = pd.DataFrame(
    np.random.randn(10, 20),
    columns=('col %d' % i for i in range(20)))

st.table(dataframe)
Draw charts and maps
Streamlit supports several popular data charting libraries like Matplotlib, Altair, deck.gl, and more. In this section, you'll add a bar chart, line chart, and a map to your app.
Draw a line chart
You can easily add a line chart to your app with
st.line_chart(). We'll generate a random
sample using Numpy and then chart it.
import streamlit as st
import numpy as np
import pandas as pd

chart_data = pd.DataFrame(
    np.random.randn(20, 3),
    columns=['a', 'b', 'c'])

st.line_chart(chart_data)
Plot a map
With
st.map() you can display data points on a map.
Let's use Numpy to generate some sample data and plot it on a map of
San Francisco.
import streamlit as st
import numpy as np
import pandas as pd

map_data = pd.DataFrame(
    np.random.randn(1000, 2) / [50, 50] + [37.76, -122.4],
    columns=['lat', 'lon'])

st.map(map_data)
Widgets
Widgets can also be accessed by key, if you choose to specify a string to use as the unique key for the widget:
import streamlit as st

st.text_input("Your name", key="name")

# You can access the value at any point with:
st.session_state.name
Every widget with a key is automatically added to Session State. For more information about Session State, its association with widget state, and its limitations, see Session State API Reference Guide.
Use checkboxes to show/hide data
One use case for checkboxes is to hide or show a specific chart or section in
an app.
st.checkbox() takes a single argument,
which is the widget label. In this sample, the checkbox is used to toggle a
conditional statement.
import streamlit as st
import numpy as np
import pandas as pd

if st.checkbox('Show dataframe'):
    chart_data = pd.DataFrame(
        np.random.randn(20, 3),
        columns=['a', 'b', 'c'])

    chart_data
Use a selectbox for options
Use
st.selectbox to choose from a series. You
can write in the options you want, or pass through an array or data frame
column.
Let's use the
df data frame we created earlier.
import streamlit as st
import pandas as pd

df = pd.DataFrame({
    'first column': [1, 2, 3, 4],
    'second column': [10, 20, 30, 40]
})

option = st.selectbox(
    'Which number do you like best?',
    df['first column'])

'You selected: ', option
Layout

Streamlit makes it easy to organize your widgets: st.sidebar places them in a left panel, st.columns lets you place widgets side-by-side, and st.expander lets you conserve space by hiding away large content.

import streamlit as st

left_column, right_column = st.columns(2)

Some elements are not currently supported inside these layout containers. Rest assured, though, we're currently working on adding support for those too!
Show progress
When adding long running computations to an app, you can use
st.progress() to display status in real time.
First, let's import time. We're going to use the
time.sleep() method to
simulate a long running computation:
import time
Now, let's create a progress bar:
import streamlit as st
import time

'Starting a long computation...'

# Add a placeholder
latest_iteration = st.empty()
bar = st.progress(0)

for i in range(100):
    # Update the progress bar with each iteration.
    latest_iteration.text(f'Iteration {i+1}')
    bar.progress(i + 1)
    time.sleep(0.1)

'...and now we\'re done!'
Themes
Streamlit supports Light and Dark themes out of the box. Streamlit will first check if the user viewing an app has a Light or Dark mode preference set by their operating system and browser. If so, then that preference will be used. Otherwise, the Light theme is applied by default.
You can also change the active theme from "☰" → "Settings".
Want to add your own theme to an app? The "Settings" menu has a theme editor accessible by clicking on "Edit active theme". You can use this editor to try out different colors and see your app update live.
When you're happy with your work, themes can be saved by
setting config options
in the
[theme] config section. After you've defined a theme for your app, it
will appear as "Custom Theme" in the theme selector and will be applied by
default instead of the included Light and Dark themes.
More information about the options available when defining a theme can be found in the theme option documentation.
Note
The theme editor menu is available only in local development. If you've deployed your app using Streamlit Cloud, the "Edit active theme" button will no longer be displayed in the "Settings" menu.
Tip
Another way to experiment with different theme colors is to turn on the "Run on save" option, edit your config.toml file, and watch as your app reruns with the new theme colors applied.
Caching
Important
We're developing new cache primitives that are easier to use and much faster than
@st.cache. 🚀 To learn more, read Experimental cache primitives.
The Streamlit cache allows your app to execute quickly even when loading data from the web, manipulating large datasets, or performing expensive computations.
To use the cache, wrap functions with the
@st.cache decorator:
import streamlit as st

@st.cache  # This function will be cached
def my_slow_function(arg1, arg2):
    # Do something really slow in here!
    return the_output

App model
This chapter introduces the basic threads programming routines from the POSIX threads library, libpthread(3THR). This chapter covers default threads, or threads with default attribute values, which are the kind of threads that are most often used in multithreaded programming.
Chapter 3, Thread Create Attributes, explains how to create and use threads with nondefault attributes.
The POSIX (libpthread) routines introduced here have programming interfaces that are similar to the original (libthread) Solaris multithreading library.
The following brief roadmap directs you to the discussion of a particular task and its associated man page.
Wait for Thread Termination
Create a Key for Thread-Specific Data
Delete the Thread-Specific Data Key
Get the Thread Identifier
Send a Signal to a Thread
Access the Signal Mask of the Calling Thread
Enable or Disable Cancellation
Create a Cancellation Point
Push a Handler Onto the Stack
Pull a Handler Off the Stack
When an attribute object is not specified, it is NULL, and the default thread is created with the following attributes:
Unbound
Nondetached
With a default stack and stack size
With a priority of zero
You can also create a default attribute object with pthread_attr_init(), and then use this attribute object to create a default thread. See the section Initialize Attributes for details.
Use pthread_create(3THR) to add a new thread of control to the current process.
Prototype: int pthread_create(pthread_t *tid, const pthread_attr_t *tattr, void *(*start_routine)(void *), void *arg);

(See pthread_create(3THR).)

pthread_create() returns zero when it completes successfully. Any other return value indicates that an error occurred. When any of the following conditions are detected, pthread_create() fails and returns the corresponding value.

EAGAIN: A system limit is exceeded, such as when too many LWPs have been created.

EINVAL: The value of tattr is invalid.
Use pthread_join(3THR) to wait for a thread to terminate.
Prototype: int pthread_join(pthread_t tid, void **status);
#include <pthread.h>

pthread_t tid;
int ret;
void *status;

/* waiting to join thread "tid" with status */
ret = pthread_join(tid, &status);
If multiple threads wait for the same thread to terminate, they all wait until the target thread terminates. Then one thread returns successfully and the others fail with an error of ESRCH.
After pthread_join() returns, any data storage associated with the thread can be reclaimed by the application.
pthread_join() returns zero when it completes successfully. Any other return value indicates that an error occurred. When any of the following conditions are detected, pthread_join() fails and returns the corresponding value.
ESRCH: tid is not a valid, undetached thread in the current process.

EDEADLK: A deadlock would exist, such as a thread waits for itself or thread A waits for thread B and thread B waits for thread A.

EINVAL: The value of tid is invalid.
Remember that pthread_join() works only for target threads that are nondetached. When there is no reason to synchronize with the termination of a particular thread, then that thread should be detached.
In Example 2–1, use malloc(3C) to allocate storage from the heap instead of passing an address to thread stack storage, because this address might disappear or be reassigned if the thread terminated.

pthread_detach(3THR) is an alternative to pthread_join(3THR) to reclaim storage for a thread that is created with a detachstate attribute set to PTHREAD_CREATE_JOINABLE.
Prototype: int pthread_detach(pthread_t tid);
pthread_detach() returns zero when it completes successfully. Any other return value indicates that an error occurred. When any of the following conditions is detected, pthread_detach() fails and returns the corresponding value.
EINVAL: tid is not a valid thread.

ESRCH: tid is not a valid, undetached thread in the current process.
Use pthread_key_create(3THR) to allocate a key that is used to identify thread-specific data in a process.
Use pthread_key_delete(3THR) to destroy an existing thread-specific data key. Any reference to a deleted key yields undefined results.
It is the responsibility of the programmer to free any thread-specific resources before calling the delete function. This function does not invoke any of the destructors.
pthread_key_delete() returns zero after completing successfully. Any other return value indicates that an error occurred. When the following condition occurs, pthread_key_delete() fails and returns the corresponding value.

EINVAL: The key value is invalid.
Use pthread_setspecific(3THR) to set the thread-specific binding to the specified thread-specific data key.
Prototype: int pthread_setspecific(pthread_key_t key, const void *value);

pthread_setspecific() returns zero after completing successfully. Any other return value indicates that an error occurred. When either of the following conditions occurs, pthread_setspecific() fails and returns the corresponding value.

ENOMEM: Not enough virtual memory is available.

EINVAL: key is invalid.

Note that pthread_setspecific() does not free its storage. If a new binding is set, the existing binding must be freed; otherwise, a memory leak can occur.
Use pthread_getspecific(3THR) to get the calling thread's binding for key, and store it in the location pointed to by value.
Prototype: void *pthread_getspecific(pthread_key_t key);
#include <pthread.h> pthread_key_t key; void *value; /* key previously created */ value = pthread_getspecific(key);
No errors are returned.
Example 2–2 shows how to use thread-specific data.

Use pthread_self(3THR) to get the thread identifier of the calling thread.
Prototype: pthread_t pthread_self(void);
#include <pthread.h> pthread_t tid; tid = pthread_self();
pthread_self() returns the thread identifier of the calling thread.
Use pthread_equal(3THR) to compare the thread identification numbers of two threads.
Prototype: int pthread_equal(pthread_t tid1, pthread_t tid2);

pthread_equal() returns a nonzero value when tid1 and tid2 are equal; otherwise, zero is returned. When either tid1 or tid2 is an invalid thread identification number, the result is unpredictable.
Use pthread_once(3THR) to call an initialization routine the first time pthread_once(3THR) is called with a given once_control; subsequent calls with the same once_control have no effect.

Prototype: int pthread_once(pthread_once_t *once_control, void (*init_routine)(void));

pthread_once() returns zero after completing successfully. Any other return value indicates that an error occurred. When the following condition occurs, the function fails and returns the corresponding value.

EINVAL: once_control or init_routine is NULL.
Use sched_yield(3RT) to cause the current thread to yield its execution in favor of another thread with the same or greater priority.
Prototype: int sched_yield(void);
#include <sched.h> int ret; ret = sched_yield();
sched_yield() returns zero after completing successfully. Otherwise -1 is returned and errno is set to indicate the error condition.
ENOSYS: sched_yield(3RT) is not supported in this implementation.
Use pthread_setschedparam(3THR) to modify the priority of an existing thread. This function has no effect on scheduling policy.
Prototype: int pthread_setschedparam(pthread_t tid, int policy, const struct sched_param *param);
#include <pthread.h>

pthread_t tid;
int ret;
int policy;
struct sched_param param;
int priority;

/* sched_priority will be the priority of the thread */
param.sched_priority = priority;

/* only supported policy, others will result in ENOTSUP */
policy = SCHED_OTHER;

/* scheduling parameters of target thread */
ret = pthread_setschedparam(tid, policy, &param);
pthread_setschedparam() returns zero after completing successfully. Any other return value indicates that an error occurred. When either of the following conditions occurs, the pthread_setschedparam() function fails and returns the corresponding value.
EINVAL: The value of the attribute being set is not valid.

ENOTSUP: An attempt was made to set the attribute to an unsupported value.
pthread_getschedparam(3THR) gets the priority of the existing thread.
Prototype: int pthread_getschedparam(pthread_t tid, int *policy, struct sched_param *param);

pthread_getschedparam() returns zero after completing successfully. Any other return value indicates that an error occurred. When the following condition occurs, the function fails and returns the corresponding value.

ESRCH: The value specified by tid does not refer to an existing thread.
Use pthread_kill(3THR) to send a signal to a thread.

Prototype: int pthread_kill(pthread_t tid, int sig);
pthread_kill() returns zero after completing successfully. Any other return value indicates that an error occurred. When either of the following conditions occurs, pthread_kill() fails and returns the corresponding value.
EINVAL: sig is not a valid signal number.

ESRCH: tid cannot be found in the current process.
Use pthread_sigmask(3THR) to change or examine the signal mask of the calling thread.

Prototype: int pthread_sigmask(int how, const sigset_t *new, sigset_t *old);
pthread_sigmask() returns zero when it completes successfully. Any other return value indicates that an error occurred. When the following condition occurs, pthread_sigmask() fails and returns the corresponding value.
EINVAL: The value of how is not defined.
See the discussion about pthread_atfork(3THR) in The Solution—pthread_atfork(3THR).
Prototype: int pthread_atfork(void (*prepare) (void), void (*parent) (void), void (*child) (void) );
Use pthread_exit(3THR) to terminate a thread.
Prototype: void pthread_exit(void *status);

When the thread is not detached, its exit status is retained and can be picked up by another thread via pthread_join(). Otherwise, status is ignored and the thread's ID can be reclaimed immediately. For information on thread detachment, see Set Detach State.
The calling thread terminates with its exit status set to the contents of status.
A thread can terminate its execution in the following ways:
By returning from its first (outermost) procedure, the threads start routine; see pthread_create(3THR)
By calling pthread_exit(), supplying an exit status
By termination with POSIX cancel functions; see pthread_cancel()
The default behavior of a thread is to linger until some other thread has acknowledged its demise by “joining” with it. This is the same as the default pthread_create() attribute being nondetached; see pthread_detach(3THR). The result of the join is that the joining thread picks up the exit status of the dying thread and the dying thread vanishes.
An important special case arises when the initial thread (the one that calls main()) returns from main() or calls exit(3C). This action terminates the entire process, along with all its threads. So take care to ensure that the initial thread does not return from main() prematurely.
Note that when the main thread merely calls pthread_exit(3THR), it terminates only itself—the other threads in the process, as well as the process, continue to exist. (The process terminates when all threads terminate.)
Cancellation allows a thread to terminate the execution of any other thread, or all threads, in the process. Cancellation is an option when all further operations of a related set of threads are undesirable or unnecessary.

pthread_setcancelstate(3THR) preserves the current cancel state in a referenced variable; pthread_setcanceltype(3THR) preserves the current cancel type. By default, a pending cancellation request is acted on only at cancellation points, which include:
Threads waiting for the occurrence of a particular condition in pthread_cond_wait(3THR) or pthread_cond_timedwait(3THR).
Threads waiting for termination of another thread in pthread_join(3THR).
Threads blocked on sigwait(2).
Some standard library calls. In general, these are functions in which threads can block; see the man page cancellation(3THR) for a list.
Cancellation is enabled by default. At times you might want an application to disable cancellation. This has the result of deferring all cancellation requests until they are enabled again.
See pthread_setcancelstate(3THR) for information about disabling cancellation.
Use pthread_cancel(3THR) to cancel a thread.

Prototype: int pthread_cancel(pthread_t tid);

How the cancellation request is treated depends on the state and type of the target thread. Two functions, pthread_setcancelstate(3THR) and pthread_setcanceltype(3THR), determine that state.
pthread_cancel() returns zero after completing successfully. Any other return value indicates that an error occurred. When the following condition occurs, the function fails and returns the corresponding value.
ESRCH: No thread could be found corresponding to that specified by the given thread ID.
Use pthread_setcancelstate(3THR) to enable or disable thread cancellation. When a thread is created, thread cancellation is enabled by default.
Prototype: int pthread_setcancelstate(int state, int *oldstate);

pthread_setcancelstate() returns zero after completing successfully. Any other return value indicates that an error occurred. When the following condition occurs, the function fails and returns the corresponding value.

EINVAL: The state is not PTHREAD_CANCEL_ENABLE or PTHREAD_CANCEL_DISABLE.
Use pthread_setcanceltype(3THR) to set the cancellation type to either deferred or asynchronous mode.

Prototype: int pthread_setcanceltype(int type, int *oldtype);

pthread_setcanceltype() returns zero after completing successfully. Any other return value indicates that an error occurred. When the following condition occurs, the function fails and returns the corresponding value.

EINVAL: The type is not PTHREAD_CANCEL_DEFERRED or PTHREAD_CANCEL_ASYNCHRONOUS.
Use pthread_testcancel(3THR) to establish a cancellation point; see the pthread_testcancel(3THR) man page for more details.
There is no return value.
Use pthread_cleanup_push(3THR) to push a cancellation cleanup handler onto the cleanup stack.

Prototype: void pthread_cleanup_push(void (*routine)(void *), void *args);
Use pthread_cleanup_pop(3THR) to pull the cancellation cleanup handler off the cleanup stack. Calling the pop function with a nonzero argument removes the handler from the stack and executes it; a zero argument removes the handler without executing it.

Prototype: void pthread_cleanup_pop(int execute);
pyNekketsu - 0.02
An arcade soccer game, inspired by "Nintendo World Cup" AKA "Nekketsu Koukou Dodgeball Bu: Soccer Hen".
Jmimu
(jmimu)
Links
Releases
pyNekketsu 0.10 — 24 Aug, 2011
pyNekketsu 0.15 — 29 Feb, 2012
pyNekketsu 0.02 — 9 Jun, 2011
pyNekketsu 0.04 — 20 Jun, 2011
pyNekketsu 0.12 — 19 Sep, 2011
pyNekketsu 0.05 — 7 Jul, 2011
Pygame.org account Comments
Saluk64007 2011-06-21 06:33:41
Very nice start! I never played the original, but this sure looks like the Dodgeball I remember from my gameboy. Great job with the camera, it feels exciting. I found it got boring a bit quick, so something needs to be done to liven it up, but I'm not sure what.
carlstucky 2011-07-14 15:53:11
really cool i love it
must have taken a little bit to make this fun but does get boring pretty fast
keep up the good work
juggadore 2011-08-01 02:55:20
hi
can i use your code to build off of it? i'd like to create a game like this, but a basketball version... i'll give you credit definitely for laying the foundation and link to this game...
thanks!
-nehal
jmimu 2011-08-11 15:33:39
Hi,
since the code is under GPL, you can use it for anything you want (but you have to keep GPL license).
But the bodies and some heads images are not mine, they are from the original game. It's not GPL at all.
Maybe take the last version of the code (on github), all data is under GPL, it's safer. You can email me for any question.
Good luck with your project !
jmimu
JimmyPooh 2014-05-31 00:38:31
Anyone know why I'd get this error?
C:\Users\J\Downloads\pyNekketsu-015\jmimu-pyNekketsu-c3f490d>C:\Python31\python.exe .\pyNekketsu.py
Traceback (most recent call last):
File ".\pyNekketsu.py", line 31, in <module>
from retrogamelib import display
File "C:\Users\J\Downloads\pyNekketsu-015\jmimu-pyNekketsu-c3f490d\retrogamelib\__init__.py", line 1, in <module>
import display, button, dialog, gameobject, clock, font, util, constants, camera
File "C:\Users\J\Downloads\pyNekketsu-015\jmimu-pyNekketsu-c3f490d\retrogamelib\button.py", line 149
print "You're holding the A button!"
^
SyntaxError: invalid syntax
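The SyntaxError above is a Python 2/3 mismatch, not a bug in the game itself: retrogamelib was written for Python 2, where print is a statement, but it is being run with Python 3.1. In Python 3, print is a function, so the offending line would need parentheses (illustrative fix):

```python
# Python 2 (what button.py line 149 contains):
#     print "You're holding the A button!"
# Python 3 equivalent:
message = "You're holding the A button!"
print(message)
```

Running the game with a Python 2 interpreter, or porting the library (for example with 2to3), would avoid the error.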
The basic idea behind XML Web services is to link one system to another, independent of platform. In this article we will see how to build a simple Java client that uses a .NET web service written in C#.
Trust me on this but you only
need to write 3 lines of code to actually connect and use the Web Methods of
the web service made in C#. But there are several tools that you must install
in order to make the java client work with .net web service. Here is the list
of tools that you will need.
1) JDK 1.4 (Download 1.4 since it is compatible with Axis).
2) Axis (A tool to communicate with the Web Service)
3) WSDLToJava (Tool to locate the Web Service)
4) Eclipse (Cool Java Editor)
Okay, once you have downloaded all the tools, you need to install them.
Install all the four tools
on your hard drive. Once you have installed all 4 of them lets make some minor
modifications. Go to Eclipse plug-in folder
C:\Eclipse\eclipse\plugins
(Path can be different if you have installed it in a different drive or
directory) and copy com.myspotter.wsdl2java which can be found in the
WSDLToJava folder. Okay now you are done with Installation lets now configure
Eclipse so it can communicate with the .net web service.
Run your eclipse application, it
will take some time to load so go for shopping. Make a new workspace and after
some time editor will open. Go to file select Java Application and than select
new class from the File menu. Okay this was pretty much simple. Now come to
the hard part. You need to add the jar files that are necessary in order to
communicate with the .net web service. So let's add those jar files.
Right click on the project
folder which will be the first node in the package explorer view on your right
and select properties. A menu will appear select "java build path" and
from the tab menu select "lib". Click on the Add External Jars
and browse in your Axis folder until you find the lib folder. Select all the
jar files in that folder and select open so that they can be added to your
project.
Okay so now you have added the
jar files that's cool. Now you need the wsdl file so that your java client
will know about the web service. Make a small hello world Web service and run
it in a browser and to get its wsdl file do something like this:
Now you can see the wsdl file.
View the source and copy the file anywhere in your hard disk. Now go back to
your eclipse right click on the project folder select import -> File System ->
Click Next. In the From directory give the path of your wsdl file. Since you
have already install the WSDLToJava tool you will see a similar option which
you right click on the wsdl file as you can see in the image below select
Generate:
Okay one last thing that you
also need to check. Right click on the Project folder and select run it will
open a menu as shown below:
Make sure that the Main Class
Text Field is the name of your class which contains the main method.
Okay now you are ready to make a
small application so here is a simple hello world application that you can
test. To test this application select run from the right click menu of the
main method in the package Explorer.
public class myClass {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
Okay until now you have setup
your tools and software so now you are ready to do your main work which is
contacting .net Xml web service. Let's just see a simple web method of our
.net web service written in C#. As expected the method returns simple Hello
World.
[WebMethod]
public string HelloWorld()
{
    return "Hello World";
}
Now lets come to the java code.
// Creates Service Locator object
Service1Locator locator = new Service1Locator();

// Creates the stub
Service1SoapStub myStub = (Service1SoapStub) locator.getService1Soap();
One very important point to note
is that you will start writing the code using the full namespace. It means the
code which you see above is actually this:
org.tempuri.Service1Locator locator = new org.tempuri.Service1Locator();
org.tempuri.Service1SoapStub myStub = (org.tempuri.Service1SoapStub) locator.getService1Soap();
Once you have made the stub
object which is the proxy of the .net web service you can access your "Hello
World" method the same way as you did when you made the proxy in .net.
System.out.println(myStub.HelloWorld()); // This prints Hello World
So, you see that main problem
was the installation and setting up the java client so that it works with .net
service. After that it was pretty much the same. One important thing you must always remember: if you make any changes in the Web Service (adding new web methods or editing old ones), always update the wsdl file as shown in fig 2, or else you won't be able to see the new or changed methods.
The GenBank format. More...
#include <seqan3/io/sequence_file/format_genbank.hpp>
The GenBank format.
genbank is the format used in the GenBank sequence database. See this example record at NCBI for more details about the format.
The genbank format provides the fields seqan3::field::seq and seqan3::field::id. Both fields are required when writing.
There is no truncate_ids option while reading, because the GenBank format has no (FASTA-like) id buffer. Instead, there is the option "complete_header" to indicate whether the whole header is to be read (embl_genbank_complete_header=true) or only the "LOCUS" information should be stored.
Qualities passed to the write function are ignored.
Read from the specified stream and back-insert into the given field buffers.
Write the given fields to the specified stream.
Implements sequence_file_output_format.
The valid file extensions for this format; note that you can modify this value.
When indexing a field in Lucene, you have two index option choices for how the field value is indexed: analyzed and not analyzed. They are both useful and serve different purposes, so make sure you know the differences between them and use each correctly.

ANALYZED and NOT_ANALYZED are enum values defined in Field.Index that specify the index option of a field.
public class Field implements IndexableField {
    ...
    public static enum Index {

        /** Index the tokens produced by running the field's
         * value through an Analyzer. This is useful for
         * common text. */
        ANALYZED,

        /** Index the field's value without using an Analyzer, so it can be searched.
         * As no analyzer is used the value will be stored as a single term. This is
         * useful for unique Ids like product numbers. */
        NOT_ANALYZED,

        ...
    }
    ...
}
Other enum values includes: NO, NOT_ANALYZED_NO_NORMS, ANALYZED_NO_NORMS. In this post we focus on ANALYZED and NOT_ANALYZED.
Notice that this API is only kept for compatibility with the pre-4.0 Lucene API. It has been changed since the release of Lucene 4.0.0; see LUCENE-2308 ("Separately specify a field's type") for more details. It has been removed entirely from Lucene 6.0. You should use the new API as follows:
FieldType type = new FieldType();
type.setTokenized(true);
This sets the field to analyzed. Analyzed is actually the default, so in practice you only need to call setTokenized(false) when you want a not-analyzed field.
For the older API:
Analyzed means the text of the field will be analyzed by the analyzer you provide at indexing time: the text is broken into tokens and terms. This is desirable when the field contains normal text, for example a title or body content.

Besides the main body of a document, there may be metadata associated with the document that should not simply be treated as text, for example the ISBN number of a book, the serial code of a product, an email address, or a ZIP code.
When the field acts like a unique identifier or key for the document, you should use NOT_ANALYZED. The whole field value is indexed as a single, case-sensitive term. Remember that in Lucene only terms are searchable, so indexing as NOT_ANALYZED makes this metadata searchable as an exact value.

Most analyzers in Lucene lowercase all terms, so searches on analyzed fields are case insensitive. In a NOT_ANALYZED field no analyzer is involved: when you search the field, the query text must match the field value exactly, otherwise you will get an empty result set.

When querying, the best Query class to use against a NOT_ANALYZED field is TermQuery, much like selecting a row by its id in SQL; the field acts like a primary key in a relational database.
Term t = new Term("serial_code", "83004102");
Query query = new TermQuery(t);
TopDocs docs = searcher.search(query, 1);
Lucene Basics tutorials
Lucene Indexing
Adding fields and options
CRUD operations in index
Lucene Searching
Highlight and Fragmentation
Appendix
Articles
Crowdsourced Coders Take On Immunology Big Data
Soulskill posted about a year ago | from the still-no-cure-for-cancer-oh-wait dept.
(2)
K. S. Kyosuke (729550) | about a year ago | (#42838153) (1)
Nic (2818063) | about a year ago | (#42838447)
Crowdsource Coding? (2, Insightful)
Anonymous Coward | about a year ago | (#42838695)

Re:Crowdsource Coding? (2)
thoughtlover (83833) | about a year ago | (#42839787)
spot on, its a freelancer site dressed up (1)
decora (1710862) | about a year ago | (#42840863)
as some kind of leet hacker haven. its full of the same horse shit you find all over those places
"need powerpoint conversion ASAP!"
"java lx2e zorbog buzzheavy lightyear lcick layer"
paywalled? (1)
v1 (525388) | about a year ago | (#42838433)
looks to be paywalled, @ $32 for a single article?
Re:paywalled? (3, Informative)
Verloc (119412) | about a year ago | (#42838675)
try this [prnewswire.com]
not much return (4, Insightful)
v1 (525388) | about a year ago | (#42838713). (3, Interesting)
Kozz (7764) | about a year ago | (#42839339)

Re:not much return? think again. (2)
v1 (525388) | about a year ago | (#42839409)
true, but in the short-term, bragging rights and resume' bullet points don't pay the bills.
Re:not much return? think again. (1)
Kozz (7764) | about a year ago | (#42839485)
If you're conquering the challenge for the short-term, you're doing it for all the wrong reasons.
Re:not much return? think again. (1)
v1 (525388) | about a year ago | (#42839613) project and found someone that gave me a way to save at least hundreds of thousands of dollars in my budget, they'd get a lot more than 6k and a handshake. More like 20k and a job offer.
Re:not much return? think again. (1)
easyTree (1042254) | about a year ago | (#42839667)
The whole premise of contests is a scam. Everyone works their ass off for some 'prize'. Only one wins yet the contest hosts get to benefit from all entries.
Re:not much return? think again. (1)
Morpf (2683099) | about a year ago | (#42842079)
That is why I only take part in contests, where my work is not usable in any production environment. Contests for the contest sake. Google CodeJam, Project Euler, ACM ICPC, you name it.
Re:not much return? think again. (1)
Jmc23 (2353706) | about a year ago | (#42851887)
Re:not much return? think again. (0)
Anonymous Coward | about a year ago | (#42840035)
true, but in the short-term, bragging rights and resume' bullet points don't pay the bills.
But $6,000 for two weeks of work does.
Re:not much return? think again. (1)
doesnothingwell (945891) | about a year ago | (#42841621)

Re:not much return? think again. (1)
Morpf (2683099) | about a year ago | (#42842055)

someone has to maintain the code.
Re:not much return? think again. (0)
Anonymous Coward | about a year ago | (#42845341)
> That contest yielded 2,684 hours of development time with an overwhelming result for just 6k USD
Go to Africa, pick a kid you can't even read. I'm sure that he will sell you 10 000 hours of development time for just $1000. There is no way to get cheaper. Oh, does it matter that he can't code? So suddenly we are talking about quality here. Well that will change things. I'm a software developer and I am not afraid of these contests, because it would be pretty difficult to run a software company using just contests.
Re:not much return (0)
Anonymous Coward | about a year ago | (#42839891)
There are plenty of mapping algorithms which can do this (BWA, TopHat2, BFAST, Lastz) - map a sequence back to reference (with gaps and mismatches) and they're extremely fast. Not clear why the authors of the paper didn't try any of these out before using topcoder.
I wonder who maintains the code once the contest is finished, and whether there are any edge cases not present in the test sets used to judge the competition.
Re:not much return (1)
Daniel Dvorkin (106857) | about a year ago | (#42841563)
Re:not much return (1)
eulernet (1132389) | about a year ago | (#42854349)
The source codes of the winners are here: [topcoder.com]
Frankly, $6k is a lot of money for the amount of effort (less than 22 hours of work on average).
Re:not much return (0)
Anonymous Coward | about a year ago | (#42865227)
eh? more for a 2 week contest? with a 970 fold improvement, you would think that the code he was replacing was itself low grade. Assuming he worked 40 hour weeks for 2 weeks he was making $75/hr, not bad. More than a lot of CS grads make out the door.
Open government (1)
robot5x (1035276) | about a year ago | (#42839159) which require a change request, 6 months of paperwork and a cheque for at least $50k to make very minor modifications. As it says in 'Open government', there is now an expectation that any solution involving computers or data has to be hugely expensive and time-consuming.
So, I'm really inspired by this story. It says to me that a bit of openness and thinking outside the box is a Good Thing. I'm submitting a paper soon recommending that we develop a strategy moving towards more open platforms and, yes, even merge our IS and HR thinking to do something like competitions and code-outs to get the community and CS enthusiasts working on real world problems.
This raises the question: why is there so little of this thinking currently in the public sector? Maybe that's a debate for another day!
how lond have you worked there? (2)
decora (1710862) | about a year ago | (#42840899) 'careful decision process involving all the stakeholders'.
The only people who believe in this open stuff are the nutcases like, you know, doctors and scientists. What the fuck do they know?
Re:how lond have you worked there? (1)
Jmc23 (2353706) | about a year ago | (#42851919)
Re:Open government (1)
sorisos (2702365) | about a year ago | (#42841965)
Shitty prize (1)
Alejux (2800513) | about a year ago | (#42842557)
Re:Shitty prize (0)
Anonymous Coward | about a year ago | (#42842937)
Crowdsourcing, for all but the sexiest projects, will not work for pennies forever.
Let's do the math:(fake math, bear with me)
Number of entries: 20
Losing entry time investment: 40 hours (Avg)
Winning entry time investment: 60 hours
Total dev time: 40 * 19 + 60 = 820
If the prize is $6000, expected pay is 6000/820 = $7.32/hour
US minimum wage: $7.25/hour
Note: I did NOT choose the numbers to make the payoff match min wage, it just came up this way.
Where the code is (0)
Anonymous Coward | about a year ago | (#42847843)
Everybody yaps about it, but don't tell where the code is. Here's the link:
Working for free (1)
MotorMachineMercenar (124135) | about a year ago | (#42849199)
Coders work for free? Looks like they've taken a tip from gaming companies, which do QA and product testing by outsourcing it to gamers who do it for free - or even pay for the privilege, as has been seen in various betas requiring payment.
http://beta.slashdot.org/story/181755
Linux Kernel Development - Part 1: Hello Kernel!
Our very first program in any language or framework is usually the famous "Hello World" program. For this introduction to Linux kernel module development we will follow the same idea, but instead of the usual "Hello World" we will write a "Hello Kernel!", and you will understand why in a few moments. Note that this article is only an introduction, so I will not go into a deep explanation of the topic for the moment.
But before we dive into the code, we need a minimal understanding of what a kernel module is and where it runs.
Kernel Module and Kernel Space
A kernel module is a program or piece of code, usually written in C, that can be loaded and unloaded dynamically in the Linux kernel. A module can also be built into the kernel, in which case it cannot be dynamically loaded and unloaded.
Linux kernel modules run in a space called kernel space, where the kernel itself runs and provides its services. That means you cannot use "userland" headers and functions in your code. But don't worry: the kernel provides equivalents for most of the C library functions you normally use. For example, instead of printf() you will use printk().
Your very first Kernel Module
Here is the code of our first kernel module.
#include <linux/kernel.h>
#include <linux/module.h>

/*
 * Init function of our module
 */
static int __init hellokernelmod_init(void)
{
    printk(KERN_INFO "Hello Kernel!\n");
    return 0;
}

/*
 * Exit function of our module.
 */
static void __exit hellokernelmod_exit(void)
{
    printk(KERN_INFO "Hasta la vista, Kernel!\n");
}

MODULE_AUTHOR("Your name here");
MODULE_DESCRIPTION("Simple kernel module to display messages on init and exit.");
MODULE_LICENSE("Dual MIT/GPL"); /* plain "MIT" is not a license string the kernel recognizes */

module_init(hellokernelmod_init);
module_exit(hellokernelmod_exit);
OK, let's see what is going on there.
First, we need to include the necessary headers for our module to work.
#include <linux/kernel.h>
#include <linux/module.h>
In our case we need kernel.h for kernel functions such as printk(), and module.h for the module macros (init, exit, and so on); every module needs this header.
Then, we have 2 functions:
static int __init hellokernelmod_init(void)
{
    printk(KERN_INFO "Hello Kernel!\n");
    return 0;
}
This function executes when we load the module into the kernel. It is called the "init" function of the module, and it must return 0 on success. This is usually where you allocate the memory you need and initialize your module.
The __init attribute tells the kernel that this function is only needed during initialization, so its memory can be freed once initialization is done.
static void __exit hellokernelmod_exit(void)
{
    printk(KERN_INFO "Hasta la vista, Kernel!\n");
}
This function executes when we unload the module from the kernel. It is called the "exit" or "cleanup" function of the module; basically, everything you did in the init function must be undone here, such as freeing memory, closing sockets, and shutting down devices.
The __exit attribute tells the kernel to omit this function when the module is built into the kernel, since a built-in module can never be unloaded and the function would never execute.
MODULE_AUTHOR("Your name here");
MODULE_DESCRIPTION("Simple kernel module to display messages on init and exit.");
MODULE_LICENSE("Dual MIT/GPL");
The module API provides macros that help describe your module: its functionality, author, license, parameters, and so on. That is what these MODULE_* macros do.
You can view this information with
$ modinfo <PATH-TO>/hellokernel.ko
module_init(hellokernelmod_init);
module_exit(hellokernelmod_exit);
Here we register our functions to be executed when the module is loaded and when it is unloaded.
Now we need to compile and run our code. If you try to compile this module with
$ gcc hellokernel.c, you are going to have a bad time.
We need to build the kernel module properly; this Makefile does the trick for us.
obj-m += hellokernel.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
First we declare what we are building. Note that the .o name must match your module's .c file.
The "all" target builds the module for us. The "-C" option, followed by the path to the build directory for $(shell uname -r) (i.e. your kernel version), makes "make" change into that directory and use its top-level Makefile.
The "M=" option tells the kernel build system where our external module source (and output) directory is.
And, finally, the "modules" target says that we want to build kernel modules.
The "clean" target removes all compiled binaries from our directory.
Now a simple $ make all should compile our module properly.
The compilation generates a bunch of files, but we are going to focus on the generated .ko file: that's our kernel module.
To install it, use the
insmod tool. Only superusers can load and unload kernel modules.
$ sudo insmod <path-to>/hellokernel.ko
To see whether the module was loaded successfully, use the
lsmod tool, which lists all kernel modules loaded on your system.
$ lsmod
Now, if you look into the kernel log you should see the glorious message "Hello Kernel!".
You can do this with the
dmesg tool.
$ dmesg
To unload the kernel module, use the rmmod tool.
$ sudo rmmod hellokernel
Again, use the
dmesg tool to see our cleanup message "Hasta la vista, Kernel!".
Congratulations, you just wrote your first kernel module.
In the next articles we will dive deeper into the world of kernel modules and maybe write an LED driver, because there is nothing more exciting than making an LED blink!
Happy coding :)
Very nicely explained Denis
Thanks G_sanket, I hope to post more tutorials in the near future.
https://www.embeddedrelated.com/showarticle/1274.php
Hello Alex,
In the beginning paragraph it is written:
...std::array lives in the array header.
Is it not more correct to say:
std::array is declared in the array header and defined in std namespace?
It's more accurate to say it's defined in the array header, inside the std namespace. I've updated the lesson.
Hi Alex,
I use Microsoft Visual C++ 2008, I include array #include <array> then I compile and I get the error: "fatal error C1083: Cannot open include file: 'array': No such file or directory". How can I fix it?
Upgrade your compiler to one that is C++11 compatible. std::array was introduced in C++11, and Visual C++ 2008 is too old to support it.
Hi Alex!
Can I pass a fixed array into a function as a std::array without decaying it to a pointer?See the code below. Is it valid?
#include <iostream>
#include <array>
void printLength(const std::array<double, 5> &myarray)
{
std::cout << "length: " << myarray.size();
}
int main()
{
myarray = { 9.0, 7.2, 5.4, 3.6, 1.8 };
printLength(myarray);
return 0;
}
Yes, std::array doesn't decay like a fixed array would. But in your sample code, your declaration of myarray doesn't have a type. It should be std::array.
That's by mistake, I forgot to write the type of myarray.
Suppose I write int myarray[] = { 9.0, 7.2, 5.4, 3.6, 1.8 };
and pass this fixed array as a std::array, will the array decay or not?
Is it possible to define a fixed array and pass it as a std::array?
You can't define a fixed array and pass it as a std::array. You either have to pass the fixed array by reference, or create it as a std::array in the first place and then pass the std:array (also by reference).
for this example:
surely having to specify the length of the std::array in the function declaration defeats the purpose of returning its size?
We don't return its size, we just print its size to the console.
That said, you may be wondering why I didn't do this:
First, just to illustrate how the size() function works. Second, this code will work even if the array size is later changed (or templated). It's less error prone, since changing the array size only needs to be done in the function parameter, not in the function code as well.
Hi Alex!
This is the first chapter where I feel I need to raise a question (really comprehensive tutorial, thank you).
When defining a function
void printLength(const std::array<double, 5> &myarray)
is it possible to generalize it for an arbitrary array size, not only 5?
Yes, you can do this via partial template specialization:
I talk about this more in chapter 13, in the lesson on partial template specialization.
I have always seen your rules as something that is true independant of the context, but im not sure about this:
'Rule: Always pass std::array by (const) reference'
Is this not true if a function is supposed to change the content of the array?
Or is it still possible to change the value(s) of the array passed to the function even if it is const?
Maybe because std::array works with pointers and const does not let you change the address of the pointer but the variable the pointer is pointing to?
Please tell me if I am missing something.
By the way, these tutorials are great. Keep up the good work!
Regards Mario
I found my mistake.... I didn't realise that const was in brackets. Now the rule makes sense, sorry.
That's why I put const in parenthesis (to indicate it's optional, depending on whether you want it to be const or not). I'll update the text to be more clear about this.
Hi Alex, thanks for the great tutorial!
Using a global symbolic constant (which you introduced in Lesson 4.2) as the size of a std::array can cause a compile error.
error : the value of ‘Constants::GSC’ is not usable in a constant expression
I know it's kind of an overkill, but to be of self-consistency, do you have any suggestion for a workaround? Thanks!
constants.cpp
constants.h
main.cpp
It doesn't work for fixed array sizes either. Took me a bit of digging to figure out why -- because the definition of the size (variable GSC) is in a different compilation unit than the array, the compiler can't determine the array size at compilation time. This value is effectively treated as a runtime constant, rather than a compile-time constant.
The array header actually includes the algorithm header, so you don't need to include it separately.
You should not rely on headers to include other headers you need. Best practice is to always directly include any header that you require.
I am having confusion about the size of array in std::array
Why is the final result here 5? Is myarray.size() different from sizeof(myarray)? And the last question: when we talk about the size of an array, do we mean the number of elements, or the total size (number of elements * size of the type)?
The final result is 5 because size() returns the length of the array, which is 5. size() is different than sizeof() -- in this case, the sizeof(myarray) is 40 (5 elements * 8 bytes per double).
When we talk about the size of something, we more often mean the sizeof the object (but not always). That's why I try to use the word length when referring to the length of the array.
In my opinion, the C++ designers made a mistake by naming the function size() instead of length().
Hi Alex,
I was wondering, since you can't omit the array size of an std::array, doesn't it add a bit more difficulty for re-using functions with array parameters?
You can pass in any array of type int into the doSomething function, but how can you make parameters such that an std::array of type int can be passed in, regardless of its length?
You can't directly. There's a workaround by using a template parameter for the array size, but that's an advanced topic that we don't cover until much later.
If you want to do this, you may be better off using a std::vector, which doesn't require you to define the size at compile time.
I am having confusion about using 'const references' when passing an array.
The reason you gave was to protect compiler from making copy of it, but what will happen if compiler makes the copy?
Just uses extra time and memory. Most of the time this just bloats the memory usage and running speed of your application, but occasionally if the array is large enough it could cause your program to crash (if you run out of stack memory).
I understand use of reference but why we use 'const'?
Const references allow us to do three things:
1) Pass const arguments (you can't pass a const argument to a non-const parameter)
2) Pass literals (you can't pass a literal to a non-const reference)
3) Ensure the function doesn't change the value of the argument
Hello Alex
When using the previous fixed arrays, it's possible to print out the address of the (1st element of the) array by using:
However, for std::array, it doesn't work anymore, you need to explicitly use the address-of operator:
Is this because of it not decaying into a pointer? I then used:
and this is what came out (codeblocks archlinux x64):
I expect a green face anytime. I can see "array", I guess 3EE is the length (size -- deduced through changes), the rest are very confusing. What are those and what type is an array created with std::array?
Yes, you need the & with the std::array because it's not decaying into a pointer.
I can't tell you what all those symbols printed by typeid.name() mean. Each compiler is free to choose its own representations, so there's no consistency.
It is a bit more difficult to declare multi-dimensional arrays using std::array than with arrays built into the C++ language; however, I've provided examples below for those curious to know how to do so.
Examples of two-dimensional std::array variables:
This prints:
16
16
An example of a three-dimensional std::array variable:
This prints:
19
19
19
Thanks man, it helped a lot.
Hi Alex. Which algorithm does the std::sort method implement internally? Also, do we have an option to choose the algorithm used to sort the array?
It's up to the compiler implementer to pick a sorting routine. Most compilers use quicksort or a variation of quicksort. I'm not aware of a way to explicitly pick a sorting method.
on the second example in size and sorting you need to #include <array>
Thanks, fixed.
Hi Alex,
I added the following header file under the existing 3 lines, then problem was fixed.
But I could not explain why this can solve the problem. I am using the same Microsoft Visual C++ 2015. Thanks!
Aah, yes. You used std::string but you did not include the string header. If you use std::string in a file, you need to include the string header, otherwise C++ won't know what a string is.
Hello Alex,
I am beginner for C++. Thank you for this great tutorial. I changed the example code in this lesson to sort strings as follows:
#include <iostream>
#include <array>
#include <algorithm> // for std::sort
int main()
{
std::array<std::string, 5> myarray{ "Blue", "Green", "White", "Yellow", "Black" };
//std::sort(myarray.begin(), myarray.end()); // sort the array forwards
std::sort(myarray.rbegin(), myarray.rend()); // sort the array backwards
for (const auto &element : myarray)
std::cout << element << ' ';
std::cout << std::endl;
return 0;
}
However, during compile there is an error
"no operator "<<" matches these operands Std_Array"
for line 16 std::cout << element << ' ';
When I change "element" to "element[1]", set a breakpoint at this line and run the code, I can see "element" refers to a string. It is one of the colors I input. Can you help me with my mistake?
Thanks Again.
Not sure, I pasted your code into Visual Studio 2015 and it compiled and ran fine. What compiler are you using? Is it C++11 compatible?
Alex,
On the initializer lists shown below you make comments that the first two are okay but the last one is not since it has to many elements. You also comment that the seconds one has elements 3 and 4 are set to zero.
This is confusing unless you referring to some other code. Or you just want five elements in all the lists.
How do you know this from this code as its shown? How many elements are you allowed to put in the lnitializer list ?
myarray = { 0, 1, 2, 3, 4 }; // okay
myarray = { 9, 8, 7 }; // okay, elements 3 and 4 are set to zero!
myarray = { 0, 1, 2, 3, 4, 5 }; // not allowed, too many elements in initializer list!
In the example, I was working under the assumption we were using the same myarray variable declared above, which had a length of 5. I've updated the lesson to make this more explicit.
You can have as many elements as you want in an initializer list. However, the compiler will complain if the initializer list contains more elements than the std::array you are trying to assign it to.
Alex,
why line 8 error?
The subscript already does an implicit dereference, so by using both * and [], you're trying to do one too many dereferences.
That said, std::array also won't let you deference it using * (only []).
I tried deleting the *, but it still gives an error.
Sorry, I misled you. The proper way to do what you want is as follows:
(*arrayPtr)[0] = 5;
If you were using a standard fixed array, what I said would have been true. But std::array (and other classes) work a bit differently, since they won't decay into a pointer.
Hi Alex. First off, thanks for answering questions posted - they've really helped me develop. Another one for you though:
if a function has a std::array passed to it, i.e.
why can't I pass a standard array such as:
does an array defined as
not convert to
?
Apparently not. 🙂 It looks like std::array doesn't know how to convert anything to it.
Is there an error in the example of how to pass an array to a function? It says:
Shouldn't it be:
Yep. Fixed. Thanks!
Thank you so much for building and maintaining the site.
Also wanted to chime in that the use of "array" as a variable name is really confusing. How about "sampleArray" or something?
Updated to myarray for ease of reading.
Loving the information and tutorials Alex, thanks alot
PS: I'm not sure if this is a typo its just I'v never hear of build-in component; in the summary you've stated: " build-in fixed arrays." - Thanks Again 🙂
Removed the ampersand, still works. Suspect we are copying now instead of changing data at source address (&).
Correct, the argument would now be passed by value, making a copy of my_array. It'll still work, but it's slower because making a copy of a std::array isn't fast.
Hi Alex I am learning so much from your efforts! Only thing - the tutorial didn't give an example of passing a container into the function (std::array) though it was mentioned. So I took the onus on myself to prove : ) that the std::array doesn't decay when passed! My question:(std::array<int,6>&my_array). What does the &my_array do? Pass by reference to the original storage location of my_array?
Passing std::array into a function works just like passing a normal non-pointer variable, outside of the fact that you should pass it by reference instead of by value.
In the context of a function parameter, the ampersand (&) means pass by reference. So in your case, the my_array parameter in fx_print_element_num() is a reference to the argument passed in (the my_array variable in main).
When passing an std::array to a function, does it copy all the elements thus being a downside over fixed arrays as more memory will be used?
Many thanks!
Depends on how you pass it. If you pass it by value, then yes, it will copy all of the elements. If you pass it by address or reference, then no. Generally you'll want to pass it by reference to avoid the copying.
std::array is a great replacement for build-in fixed arrays. It’s efficient, in that it doesn’t use any more memory than built-in fixed arrays..
How?
How does std::array not use more memory?
Why would you expect it to?
Maybe because it is a user-defined class? I don't know, but it must have its member functions and such, right? From school I know that class methods are allocated a single space, while objects are allocated memory for their separate member variables. The sizeof operator returns 20 for both the fixed array and std::array in the above snippet. Still, the member functions should take up additional space, correct?
No, functions are just code, they don't take additional space on a per-object basis.
Functions do increase the size of your executable though, which means it will require more memory to load.
Perhaps because std::array has information about its size? (At least, that led me to think it would take up more space.) But according to the C++ reference, the size "is a template parameter, fixed on compile time", and not actually stored in the array object. Still, wouldn't this mean that each std::array uses more memory than a regular array, although in a different location than the array itself?
Nope. std::array (along with many other classes) is implemented as a template classes. When you define a std::array, the compiler custom builds you a new std::array class with data of type int and hardcoded size of 10. That 10 is baked into the class definition, so it doesn't take any extra room.
We cover template classes in more detail in chapter... 14.
About arrays lesson, there is no info about arrays destruction.
Arrays auto delete themselves when they go out of scope (as vectors do)?, or we must take care of use "delete[] our_array;" when we're dealing with them?.
Thanks for your wondeful guide Alex.
Much like you don't have to clean up after a fixed array, there's no need to clean up a std::array either. I've added a mention of that into the lesson.
Hi Alex, I've been following this tutorial and find it extremely helpful. To be honest it's better than any text or videos I've ever read or seen. Definitely worth recommendation to my friends.
Do you have any tutorials on other programming languages? I really like the way you explain all those concepts and like to learn more once finish with c++.
Unfortunately, no. I've been focusing my limited time on improving/expanding these lessons instead of branching out into new languages.
Better to do one thing well than two things mediocrely.
"Better to do one thing well than two things mediocrely." *
*Ah "multi-tasking". Being good at multi-tasking is like saying "I'm good at doing many things poorly but at the same time".
So glad to see you are updating this site again Alex! Great tutorial as ever!
I have a minor suggestion: the example would be easier to understand if you name the first array as "array1" instead of just "array"
The name "array" is purple in colour and looks like a part of the declaration syntax to a beginner.
Thanks for pointing this out.
"array" being highlighted as a keyword is actually a bug in the syntax highlighter (as it's not a keyword in C++). I've modified the list of keywords in the plugin to remove "array" from the list so it shouldn't highlight any more.
http://www.learncpp.com/cpp-tutorial/6-15-an-introduction-to-stdarray/comment-page-1/
Red Hat Bugzilla – Bug 32499
umount(2) man-page incorrectly says umount(block-dev) works (it doesn't)
Last modified: 2015-01-07 18:44:36 EST
I found this while porting the VSX-PCTS to 7.1 with the 2.4 kernel:
#include <sys/mount.h>
{ blah, blah}
/* the mount(2) call works fine */
ret = mount("/dev/sda9", "/tmp/mnt", "ext2", MS_MGC_VAL, (void *) &stuph);
/* the following umount(2) call fails */
ret = umount("/dev/sda9");
/* the following umount(2) call succeeds */
ret = umount("/tmp/mnt");
... tiny test-case included in case it helps.
Dunno if it's i386-only or not... mostly a nit since the user-space
commands work, but the man-page says differently:
MOUNT(2) Linux Programmer's Manual MOUNT(2)
NAME
mount, umount - mount and unmount filesystems.
SYNOPSIS
#include <sys/mount.h>
int mount(const char * specialfile, const char * dir,
const char * filesystemtype, unsigned long rwflag, const
void * data);
int umount(const char *specialfile);
int umount(const char *dir);
Created attachment 13203 [details]
brief test-case source
Not a bug. What do you want to do when a single device has been
mounted in multiple places (legal under some circumstances, will
be more so in the future)?
I understand the logic of the code and the reasoning behind specifying the
*directory* instead of the block-device. However, if you look at the man-page
versus what the code enforces, you will see a discrepancy. I have NO issue with
umount requiring the directory-pathname as the argument. I do have a concern
that the man-page says you can use *either*. Thus, there IS a bug, albeit a
low-weight one that I would like to see fixed at some point, but not necessary
for Florence.
I am re-opening this and changing the defect heading to reflect a man-page
error. I also think it's a good idea to write a paragraph in the man-page that
says specifying the block-device form of the call is deprecated (because of
multiple mounts of the same device).
$ rpm -qf /usr/share/man/man2/umount.2.gz
man-pages-1.35-5
---> re-assigning
Fixed in man-pages-1.35-6
https://bugzilla.redhat.com/show_bug.cgi?id=32499
Vector Math for 3D Programming [ID:766] (17/35)
in series: Developing emol!
video tutorial by Erik Thompson, added 06/08
In this episode I talk about the basics of vector math as it applies to 3D programming. I'll be using vector math to properly position the cylinders as bonds in the next episode.
To learn more about vector math for 3D programming check out this in depth tutorial:
The code for this episode is the 9th revision uploaded to launchpad which can be downloaded easily using bazaar or from the launchpad website:
- python
- beginners
- tutorials
- programming
- programs
- http
- code
- basics
- graphics
- episodes
- files
- software
- learn
- development
- open-source
- newbies
- text
- data
- demonstration
- databases
- GUI
- objects
- wxpython
- class
- document
- websites
- 3ds
- control
- 3dtool
- blender
- dialogs
- maths
- talks
- widgets
- patterns
- html
- modelling
- design
- distributions
- sql
- keys
- walkthrough
- libraries
- dictionary
- gnu
- diagram
- sqlite
- planning
- pdb
- launchpad
Got any questions?
Get answers in the ShowMeDo Learners Google Group.
Video statistics:
- Video's rank shown in the most popular listing
- Video plays: 37
Thank you for the video. I am a CAD operator and wish I saw this video years ago when I took Dynamics. I am trying to write a program for a 2D top view of a box (6, 4, 5) and was just getting a little overwhelmed. Thank you again.
Damn good video. A month of high school math revised in 5 min.
Great video ;). I like your diagram about what happens when you get no feedback. And your image kind of looks like Ron Weasley's father from the Harry Potter series.
That is really off topic though, The video was very easy to follow, I only have one suggestion: Talk a little louder and don't say "uhh", "umm" as much. It will make for a better presentation ;).
You should do C++ too :D.
Video is good.
very helpful
Nice tutorial video, really enjoyed it.
Good, keep up your good work.
This was the clearest presentation on vector mathematics for me yet. I am looking forward to the next (hopefully this is a series).
Very helpfull, thanks!
Hi, nice video. You should probably instantiate the a, b vectors that you used in your tutorial as vectors themselves and also overload the operators in the vector class, this will give you the following flexibility:
a = Vector(3,1,0)
b = Vector(1,5,0)
c = a + b
c = a - b
c = a * b
c = a / b
c = a.mag()
c = a.norm()
and so on...
you can overload the operators with the following syntax:
def __add__(self, input):
    return [self.x + input.x, self.y + input.y, ...]
def __sub__(self, input):
    ...
def __mul__(self, input):
    ...
def __div__(self, input):
    ...
This way, you don't need to create your v as a vector class.
Ronald
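Fleshing out Ronald's suggestion, a minimal sketch of such a Vector class (the method bodies and names are my own, illustrative):

```python
import math

class Vector:
    """A simple 3D vector with overloaded arithmetic operators."""

    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def __add__(self, other):
        return Vector(self.x + other.x, self.y + other.y, self.z + other.z)

    def __sub__(self, other):
        return Vector(self.x - other.x, self.y - other.y, self.z - other.z)

    def dot(self, other):
        # Dot product of two vectors
        return self.x * other.x + self.y * other.y + self.z * other.z

    def mag(self):
        # Magnitude (length) of the vector
        return math.sqrt(self.dot(self))

    def norm(self):
        # Unit vector in the same direction
        m = self.mag()
        return Vector(self.x / m, self.y / m, self.z / m)

a = Vector(3, 1, 0)
b = Vector(1, 5, 0)
c = a + b
print(c.x, c.y, c.z)  # 4 6 0
```

With the operators overloaded, vector arithmetic reads the same way it is written in the math.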
Review of Vector Math for 3D Programming
I stumbled across this using a search online. I really enjoyed it. It would be wonderful if you have the chance to cover the different topics of math pertaining to 3d programming.
Video published, thanks for contributing to ShowMeDo
http://showmedo.com/videotutorials/video%3Fname%3D1510160
I’ve used the snippet of code at the end of this article twice in the last week to clear up some misunderstandings about strings in .NET.
System.String is a reference type in .NET. Don’t let anyone mislead you into thinking otherwise. You’ll hear people saying strings “act like value types”, but this doesn’t mean the runtime makes a copy of the string during assignment operations, and it doesn’t make a copy of a string when passing the string as a parameter.
The equality operator and String.Equals method compare the values of two string objects, and you won’t find a property or method on System.String with the ability to modify the contents of a string – only return a new string object. Because of these two behaviors we sometimes say strings have value type semantics (they feel like value types), but strings are reference types in the runtime. An assignment operation does not copy a string, but assigns a reference. The value given to a string parameter in a method call is the reference, not a copy of the string’s value.
using System;
class Class1
{
[STAThread]
static void Main(string[] args)
{
string s1 = "Hello";
string s2 = s1;
// this test will compare the values of the strings
// result will be TRUE since both s1 and s2
// refer to a string with the value "Hello"
bool result = String.Equals(s1, s2);
Console.WriteLine("String.Equals(s1, s2) = {0}", result);
// next we will check identity
// ReferenceEquals will return true if both parameters
// point to the same object
// result will be TRUE - both s1 and s2 reference
// the same object.
// strings are reference types
// there is no copy made on assignment (s2 = s1)
result = Object.ReferenceEquals(s1, s2);
Console.WriteLine("Object.ReferenceEquals(s1, s2) = {0}", result);
// now we will make a copy of the string
s2 = String.Copy(s1);
// compare string values again
// result will be TRUE - both s1 and s2
// refer to strings with the value of "Hello"
result = String.Equals(s1, s2);
Console.WriteLine("String.Equals(s1, s2) = {0}", result);
// check identity again
// result will be FALSE
// s1 and s2 point to different object instances because we forced a copy
result = Object.ReferenceEquals(s1, s2);
Console.WriteLine("Object.ReferenceEquals(s1, s2) = {0}", result);
}
}
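The same value-versus-identity distinction exists outside .NET. As a point of comparison (not part of the original article), here is the equivalent experiment in Python, where `==` compares values and `is` compares object identity:

```python
s1 = "Hello"
s2 = s1                       # assignment copies the reference, not the value
print(s1 == s2, s1 is s2)     # True True

s2 = "".join(["He", "llo"])   # force construction of a distinct string object
print(s1 == s2, s1 is s2)     # True False
```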
|
http://odetocode.com/Blogs/scott/archive/2005/07/21/1961.aspx
|
Odoo Help
How to make Account Receivable and Account Payable fields auto-increment
Hi,
i want to make the Account Receivable and Account Payable fields auto-increment. For example, in the accounting chart the customer account is 411100:
for customer 1 with the name c1, I want to automatically create an Account Receivable 411101 c1
for customer 2 with the name c2, I want to automatically create an Account Receivable 411102 c2
thanks
You will end up with one account per customer; why? You could use an account.analytic.account by putting an _inherits on that model instead, so as not to mess up the account chart, like it is done for project.project. Analytic accounts are more suitable for this, just my opinion; if you have requirements for that then move on. An account.journal could also be used for that.
+1 What anis wants is contrary to collective accounts for the purpose of a regular accounts payable/receivable management.
Hey friend, try to create a function which, for each customer, will do this:
1. add 411101 + customer.id
2. concatenate the first result with the name of the customer
Which means a function like this:
def get_inputs(self, cr, uid, ids, field_id, context=None):
    obj = self.pool.get('table_of_customers')
    id_var = self.read(cr, uid, ids, ['id', 'name_customer'])
    obj_ids = obj.search(cr, uid, [('name_customer', '=', field_id)])
and then you will add (411101 + id), put it in a variable, and concatenate it as I told you.
To call your function you can call an on_change function like this:
def on_change_customer_id(self, cr, uid, ids, field_id, context=None):
    res = {'value': {'customer_ids': self.get_inputs(cr, uid, ids, field_id, context=context)}}
    return res
Dont forget the xml:
<field name="field_id" on_change="on_change_customer_id(field_id)">
Regards.
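The numbering scheme itself is plain string work. As a hedged, framework-free sketch (the function name and signature are illustrative, not Odoo API):

```python
def receivable_code(base_code: int, customer_id: int, customer_name: str) -> str:
    """Derive a per-customer receivable account code from a base code,
    e.g. base 411100 and customer id 1 -> "411101 c1"."""
    return f"{base_code + customer_id} {customer_name}"

print(receivable_code(411100, 1, "c1"))  # 411101 c1
print(receivable_code(411100, 2, "c2"))  # 411102 c2
```

In an actual Odoo function field or onchange handler, this would be called with the customer's database id and name before creating the account record.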
|
https://www.odoo.com/forum/help-1/question/how-to-make-account-receivable-and-account-payable-fields-auto-incriment-90798
|
URL of experiment:
Description of the problem:
I have a color working memory experiment that works just fine in PsychoPy. To code it up, I have an initExperiment routine at the beginning, in which I added code to specify conditions. For that code to work, I had to import a few things under "Begin Experiment":
import random
import math
import numpy as np
These are necessary for my condition set up that involves a lot of array manipulations. Although the script runs well in PsychoPy, it does not run on Pavlovia. Instead, it gets stuck on initiating experiment…
In the HTML console, the error message says
SyntaxError: import declarations may only appear at top level of a module
The lines that cause the SyntaxError are the import lines. If I eliminate them, PsychoPy simply will not run. It seems that those lines need to go to the beginning of the experiment script, but if I manually edit this, I lose the Builder's JS conversion function.
Any advice on how to write code (imports) that runs in PsychoPy but doesn't break Pavlovia?
|
https://discourse.psychopy.org/t/import-functions-create-problems-for-pavlovia/11755
|
Member Since 4 Years Ago
Testing That An Email Can Be Verified
Oops. I took a moment and zoomed out. I was really overcomplicating something that was actually very simple! :) I got it to do what I want now.
Replied to Testing That An Email Can Be Verified
I'm sure you're probably right. It may not even be possible to do what I was hoping to. It's late here and I'm tired :)
As to what updates the email, it's simply visiting the URL:
Route::get('/verify-email/{id}/{hash}', [VerifyEmailController::class, '__invoke'])
    ->middleware(['auth', 'signed', 'throttle:6,1'])
    ->name('verification.verify');
Replied to Testing That An Email Can Be Verified
Hey, thanks for trying to help. My app has the simple ability to allow the user to change their email address. They request the change, an email with a verification link is dispatched to the new address, and once it's clicked then the new email is inserted into the place of the old one.
Does that help explain the situation? I basically want to replicate the behaviour of a user clicking the link in the verification email to test that their new email is inserted correctly.
Thanks again!
Started a new Conversation Testing That An Email Can Be Verified
I wish to have a test that checks whether or not an email can be verified successfully.
The issue I have is getting my test to visit the link created by
verificationUrl() in the
VerifyEmail notification. Is there a way I can easily expose this for testing purposes?
This is one way of doing it:
But I was wondering if anyone knows a better way?
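For background on what such a test has to reproduce: the verification link is a signed URL, i.e. the route plus an HMAC of it computed with the application key. A simplified Python sketch of that idea (not Laravel's exact format; the key and URL are illustrative):

```python
import hashlib
import hmac

APP_KEY = b"illustrative-app-key"  # stand-in for the framework's secret key

def sign(url: str) -> str:
    # append an HMAC-SHA256 signature of the URL
    sig = hmac.new(APP_KEY, url.encode(), hashlib.sha256).hexdigest()
    return f"{url}?signature={sig}"

def has_valid_signature(signed_url: str) -> bool:
    url, _, sig = signed_url.partition("?signature=")
    expected = hmac.new(APP_KEY, url.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

link = sign("https://example.test/verify-email/1/abc")
print(has_valid_signature(link))        # True
print(has_valid_signature(link + "0"))  # False
```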
Started a new Conversation Send Email Verification To A New Email Address
Hi all, I'm just adding the ability for a user to change their email address in my app, but I'd like to get them to verify the new address before accepting it on their account.
Laravel has the very helpful
$user->sendEmailVerificationNotification(); method, but of course it only sends to
$user's current email address (and not the new proposed one).
Laravel tends to think of things before I do, so I was wondering if there was any way to send this verification email to an email address selected by the user (instead of
$user->email). Looking through the code, I can't find a way, but you never know. I thought I'd check before building my own.
Thanks!
Replied to Laravel Blade Components... Best Practice For Props?
@martinbean That was my fault. I missed off the
$attributes bag. I see now that props allow you to easily place HTML attributes wherever you'd like around a more complex component. And also make things a bit more readable if you want to use default values.
I still don't understand why
disabled gets special treatment in the Laravel Breeze component, though?
Started a new Conversation Laravel Blade Components... Best Practice For Props?
There's a great Livewire tutorial on Laracasts by Kevin McKee where he gives an example of using Laravel components for form elements. His example looks like this:
<x-text-input wire:
And the actual component file:
@props([
    'type' => "text",
    'label' => "",
    'required' => false,
    'placeholder' => ""
])
<div class="{!! $attributes !!}">
    <label for="email">{{ $label }}</label>
    <input id="email" type="{{ $type }}" required="{{ $required }}" placeholder="{{ $placeholder }}">
etc.
Whereas Laravel Breeze follows the same pattern, but implements it far more simply:
<x-input id="email" class="block mt-1 w-full" type="email" name="email" :value="old('email')" required autofocus />
And the actual component file:
@props(['disabled' => false])
<input {{ $disabled ? 'disabled' : '' }} {!! $attributes->merge(['class' => 'rounded-md shadow-sm border-gray-300']) !!}>
What's the reason for Kevin McKee's approach, when you could simply ignore
@props and they would pass through to the rendered HTML anyway (as with Laravel Breeze's approach)?
Edit: Ignore the above question. It totally makes sense now. Still don't understand the following, though:
And why does Laravel Breeze use disabled as a
@prop when it could easily be passed through in the same way
required and
autofocus are?
Replied to PHPStorm + Blade Components
This does not solve the question. At the moment PHPStorm's Laravel support doesn't extend to BladeX components yet.
This does, though:
Replied to PHPStorm + Blade Components
Yes! Just installed it. Does everything I wanted. Perfect!
Started a new Conversation Laravel Cashier: AsStripeSubscription() Not Working On Staging Server
I have a simple method on my User (actually "Customer") model to return a user's subscription renewal date:
public function subscriptionRenewalDate() : string
{
    $subscription = $this->subscriptions()->active()->first()->asStripeSubscription();
    return Carbon::createFromTimeStamp($subscription->current_period_end)->format('F jS, Y');
}
I call this method on the authenticated user from a blade template (
{{ auth()->user()->subscriptionRenewalDate() }}) and it works locally, but as soon as I upload it to the remote staging server it fails with the error:
Cannot declare class App\Models\Customer, because the name is already in use
It points to this line in the Customer model:
class Customer extends Authenticatable {
What's weird is that my unit tests all pass locally, but the ones relating to this blade template fail when Travis CI runs them. It's not database related because the tests use
RefreshDatabase.
All I know is that the error is caused by
asStripeSubscription(). If I remove that from the method, the blade template loads fine.
What's going on?
Replied to Tests Are Suddenly Failing
I could kiss you! I forgot there was another STRIPE_KEY in the PHPUNIT.XML. Everything is working fine again. Thank you!
Started a new Conversation Tests Are Suddenly Failing
I've been developing a project for a few weeks and my 28 tests have been passing consistently. Today I added some things in preparation for deployment to staging, and now suddenly my tests are no longer working. Even really basic ones started to fail.
I forced a clearing of my cache through the artisan command (cache:clear, view:clear, config:clear, route:clear) and some of the previously not working tests started to work again.
What's incredibly strange is if I manually enter the stages from within the tests in Tinker... they work fine. They work absolutely as expected. What's more, when I use the app everything is working as expected. The only place things aren't working is in the tests... and I haven't changed the tests at all.
I feel everything I've done has been very innocuous: add the following package to composer:
vemcogroup/laravel-sparkpost-driver, update my Stripe keys to a different account's, and add an .ebextensions folder for deployment to AWS.
Why would my tests suddenly become temperamental? I'm pulling my hair out!
Commented on A Unit Test Is Not Locked To A Single Class
Great stuff. It's so hard to consider going back to not doing TDD now. It feels so incredibly risky. I love having more confidence in my ability to refactor.
Replied to Secure Against Laravel 8.4.2 Debug Mode - Remote Code Execution
Using Laravel 6 isn't the issue per se, as it has security fixes until September 6th, 2022, but you will need to update Laravel to the latest version (use
composer update). I believe this will get patches (6.x.x).
That said I can't see a list of the Laravel 6 minor versions anywhere, so it's difficult to know what the latest 6.x you should be using is.
Replied to Where To Learn? ... Tried Laracasts!
Laracasts is a phenomenal resource. It's my go to, and I don't really understand what you mean about it being "out of date"? The differences between 6 and 7/8 are largely minor (the whole Jetstream debacle aside), and you can quickly watch the "What's New?" series to see what's changed.
The reason they haven't done a whole new Laravel from Scratch series is simply because it's largely not needed. I'm currently building a Laravel 8 app (I jumped from 5 to 8 -- that was painful at first), and the tutorials are still very much working exactly as expected. In fact I've yet to encounter anything out of date (aside from the namespace changes which are listed underneath every video as a reminder to everyone).
I guess everyone likes different teaching styles, but for my money (and I'm a subscriber, so I mean that literally) Jeffrey Way is a superb teacher. The difference between a good teacher and a great teacher is the student leaves with the confidence to solve problems by themselves. Way is quite simply one of the best teachers I've ever had in my life. Possibly the best.
But again, each to their own.
Replied to Extending CRUD Methods
Ha, I've never seen someone use a Request like that before. Interesting :) I guess it shows the flexibility to solve problems in ways that make sense to you.
For my question, it's less about multiple model writes, and just doing a bunch of heavy lifting before I can persist something to the database. Things that will happen every time. It feels like it must be a common situation and I was wondering if a) there were any best practices, or b) how people approached situations like this.
Of course, it's very possible I'm over-engineering this, but it's a problem I've encountered before and never really felt I'd found a good solution to. Be interesting to see what others say.
Started a new Conversation Extending CRUD Methods
I often find myself in projects where I have additional business logic that I'd like to execute before persisting a model to the database. In the past I've approached this in a number of ways, but the one that seems to make the most sense isn't available to me... overwriting the
create(),
update(), etc. methods.
Things I've tried include:
createPost() (so long, polymorphism!).
I've done research into Events. Is this the best place to put logic like this? They seem better suited to "side effects" (eg. send the user an email upon registering, etc) than putting heavier logic in place.
My ideal solution would be just to be able to call
$model->create() and the model takes care of all the steps, and I don't have to know how. But there's no mention of the best practices for where to put this sort of logic.
I wish to generate a unique access code for a third party service, which is then saved on the user's account for them to use.
Ideally I'd just want
$user->accessCode->create($attributes).
However, there are a number of steps I need to accomplish first:
There's actually several other steps, too (like checking to see that they're entitled to have an access code, etc).
I've love to break many of these steps into single responsibility private methods within the model and just call
$user->accessCode->create($attributes). Then it can go about calling the methods (eg.
generateAccessCode(),
getAccessToken(),
registerAccessCode(), etc.) and taking responsibility for itself.
After all I don't need to know the steps involved, it can just go ahead and do the steps required and report back when it's done.
What is the "best practice" way to approach these situations? Is using something like Event Closures the best way to go?
Thanks.
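One way to read the "model takes care of all the steps" idea is a single entry point that delegates to single-responsibility private helpers. A language-agnostic sketch in Python (every name and helper body here is an illustrative stub, not an existing API):

```python
class AccessCode:
    """One public create() entry point; the steps live in private helpers."""

    def __init__(self, user, code):
        self.user = user
        self.code = code

    @classmethod
    def create(cls, user):
        cls._check_entitlement(user)            # is the user allowed a code?
        code = cls._generate_access_code(user)  # build the unique code
        cls._register_access_code(code)         # e.g. call the third-party service
        return cls(user, code)                  # persist / return the record

    @staticmethod
    def _check_entitlement(user):
        if not user.get("active"):
            raise PermissionError("user not entitled to an access code")

    @staticmethod
    def _generate_access_code(user):
        return f"AC-{user['id']:06d}"

    @staticmethod
    def _register_access_code(code):
        pass  # stub: would talk to the external service here

ac = AccessCode.create({"id": 7, "active": True})
print(ac.code)  # AC-000007
```

Callers never see the individual steps; whether that is preferable to events or a dedicated service class is exactly the judgment call being asked about.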
Commented on Limit Access To Authorized Users
Just one additional thing, which has thrown me in the past, you can also establish authorisation rules in Requests. I presume the Policy method is the one that's recommended (it's more flexible and can be used elsewhere in your project), but I always feel uneasy adding:
public function authorize() { return true; }
Commented on Limit Access To Authorized Users
JW is the greatest Laravel instructor out there. Just have to say that once in a while. He makes everything so simple to understand and is the reason I subscribe to Laracasts. Thanks @jeffreyway
Commented on Handle Incoming Webhooks
Found it:
It can be installed globally using composer:
composer global require beyondcode/expose
Commented on Handle Incoming Webhooks
Where do you get "expose"? Can you get it over brew? It's a difficult thing to Google! It doesn't seem to appear anywhere:
Replied to Running Cashier's Unit Tests
Yep, I understand that now. What I don't understand is why I have all these files in my
vendor/laravel/cashier/ folder that really shouldn't be there, though. So weird!
I have
laravel/cashier v12.6.3. I wonder why the
.gitattributes was ignored. I guess it's harmless. Just odd.
Replied to Running Cashier's Unit Tests
@tippin Ah. Weird. My
/vendor/laravel/cashier/ folder has a
tests/ folder, filled with a bevy of tests. And the Laravel documentation says:
When testing, remember that Cashier itself already has a great test suite
I think both of those things threw me into thinking I was supposed to be using the supplied tests.
Thanks for clarifying. I can write my own integration tests for my app. I just didn't want to double up with if I didn't have to.
Thanks!
Edit: In fact, looking at the folder now, it seems the .gitattributes have been completely ignored! I have a
.github/ folder and everything else that was supposed to be blocked (eg.
.gitignore). Weird!
Edit edit: I'm using the package mentioned in the Laravel documentation:
composer require laravel/cashier. I assume this is why? But actually I'm not sure. Really bizarre.
Started a new Conversation Running Cashier's Unit Tests
I understand that Laravel Cashier comes with a comprehensive suite of tests. I feel very dumb, but how am I supposed to run them?
I've tried
php artisan test vendor/laravel/cashier/tests/ but I got the error message:
Fatal error: Uncaught Error: Class 'Laravel\Cashier\Tests\Feature\FeatureTestCase' not found in /PROJECT/vendor/phpunit/phpunit/src/TextUI/Command.php on line 97
I tried moving the FeatureTestCase from the vendor directory to my projects Tests/Feature directory, but obviously I got a similar error due to the namespace being
Laravel\Cashier\Tests\Feature (which makes me feel I shouldn't be moving it).
What am I missing?
Commented on Installation And Usage
"Ends at" is null in the database (22:09 in the video). Why?
Commented on Email Verification
It would be amazing to explain the differences/similarities between Breeze, Jetstream and Fortify. I started my app with Breeze because I thought that was the most likely to be similar to Laravel 5 (which I know the best), but now it seems that Fortify might have been the right solution. (I've spent hours picking out Tailwind and useless blade components from the Breeze installation.)
However, what's really confusing is that I can't see any of this Fortify configuration with Breeze installed.
Honestly Laravel used to make my life EASIER. What happened to it?
Commented on Login And Registration
I don't understand why you're not putting the registration and login views inside the AuthenticationSessionController?
Commented on Customize Routes
It depresses me so much that Laravel, once the cleanest and most developer friendly PHP framework, has been reduced to manually copying routes from the vendor folder. It feels like Laravel cares more about the beginner than it does the experienced developer. Hiding away routes that any project of reasonable complexity would need exposed.
Same goes for actions. And controllers.
It's crazy. Now we're expected to manually get the project to a point where any normal developer would want it to be out of the box. Or at least through artisan commands.
Laravel, what happened to you? You're such a mess! :'(
|
https://laracasts.com/@JohnnyW2001
|
Issues
ZF-5606: Zend_Db::factory normalizes characters in namespace, so ZendX libraries are not found
Description
on line 240:
$adapterName = strtolower($adapterNamespace . '_' . $adapter);
$adapterName = str_replace(' ', '_', ucwords(str_replace('_', ' ', $adapterName)));
This code makes ZendX_Db_Adapter_Firebird become Zendx_Db_Adapter_Firebird, which can't be found on Linux...
Also a problem for users' own Db libraries.
For now it can be worked around by using standard initialization without the ::factory() method, but it is a bug...
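The case-losing effect of that normalization can be reproduced outside PHP; here is a small Python sketch of the same lowercase-then-capitalize-each-word logic:

```python
def normalize_adapter_name(namespace: str, adapter: str) -> str:
    # mirror of the PHP: strtolower(), then ucwords() on the
    # underscore-separated words, joined back with underscores
    name = (namespace + "_" + adapter).lower()
    return "_".join(word.capitalize() for word in name.split("_"))

print(normalize_adapter_name("ZendX", "Db_Adapter_Firebird"))
# Zendx_Db_Adapter_Firebird -- the capital X is lost, so the class file
# lookup fails on case-sensitive filesystems
```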
Posted by Tobias Petry (ice-breaker) on 2009-03-23T16:07:57.000+0000
Have got the same problem: a self-defined Db adapter has to be used, but the namespace has 3 capital letters (an abbreviation of the project's name).
Posted by Dariusz Sierakowski (darek_si) on 2009-09-09T13:43:50.000+0000
This is my proposed fix for this bug.
Posted by Ralph Schindler (ralph) on 2009-09-20T14:51:22.000+0000
Fixed in trunk in r18328 and in release 1.9 branch at 18329
Posted by Marc Peterson (marcpeterson) on 2009-09-20T19:58:11.000+0000
This patch doesn't create the adapter name properly. Say if you're using PDO_MYSQL you end up looking for Zend_Db_Adapter_PDO_MYSQL, in directory Zend/Db/Adapter/PDO/MYSQL.php. This doesn't exist in case-sensitive operating systems.
You have to strtolower() the adapter before using ucwords() on it, for example: line 247: $adapterName .= str_replace(' ', '', ucwords(strtolower(str_replace('_', ' ', $adapter))));
Posted by Marc Peterson (marcpeterson) on 2009-09-20T21:52:31.000+0000
Never mind, looks like that's the whole point of this patch. Beware that there are lots of examples out there using all-uppercase adapter names in one's application.ini file. This may break a few projects.
|
http://framework.zend.com/issues/browse/ZF-5606?focusedCommentId=34749&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
|
Hi All, I want to move the webdav implementation on by releasing it to cheeseshop, supporting more content types and then working on some more interesting aspects of the protocol itself, and maybe even supporting other applications like Plone if there is interest.
First I need to rename it out of the zope namespace, which was a mistake on my part in calling it this. But this brings up some packaging issues for me that I am not sure how best to resolve in a consistent fashion that I can build on, hence why I have delayed doing anything about this.

Currently everything is in zope.webdav, which includes support for the zope.app.file & zope.app.folder content types, using zope.locking as a locking mechanism and zope.copypastemove for copying and moving content. Each of the packages just mentioned should be optional (but isn't, because of the current setup, which I want to fix).

Now into the future. I want to split this up into a core Zope3 WebDAV implementation, which will handle registering the different WebDAV methods, their implementation, declaring properties, and exception handling, and which I could call z3c.webdav. This module / egg on its own will be pretty useless, as it doesn't know about any of the content or services in Zope3. Then I want each content type and service in Zope to have its own egg, so to speak, for example z3c.webdav.zopeappfolder, z3c.webdav.zopeappfile, z3c.webdav.zopefile, z3c.webdav.zopelocking. A lot of these extra eggs will be small and easy to write, and as more content types are supported by Zope3 I would like more webdav modules to be written for them.

Two other options I can think of, instead of the naming convention above:

1. create a webdav namespace, webdaz (WebDAv for Zope), which would contain webdaz.core, webdaz.zopefile, webdaz.zopelocking etc.
2. name the core webdav implementation z3c.webdav and then create eggs that are named like zope.file.webdav (for zope.file), zope.app.file.webdav, z3c.extfile.webdav, zope.locking.webdav etc., but which are separate from the underlying module.

Does anyone have experience with this type of problem, or does anyone have a preference for which option they like best?

Or are there other options that I have missed that are worth considering, in order for me to move this forward.

Thanks,
Michael
|
https://www.mail-archive.com/zope3-dev@zope.org/msg08200.html
|
There comes a time in every pasty young developer’s life when they realize that they must build an application that stores images in a database, and then serves them up via an ASP.NET MVC Action.
Soon after this realization, they have another realization that, unlike with many other shortcuts to brilliance that MVC readily provides, the purveyors of the Web Stack of Love did not serve this one up on a silver platter carried by a pirate riding a unicorn. What they did do (which is twice as pirate, upon reflection) is make their little framework as extensible as the sea is salty. So basically, unless your liver is gilded with lilies, you can sail forth and build your own ruddy ImageResult.
I have one here.
public class ImageResult : ActionResult
{
    public Stream ImageStream { get; private set; }
    public string Extension { get; private set; }

    public ImageResult(Stream imageStream, string extension)
    {
        this.ImageStream = imageStream;
        this.Extension = extension;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        // set the content type based on the extension
        response.ContentType = string.Format("image/{0}", this.Extension);
        // copy the image stream to the response output stream
        this.ImageStream.CopyTo(response.OutputStream);
        response.End();
    }
}
Here’s the consuming code that shows the ImageResult in action:
public ActionResult UserImage(int id)
{
    using (var db = new ApplicationEntities())
    {
        var user = db.Users.Single(p => p.ID == id);
        // read bytes from database into a stream
        var imageStream = new MemoryStream(user.Image);
        // create an image result using the stream
        return new ImageResult(imageStream, "jpeg");
    }
}
See what I did there? In action.
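One caveat worth noting about building the header as image/{extension}: the file extension is not always the MIME subtype (e.g. "svg" maps to image/svg+xml). A hedged Python sketch of a table-driven lookup, illustrating the idea rather than the article's C# code:

```python
import mimetypes

def content_type_for(extension: str) -> str:
    # naive "image/{ext}" breaks for extensions whose MIME subtype differs;
    # a standard lookup table is safer
    guessed, _ = mimetypes.guess_type("file." + extension.lstrip("."))
    return guessed or "application/octet-stream"

print(content_type_for("jpeg"))  # image/jpeg
print(content_type_for("svg"))   # image/svg+xml
```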
Moving On.
The quick-of-brain among our readership will extrapolate from this that any type of binary file can be served from an Action in this way, with only minor modification to our custom type.
The not-quite-as-quick-of-brain will then reason that this is wicked awesome and that every file should be served this way. To them I will say two things: No, and really this approach should only be used if you need to obtain or manipulate the file in code before serving it. For example, if you need to resize the image, or if access to it is secured in a programmatic way.
So, you know.
Hoist the colors, me hearties.
|
https://code.jon.fazzaro.com/2012/04/30/how-to-serve-an-image-from-an-action-and-otherwise-be-an-mvc-swashbuckler/
|
FeatureRequest: enhancement to ToolTipEvent: caretIndex
timo888 Oct 29, 2009 5:53 AM
Platform: Flex AIR
Use Case:
You have information you want to give the user about any|every individual word in the content of a TextArea (perhaps grammatical information, a translation into another language or languages, or a brief historical blurb, whatever). Using TextArea for its CSS/HTML capabilities.
One needs to know the caretIndex in order to display a custom tooltip for a particular word in the TextArea text. Knowing the caretIndex, you can determine the word underneath it by scanning the TextArea.text property looking for whitespace|punctuation in the vicinity of the caretIndex to pluck out the word.
Could a future version of the ToolTipEvent expose the caretIndex?
Thanks
1. Re: FeatureRequest: enhancement to ToolTipEvent: caretIndex
Flex harUI
Oct 29, 2009 10:53 AM (in response to timo888)
Unless the selection spans several characters, caretIndex == selectionBeginIndex.
Alex Harui
Flex SDK Developer
Adobe Systems Inc.
2. Re: FeatureRequest: enhancement to ToolTipEvent: caretIndex
timo888 Oct 29, 2009 12:29 PM (in response to Flex harUI)
But caretIndex != selectionBeginIndex when the user doesn't click on the TextArea but merely hovers the mouse, right?
Or you can select text and then hover elsewhere.
P.S.
I saw this post in the right margin ("MORE LIKE THIS") as I was replying:
The author suggests this approach:
var idx:int = TextField(ta.getChildAt(2)).caretIndex; //ta is the id of your TextArea
Is it a documented (i.e. written in stone) feature that the TextArea has a child at offset 2? Can this be depended on?
3. Re: FeatureRequest: enhancement to ToolTipEvent: caretIndex
Flex harUI
Oct 29, 2009 1:52 PM (in response to timo888) (1 person found this helpful)
The caret is a specific thing in selection. You're talking about the I-beam cursor position. I should've realized that since you were referencing ToolTipEvent.
I doubt we'd add it to ToolTipEvent since ToolTips aren't always about text.
I still think all you need to do subclass TextArea, get to the internal TextField and call getCharIndexAtPoint()
Alex Harui
Flex SDK Developer
Adobe Systems Inc.
4. Re: FeatureRequest: enhancement to ToolTipEvent: caretIndex
timo888 Oct 29, 2009 3:42 PM (in response to Flex harUI)
Sorry about confusing the terms for the beam and the caret, my bad, not yours. Thanks for the suggestion: I didn't understand that in extending TextArea I'd gain access to an encapsulated TextField. That should get me past one major hurdle.
The next hurdle is that the Flex Tooltips Manager requires the mouse to move outside the related display object before it will show another tooltip for the object, so this makes it essentially unsuitable for what I was trying to do. The user cannot simply move the mouse over one word, and then over another, and keep displaying tool tips one after another, word by word.
So I have to
(a) recreate the wheel by writing timing code to see if the mouse is actually "hovering" over a word and not just moving over it in passing (pretty hard, I expect, inasmuch as words have no focus boundaries)
or
(b) resort to a click-driven or right-click-driven approach.
Option (b) wins.
5. Re: FeatureRequest: enhancement to ToolTipEvent: caretIndex
paul.williams Oct 29, 2009 4:42 PM (in response to timo888) (1 person found this helpful)
I had a go at (a). It probably needs some more work, but looks promising:
package
{
    import flash.events.MouseEvent;
    import flash.utils.clearTimeout;
    import flash.utils.setTimeout;
    import mx.controls.TextArea;
    import mx.managers.ToolTipManager;

    public class TextAreaWithTips extends TextArea
    {
        private var hoverTimeout : uint;
        private var hoverX : int = 0;
        private var hoverY : int = 0;

        public function TextAreaWithTips()
        {
            super();
            addEventListener( MouseEvent.MOUSE_MOVE, onMouseMove );
            addEventListener( MouseEvent.ROLL_OUT, onRollOut );
        }

        private function onMouseMove( event : MouseEvent ) : void
        {
            reset();
            hoverX = event.localX;
            hoverY = event.localY;
            hoverTimeout = setTimeout( showTip, ToolTipManager.showDelay );
        }

        private function onRollOut( event : MouseEvent ) : void
        {
            reset();
        }

        private function reset() : void
        {
            clearTimeout( hoverTimeout );
            toolTip = "";
        }

        private function showTip() : void
        {
            var index : int = textField.getCharIndexAtPoint( mouseX, mouseY );
            if ( index < 0 )
            {
                return;
            }
            var wordStart : int = textField.text.lastIndexOf( " ", index );
            var wordEnd : int = textField.text.indexOf( " ", index );
            var word : String = textField.text.substring( wordStart + 1, wordEnd );
            toolTip = "You are hovering over: " + word;
            ToolTipManager.currentTarget = this;
        }
    }
}
6. Re: FeatureRequest: enhancement to ToolTipEvent: caretIndex - timo888, Oct 30, 2009 5:09 AM (in response to paul.williams)
Cool, Paul. That clearTimeout utility is nifty. Works nicely!
Here's where I had gotten on plucking the word out from beneath the cursor. It doesn't do punctuation delimiters yet, or all kinds of whitespace, but it handles the firstWord|lastWord scenario where there's no space delimiting the word on one side.
I have to look up what lastIndexOf does.
public function getWordUnderMouse():String {
var tlen:int = this.textField.text.length;
var pos:int = this.textField.getCharIndexAtPoint(this.mouseX, this.mouseY);
if (pos > tlen || pos==-1){
word="";
return word;
}
if (text.substr(pos,1)==SPACE) {
word="";
}else{
var s:String = text.substr(pos,1);
while ( (s != SPACE) && pos > 0 && pos < text.length)
{
pos -= 1;
s = text.substr(pos,1);
}
var pos2:int = text.length;
if (pos < text.length)
pos2=text.indexOf( SPACE, pos+1)
if (pos2 > 0) {
word = text.substring(pos,pos2);
} else {
word = text.substring(pos, text.length);
}
}
return word;
}
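For reference, the lastIndexOf/indexOf pair used in showTip above can be sketched in JavaScript, whose String API matches ActionScript 3 for these calls (wordAt is a hypothetical helper name):

```javascript
// Given a character index, bracket the surrounding word:
// lastIndexOf searches backwards from `index` for the space before the word,
// indexOf searches forwards for the space after it.
function wordAt(text, index) {
  if (index < 0 || index >= text.length || text[index] === ' ') return '';
  const start = text.lastIndexOf(' ', index); // -1 if the word starts the string
  let end = text.indexOf(' ', index);
  if (end === -1) end = text.length; // word runs to the end of the string
  return text.substring(start + 1, end);
}

console.log(wordAt('hover over a word', 13)); // "word"
```

Note that, like the forum code, this only treats a plain space as a delimiter; punctuation and other whitespace would need extra handling.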
https://forums.adobe.com/thread/514807
Not to be confused with Lifecycle Hooks, Hooks were introduced in React in v16.7.0-alpha, and a proof of concept was released for Vue a few days after. Even though it was proposed by React, it’s actually an important composition mechanism that has benefits across JavaScript framework ecosystems, so we’ll spend a little time today discussing what this means.
Mainly, Hooks offer a more explicit way to think of reusable patterns — one that avoids rewrites to the components themselves and allows disparate pieces of the stateful logic to seamlessly work together.
The initial problem
In terms of React, the problem was this: classes were the most common form of components when expressing the concept of state. Stateless functional components were also quite popular, but due to the fact that they could only really render, their use was limited to presentational tasks.
Classes in and of themselves present some issues. For example, as React became more ubiquitous, stumbling blocks for newcomers did as well. In order to understand React, one had to understand classes, too. Binding made code verbose and thus less legible, and an understanding of
this in JavaScript was required. There are also some optimization stumbling blocks that classes present, discussed here.
In terms of the reuse of logic, it was common to use patterns like render props and higher-order components, but we’d find ourselves in similar “pyramid of doom” — style implementation hell where nesting became so heavily over-utilized that components could be difficult to maintain. This led me to ranting drunkenly at Dan Abramov, and nobody wants that.
Hooks address these concerns by allowing us to define a component’s stateful logic using only function calls. These function calls are more compose-able and reusable, and allow us to express composition in functions while still accessing and maintaining state. When hooks were announced in React, people were excited — you can see some of the benefits illustrated here, with regards to how they reduce code and repetition:
Took @dan_abramov's code from #ReactConf2018 and visualised it so you could see the benefits that React Hooks bring us. pic.twitter.com/dKyOQsG0Gd
— Pavel Prichodko (@prchdk) October 29, 2018
In terms of maintenance, simplicity is key, and Hooks provide a single, functional way of approaching shared logic with the potential for a smaller amount of code.
Why Hooks in Vue?
Why Hooks? We have mixins for composition where we can reuse the same logic for multiple components. Problem solved.
I thought the same thing, but after talking to Evan You, he pointed out a major use case I missed: mixins can’t consume and use state from one to another, but Hooks can. This means that if we need chain encapsulated logic, it’s now possible with Hooks.
Hooks achieve what mixins do, but avoid two main problems that come with mixins:
- They let us pass state from one hook to another.
- They make it clear where logic is coming from.
So, how does that work in Vue? We mentioned before that, when working with Hooks, logic is expressed in function calls that become reusable. In Vue, this means that we can group a data call, a method call, or a computed call into another custom function, and make them freely compose-able. Data, methods, and computed now become available in functional components.
Example
Let’s go over a really simple hook so that we can understand the building blocks before we move on to an example of composition in Hooks.
useWat?
OK, here’s where we have, what you might call, a crossover event between React and Vue. The
use prefix is a React convention, so if you look up Hooks in React, you’ll find things like
useState,
useEffect, etc. More info here.
In Evan’s live demo, you can see where he’s accessing
useState and
useEffect for a render function.
If you’re not familiar with render functions in Vue, it might be helpful to take a peek at that.
But when we’re working with Vue-style Hooks, we’ll have — you guessed it — things like:
useData,
useComputed, etc.
So, in order for us to look at how we’d use Hooks in Vue, I created a sample app for us to explore.
In the src/hooks folder, I’ve created a hook that prevents scrolling on a
useMounted hook and reenables it on
useDestroyed. This helps me pause the page when we’re opening a dialog to view content, and allows scrolling again when we’re done viewing the dialog. This is good functionality to abstract because it would probably be useful several times throughout an application.
import { useDestroyed, useMounted } from "vue-hooks"; export function preventscroll() { const preventDefault = (e) => { e = e || window.event; if (e.preventDefault) e.preventDefault(); e.returnValue = false; } // keycodes for left, up, right, down const keys = { 37: 1, 38: 1, 39: 1, 40: 1 }; const preventDefaultForScrollKeys = (e) => { if (keys[e.keyCode]) { preventDefault(e); return false; } } useMounted(() => { if (window.addEventListener) // older FF window.addEventListener('DOMMouseScroll', preventDefault, false); window.onwheel = preventDefault; // modern standard window.onmousewheel = document.onmousewheel = preventDefault; // older browsers, IE window.touchmove = preventDefault; // mobile window.touchstart = preventDefault; // mobile document.onkeydown = preventDefaultForScrollKeys; }); useDestroyed(() => { if (window.removeEventListener) window.removeEventListener('DOMMouseScroll', preventDefault, false); //firefox window.addEventListener('DOMMouseScroll', (e) => { e.stopPropagation(); }, true); window.onmousewheel = document.onmousewheel = null; window.onwheel = null; window.touchmove = null; window.touchstart = null; document.onkeydown = null; }); }
And then we can call it in a Vue component like this, in AppDetails.vue:
<script>
import { preventscroll } from "./../hooks/preventscroll.js";
...
export default {
  ...
  hooks() {
    preventscroll();
  }
}
</script>
We’re using it in that component, but now we can use the same functionality throughout the application!
Two Hooks, understanding each other
We mentioned before that one of the primary differences between hooks and mixins is that hooks can actually pass values from one to another. Let’s look at that with a simple, albeit slightly contrived, example.
Let’s say in our application we need to do calculations in one hook that will be reused elsewhere, and something else that needs to use that calculation. In our example, we have a hook that takes the window width and passes it into an animation to let it know to only fire when we’re on larger screens.
In the first hook:
import { useData, useMounted } from 'vue-hooks';

export function windowwidth() {
  const data = useData({
    width: 0
  })

  useMounted(() => {
    data.width = window.innerWidth
  })

  // this is something we can consume with the other hook
  return {
    data
  }
}
Then, in the second we use this to create a conditional that fires the animation logic:
// the data comes from the other hook
export function logolettering(data) {
  useMounted(function () {
    // this is the width that we stored in data from the previous hook
    if (data.data.width > 1200) {
      // we can use refs if they are called in the useMounted hook
      const logoname = this.$refs.logoname;
      Splitting({ target: logoname, by: "chars" });

      TweenMax.staggerFromTo(".char", 5,
        {
          opacity: 0,
          transformOrigin: "50% 50% -30px",
          cycle: {
            color: ["red", "purple", "teal"],
            rotationY(i) {
              return i * 50
            }
          }
        }, ...
Then, in the component itself, we’ll pass one into the other:
<script>
import { logolettering } from "./../hooks/logolettering.js";
import { windowwidth } from "./../hooks/windowwidth.js";

export default {
  hooks() {
    logolettering(windowwidth());
  }
};
</script>
Now we can compose logic with Hooks throughout our application! Again, this is a contrived example for the purposes of demonstration, but you can see how useful this might be for large scale applications to keep things in smaller, reusable functions.
Future plans
Vue Hooks are already available to use today with Vue 2.x, but are still experimental. We’re planning on integrating Hooks into Vue 3, but will likely deviate from React’s API in our own implementation. We find React Hooks to be very inspiring and are thinking about how to introduce its benefits to Vue developers. We want to do it in a way that complements Vue’s idiomatic usage, so there’s still a lot of experimentation to do.
You can get started by checking out the repo here. Hooks will likely become a replacement for mixins, so although the feature is still in its early stages, it’s probably a concept that would be beneficial to explore in the meantime.
(Sincere thanks to Evan You and Dan Abramov for proofing this article.)
This is beautiful. I love how you once again show how these libraries build upon each other to achieve the same goal for people with different architectural taste. I hope people can learn from this and stop speaking ill about a technology that doesn’t fit their structural preference
Kudos for this! I have always struggled when revisiting Vue components that make use of one or more mixins, don’t know where data is coming from, even when I wrote those mixins! I think vue-hooks will help keep making Vue beautiful (no pun intended, sorry).
Hooks r useless because there are mixins in Vue
Thanks, Sarah! This clears up a lot of questions I had about using Hooks with Vue, and what the main benefits are.
But why not just fix the problem? If the problem is that mixins cannot do something,…. make them able to do it.
Why do I get the feeling that the JavaScript community is wasting a metric ton of time and effort re-inventing the wheel of application design all because A) JavaScript is not a particularly well designed language and B) the creators of React were solving the problems they had, not designing a system to be used by the real world.
Hooks sound like PHP’s traits; they are glorified globals and globals are generally an extremely poor design choice. Easy to use and they can be a lifesaver, but the trend in React is that hooks rule bigtime and React is showing its naivety by pushing hooks as a solution to a problem that React has. It’s not a programming problem, any language has solved this a long time ago, it’s a React problem.
https://css-tricks.com/what-hooks-mean-for-vue/
This is a C++ Program to solve a matching problem. Given N men and N women, where each person has ranked all members of the opposite sex in order of preference, marry the men and women together such that there are no two people of opposite sex who would both rather have each other than their current partners. If there are no such people, all the marriages are “stable”.
Here is source code of the C++ Program to Solve a Matching Problem for a Given Specific Case. The C++ program is successfully compiled and run on a Linux system. The program output is also shown below.
// C++ program for stable marriage problem
#include <iostream>
#include <string.h>
#include <stdio.h>
using namespace std;
// Number of Men or Women
#define N 4
// This function returns true if woman 'w' prefers man 'm1' over man 'm'
bool wPrefersM1OverM(int prefer[2*N][N], int w, int m, int m1)
{
    // Check whether w prefers her current engagement m1 over m
    for (int i = 0; i < N; i++)
    {
        // If m1 comes before m in w's list, then w prefers her
        // current engagement; don't do anything
        if (prefer[w][i] == m1)
            return true;

        // If m comes before m1 in w's list, then free her current
        // engagement and engage her with m
        if (prefer[w][i] == m)
            return false;
    }
    // Not reached if both m and m1 appear in w's list
    return false;
}
// Prints stable matching for N boys and N girls. Boys are numbered as 0 to
// N-1. Girls are numbered as N to 2N-1.
void stableMarriage(int prefer[2*N][N])
{
    // Stores partner of women. This is our output array that
    // stores pairing information. The value of wPartner[i]
    // indicates the partner assigned to woman N+i. Note that
    // the woman numbers are between N and 2*N-1. The value -1
    // indicates that (N+i)'th woman is free
int wPartner[N];
// An array to store availability of men. If mFree[i] is
// false, then man 'i' is free, otherwise engaged.
bool mFree[N];
// Initialize all men and women as free
memset(wPartner, -1, sizeof(wPartner));
memset(mFree, false, sizeof(mFree));
int freeCount = N;
// While there are free men
    while (freeCount > 0)
    {
        // Pick the first free man (we could pick any)
        int m;
        for (m = 0; m < N; m++)
            if (mFree[m] == false)
                break;

        // One by one go to all women according to m's preferences
        for (int i = 0; i < N && mFree[m] == false; i++)
        {
            int w = prefer[m][i];

            // If the woman of preference is free, w and m become
            // partners (the partnership may still change later,
            // so they are engaged, not married)
            if (wPartner[w - N] == -1)
            {
                wPartner[w - N] = m;
                mFree[m] = true;
                freeCount--;
            }
            else // If w is not free
            {
                // Find current engagement of w
                int m1 = wPartner[w - N];

                // If w prefers m over her current engagement m1,
                // break the engagement between w and m1 and
                // engage m with w
                if (wPrefersM1OverM(prefer, w, m, m1) == false)
                {
                    wPartner[w - N] = m;
                    mFree[m] = true;
                    mFree[m1] = false;
                }
            }
        }
    }

    // Print the solution
    cout << "Woman   Man" << endl;
    for (int i = 0; i < N; i++)
        cout << " " << i + N << "\t" << wPartner[i] << endl;
}

// Driver program to test the above functions
int main()
{
    int prefer[2 * N][N] = { {7, 5, 6, 4}, {5, 4, 6, 7}, {4, 5, 6, 7}, {4, 5, 6, 7},
                             {0, 1, 2, 3}, {0, 1, 2, 3}, {0, 1, 2, 3}, {0, 1, 2, 3} };
    stableMarriage(prefer);
    return 0;
}
Output:
$ g++ MatchingProblem.cpp
$ a.out
Woman   Man
 4      2
 5      1
 6      3
 7      0
------------------
(program exited with code: 0)
Press return to continue
Sanfoundry Global Education & Learning Series – 1000 C++ Programs.
Here’s the list of Best Reference Books in C++ Programming, Data Structures and Algorithms.
https://www.sanfoundry.com/cpp-program-solve-matching-problem-given-specific-case3/
|
If you’re constantly moving your Raspberry Pi projects from place to place this tutorial will show you how to automate a Python script so you can take your pi on the go!
I found this especially helpful when working with the Pi Zero because you won’t need to crank out the adapters, ethernet adapter etc to be able to run your Python scripts when out.
Step 1: Updating and Installing GPIO Zero
I'm assuming you can access the terminal of your Raspberry Pi (ssh,VNC, on a monitor … etc)
Run these commands in order
Let’s update the Pi and install the newest version of GPIO Zero for Python 3 by running the following commands:
- sudo apt-get update
- sudo apt-get install python3-gpiozero
Step 2: Make a Directory
Make a directory (folder) to store your Python scripts. I will create one named pythonTest:
- sudo mkdir pythonTest
then navigate to the directory using this command:
- cd pythonTest
Step 3: Create the Python Script
Create a Python 3 script named blinky.py that will blink the LED once the Pi boots up, using the following command:
- sudo nano blinky.py
Here's the code :
from gpiozero import LED
from time import sleep
led = LED(4)
while True:
led.on()
sleep(1)
led.off()
sleep(1)
Control + X then press Y to save the file
Step 4: Wire the LED to Your Raspberry Pi
Wire the LED as shown on the Fritzing diagram above
Step 5: Create the Launching Script
Next we need to create the script that will tell the Raspberry Pi where our blinky Python script is located so it can run on boot.
Type:
- sudo nano launcher.sh
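The tutorial doesn't show the script body; a minimal launcher.sh, assuming the pythonTest directory and blinky.py filename from the earlier steps, might look like this:

```shell
#!/bin/sh
# launcher.sh - change to the script directory and start the blink script
# (paths assume the pythonTest directory created in Step 2)
cd /home/pi/pythonTest
sudo python3 blinky.py
```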
Control + X then press the y key to save it
Step 6: Make the Launcher Script Executable
In Linux, each file has a set of permissions defining who can access it and how. We need to make the launcher.sh script executable and readable with the command:
- sudo chmod 755 launcher.sh
chmod stands for "change mode" used to define the way a file can be accessed.
755 means the owner may read, write, and execute the file, while the group and others may read and execute it. More information about chmod 755 can be found here.
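As a quick illustration (demo.sh is just a hypothetical file name), the numeric and symbolic forms of chmod are interchangeable:

```shell
# Each octal digit of 755 is an rwx bit mask, one per class:
# 7 = rwx (owner), 5 = r-x (group), 5 = r-x (others)
touch demo.sh
chmod 755 demo.sh              # numeric form
chmod u=rwx,g=rx,o=rx demo.sh  # symbolic form, identical result
ls -l demo.sh                  # shows -rwxr-xr-x
```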
Now test it by typing the following command; this should make your LED blink!
- sh launcher.sh
Control + C should stop the script from running
Step 7: Make a Debugging Directory
Create a directory called logs for storing error information in case the script fails to run on boot.
Navigate back to the home directory by typing:
- cd
Next type:
- sudo mkdir logs
Step 8: Configure the Scripts to Run on Boot With Cron
Now let’s access the crontab, a background (daemon) process that lets you execute scripts at specific times, so we can run our script on boot. Learn about Cron here!
Type:
- sudo crontab -e
then paste in the following at the bottom of the file
- @reboot sh /home/pi/pythonTest/launcher.sh > /home/pi/logs/cronlog 2>&1
Control + X to save it
Step 9: TEST IT ! Reboot Your Pi
Make sure your LED is correctly wired to pin 4 and Ground then
Reboot your pi
- sudo reboot
wait a minute or two for the Pi to boot up; then, if it worked, your LED should blink!
If your script did not work you can check the debugging folders we made by typing
- cd logs
then read the cronlog that should have any errors that occurred on boot using the command cat
- cat cronlog
http://www.instructables.com/id/ShareCreate-Edit-Files-Between-Your-Mac-and-Raspbe/
|
100DaysOfGatsby
1 Jan 2020
- gatsbyJs
Day 47
Feb 16 2020
Sad times today. I've released a first version of gatsby-plugin-prop-shop.
but... i wasn't anticipating failure, alpha tests have kind of suggested that PropShop is a bit useless 😂
When i built it i was thinking, this is great, i know i've needed functionality like this on many a project... but that's because i develop component libraries for my day job.
It would appear as though if you're developing your own blog or a theme, PropTypes and descriptions aren't really a priority. I somewhat disagree if you're building a theme because you'll want your users to know what each prop does in case they shadow the component and documenting the props would be very helpful in this instance.
However, what i learned from my alpha tests are that not a lot of people bother with PropTypes 😥.
I can understand this if you're developing in
.js because there's no real reason other than for completeness to spend time writing out types and descriptions, if you're using TypeScript it's of course an absolute must! But from the alpha test PR's i created i noticed not a lot of Gatsby developers are using TypeScript.
All of this was an oversight to be honest because i was just thinking about what stack i use and how i approach a build.
I'm a huge proponent of Component Driven Development and correctly typing props and documenting them is a big part of this.
The other oversight is a pattern i've noticed where GraphQL queries are created using the new-ish Gatsby
useStaticQuery method and then abstracted away in to a utils directory and then imported as and when required.
When using this method passing props around needn't occur at all as everything is returned within the component where the query is being used. I suspect in TypeScript you'd need to type the return but in
.js land everything is so loose you can leave this un-typed and pass the data straight onto the
.jsx no questions asked. 😬
So with that said, i've released a
0.0.1 anyway and tweeted a few people and rather than get hung up on failures, i've dusted myself off and come up with a new plan.
I'm going to build another theme!
I have for a while now been wanting to transfer my current commercial portfolio from Next.js to Gatsby and i had intended to use gatsby-theme-gatstats for both my portfolio and my blog.
but...
After digging around in the GatStats repo after having learned a lot from my recent work i think GatsStats needs a bit of an overhaul... i'm not sure when i'll tackle this and attempting to rip out methods to replace them with some new things i've learned might become messy so i've decided to start a new theme and i'll try out some new methods and see how it goes.
I'm not too disappointed with GatStats, it was my first theme after all, in fact it was the first thing i built using Gatsby so there were mistakes made along the way but that's just all part of the process ay!
Day 46
Feb 15 2020
It's been an absolute belter of a day today!
I started the day by refactoring some quite gross 🤢 functional programming i'd done in gatsby-plugin-prop-shop and i think second time round i've made the "totals" object a lot neater and streamlined the amount of times the GraphQL array needs to be looped over.
There's probably a significant performance increase but i'm not too worried about testing that at the moment.
Next up was to find a Gatsby theme written in TypeScript so i could test PropShop with
.tsx files or more specifically
interfaces
Fellow Gatsby fan Rich Haines has a sweet little theme for adding
SEO data to your blog, you can see it here.
So i forked that repo and added gatsby-plugin-prop-shop ⚡
There are some differences between the way react-docgen returns data from PropTypes vs interface(s), the main one being with PropTypes all data objects are returned with an
id, with TypeScript they don't, which was throwing my table header out of whack.
To resolve this instead of trying to get data from the GraphQL query i just manually created an array of table headers and that was that sorted.
I then forked theme-ui 😬 and set about adding PropShop to the docs so that i could see PropTypes for theme-ui/components. It worked first time amazingly even though the components directory is in a different package!
I did have an odd style issue so i've set the
prop-shop page wrapper to be
position: absolute so no matter which site or blog PropShop is used in that page will always cover anything else defined in a default layout.
I've also written a blog post about PropShop, which if you're interested you can read here
I'm still very much in the alpha phases of development and i now have x4 PR's open and am looking for feedback before i'm ready to release a first version of the plugin so if you're reading this and want to help, tweet me @pauliescanlon
Day 45
💖 Feb 14 2020
Today was a bit of a maintenance day on gatsby-plugin-prop-shop there were a few things i'd noticed as part of my alpha tests as mentioned yesterday with styles not quite being as i wanted them so i've been over all the css and tidied up a few things.
I also took another stab at the filtering, previously if you start to type you can filter out files containing props that don't match your search term but props from the same file still appeared in the list. The new filtering system hides any prop from any file that doesn't match the search term which makes using PropShop a much better experience.
I've still got a bit more tidying up to do before i release
0.0.1 and ideally i'd be able to see both PR's i created for Scott Spence and Eric Howey on their respective projects completed.
On that note however, when speaking with Eric he mentioned he'd not really used PropTypes that much and looking at his project i can see why. A lot of the components use a hook to grab the data so in theory all the components manage their own data source internally and don't require props to be passed in. It's a pretty sweet approach and i might "borrow" it for gatsby-theme-gatstats
I've also been thinking about how PropTypes are kinda optional in straight
.js world so perhaps a lot of people developing Gatsby projects won't even use them if they're developing in
.js...
.ts however you have to have props defined in an "interface" or as a "type" or you get IDE errors left, right and center so i think this will be the next thing i investigate.
It looks like from the react-docgen docs that TypeScript "type" is supported... not sure about "interface" so i'll look into it and see what's what.
Day 44
Feb 13 2020
🚨 We have an alpha release of PropShop!
Yup, Yup, Yup. Today i've released an alpha version of my new Gatsby plugin gatsby-plugin-prop-shop
As you can see it's been renamed from "Project Prop" to PropShop... watta you think?
I started the day by forking a couple of repos from trusted Gatsby dev pals, Scott Spence and Eric Howey and added the plugin to their
gatsby-config.
Problem No.1 was i'd set the path for
gatsby-source-filesystem to
path.resolve(src/...
) which will resolve to the demo running the plugin not the theme where it's installed. In most cases theme developers will want to inspect PropTypes for components in their theme not in the demo running the theme.
That's now changed to accept a full path of where to find
.js files containing PropTypes.
Problem No.2 was as suspected style related. I was using
gatsby-plugin-theme-ui for the styles but when running the plugin in a theme or, demo of a theme that was also using
gatsby-plugin-theme-ui the theme object was getting overwritten and i was losing all my styles because the theme or demo theme wins out.
The resolve was to still use
theme-ui but not the Gatsby plugin version which meant i needed to wrap the
prop-shop.js page in a
<ThemeProvider /> ... no big deal there.
I have seen a few other issues where some of my styles pick up the theme styles, things like global typography rules do affect my typography rules if i've left a css value unset. I think i can sort this by being more strict with my styles and setting things that i was taking for granted as browser defaults.
So there ya go. If you're into PropTypes and documenting components PropShop can give you a helping hand!
Enjoy!
Day 43
Feb 12 2020
So close now!
Happy to report i've very nearly completed all work required to launch an MVP of "Project Prop" looking at what i've done i'm confused as to why it's taken me so long 😂. That said, i'm finished with all design, all styles and i only have a tiny bit more functionality to add to allow for table sorting but i might not launch with this as i'm keen to get testing.
I've got the build pipeline sorted in Netlify, prepared a nice open graph image but will need to add some SEO back in to the demo but that shouldn't be too much work and i've started writing the README.
After that all i really need to do is publish it on npm... then let the world know... and by world i mean my 400 followers on Twitter 😔
That's it for today, no real problems or headaches i just cracked on and all is well!
Day 42
Feb 11 2020
Another good day today! 😃 I did have to do some traveling so lost a few hours of sweet sweet developing time but i think i've made up for it.
After yesterdays progress with the design of "Project Prop" i've moved back to development and have implemented all the new styles apart from some small data visualizations which i might try and attempt in pure css. I think i have a repo for an old project where i created some css only donut charts.
One thing that continues to play on mind is that i'll really need to test this plugin against multiple Gatsby projects.
Whilst i'm confident in the JavaScript because i'm using react-docgen to generate all the data to fill the prop table i'm unsure how the styles will cascade.
To give you some context "Project Prop" is a plugin but it works like a theme.
When you install it and spin up your project you can visit eg. The plugin has already generated the page that will appear on this route as it gets bundled in with your site or blog build as part of the
gatsby build /
gatsby develop process... but so too will the styles.
I've tried to name tokens in my theme-ui config in such a way they won't clash by pre-fixing them with a "pp" but given that this is a theme and themes are supposed to provide all the styles for the project i'm worried they'll try and overwrite styles defined by the user.
I'm currently developing a demo using the Gatsby starter blog and it works ok but it doesn't implement theme-ui so i'm not sure if they'll be complications further down the line.
I think what i'll do is release a
0.0.0-alpha version then perhaps pull some ready made Gatsby themes and see what happens when i add this plugin... should be fun!
Day 41
Feb 10 2020
Feeling good today. 😃
I put in a proper shift and made some excellent design progress on "Project Prop" i have the MVP design and interactions all sorted and i even managed to design a logo.
I know from previous experiences that for me, attempting to design in browser doesn't really do me any favours but i do like to kind of prototype stuff and then if i think it's a good idea i go ahead and design that functionality. Better that then designing a load of things and then finding out they can't be done... seems like a waste of time!
Having said that, there's a few extra things i've put in the design which i haven't prototyped, namely the sorting behavior of the "PropTable". It will probably be fairly straight forward so for now i'm happy for this to be part of the MVP.
That's kind of it for today. Design is not my favorite thing to do but it is always good to dust off the old skills and...
i reckon i still got it! 😎
Day 40
Feb 9 2020
Sometimes i really struggle with functional programming, i think it's because i'm not from that background or maybe my brain is wired differently, but the problem i was having yesterday with filtering a deeply nested array was quite easily solved with the
array.some method.
some tests whether at least one element in the array passes the test implemented, this was all i was really looking to do, but instead i was messing around with
map which of course returns a new array, and then trying to filter that.
some returns a boolean value and from there you can do what you want with the return value.
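The filtering described above can be sketched like this (the data shape and the filterByProp name are hypothetical, just to illustrate the .some approach): keep a file entry only when at least one of its props matches the search term.

```javascript
const files = [
  { file: 'Button.js', props: [{ name: 'variant' }, { name: 'onClick' }] },
  { file: 'Card.js', props: [{ name: 'title' }] },
];

// .some short-circuits to true on the first matching child, so the parent
// object survives the filter without mapping to a new array first
const filterByProp = (items, term) =>
  items.filter((item) => item.props.some((prop) => prop.name.includes(term)));

console.log(filterByProp(files, 'var')); // only the Button.js entry remains
```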
Anyway, i now have all the search and filtering functionality implemented to create what i suppose will be an MVP of "Project Prop".
I have an overall plan of what i'd like "Project Prop" to do but there's a little part of me that is working it out as i go and as i'm using it i'm thinking about what other functionality should be added in.
I have to go back to the design phase a little bit now as i'm not sure where some of the filter options should be positioned and this is way easier in design that it is to move styles and props around in code.
So tomorrow's plan will be just that, a little bit of pixel pushing then back on to the code. I also need a logo at some point so that'll be fun!
Day 39
Feb 8 2020
Not an overly productive day today. The good news is i'm finally back on "Project Prop", the bad news is I’ve run out of JavaScript talent.
Maybe I’m just having one of those days because I can’t for the life of me work out how to do some pretty tricky filtering. I have a search input kinda thing working but I need to filter a nested array from an array of objects and still return the parent object if it’s child array contains what I’m filtering against, but filter out anything that’s not a match for the filter.
Hopefully a good nights sleep and an offline re-think will get me over this small hurdle.
It’s probably the most important part of "Project Prop" as it’s what will be used to aid users in inspecting prop details / descriptions.
Part of me wants to move on to another part of the project, the bit that I’ll probably enjoy more as it’ll be mostly data visualization but I feel giving up on this first issue before I’ve solved the filtering problem won’t help me in the long run.
Day 38
Feb 7 2020
Challenge accepted and challenge completed.
gatsby-theme-gatstats now has a contact page hooked up with Netlify's built-in form service ... which is truly magical!
I've completed the work on the components used in the demo contact page and set it up using Formik and i've also thrown in some sweet validation using Yup
All the components used in the demo are available as shortcodes so you don't need to import anything to use them in the theme... but because this is a theme i've not included Formik or Yup.
I feel at this point it's best to leave that up to whoever is using the theme to decide how they'd like to handle forms and form validation.
I was thinking about adding a few more form components to the theme shortcodes, radio or checkboxes might be handy, but until i hear from end users i'll shelve development on the theme for a while. I really want to get back to "Project Prop" and i still need to update all the posts in my commercial portfolio, which i'm putting off... but... it was great to finally take part in one of the actual challenges set out by #100DaysOfGatsby
Day 37
Feb 6 2020
Today was another travel day and i think there'll be a few more before i find my vibe. But, i have done some work today on the form elements.
I've created form, text input and text area components. All work via "shortcodes" because they've been added to the components object of the `MDXProvider`, which means they can be used in any `.mdx` file within the theme, no imports required!
All of the components in gatsby-theme-gatstats are documented in Storybook and i've tried to write a few unit tests where applicable.
This will probably be a job for tomorrow now, then i can look in to the magic that is Netlify's built-in form service
Day 36
Feb 5 2020
i didn't have an opportunity today to actually write any code but i did do some thinking about a conversation i had on Twitter last night.
It started because it dawned on me that i've not actually completed any of the tasks set out by #100DaysOfGatsby. This is largely due to having spent some months working with Gatsby and i feel i've already accomplished the first few tasks in one way or another...
...until now!
I noticed the challenge for week 6 was to integrate a serverless form solution, this is something that might be useful in my theme gatsby-theme-gatstats as currently there's nothing in the demo relating to a "contact me" page.
I had a look over the links in the challenge email and there's a mention of using Netlify's built-in form service. At first glance it appears to be really straightforward: all you really need to do is add an attribute to a form element and Netlify does the rest.
Now, this sounds great if you're building your own blog or developing a site for someone else, but i did wonder how it would work within a theme.
I've come to the conclusion that the theme doesn't really need to expose this functionality as it then limits the theme's users to only using Netlify forms. Instead i think what the theme can do is provide some UI components that will work if they're dropped in to an `.mdx` file... yesssss the joy of `.mdx`
So that's what i'm gonna do. I'll probably create two new components: an input field that will spread the props for an html input element, and a text area component, probably the same deal with the props. The theme is written in TypeScript so i might need to explicitly define the "netlify" attribute. Perhaps i'll create a "contact" page in the demo site, plus add some documentation to the README, and because i'm using Storybook to develop the components i can also add some notes in there about how to use the form components.
I think that's the theory done, now i just need to write some code.
Day 35
Feb 4 2020
Today was a travel day...
London Heathrow - Colombo Sri Lanka. I'll be here for the next 30 days working on various Gatsby related things.
For those of you who don't know, i'm a contractor, i have been self employed for nearly a decade now and have learnt along the way it's important to take time off to up-skill.
The main reason is when you're a contractor, you're the person "they" bring in because you know what you're doing... but i do sometimes find that whilst on contract i'm not really learning anything.
Of course working is great as i need to earn a living but if i only work and never up-skill, at some point i'll be out of touch with what's going on.
This is my main reason for taking the next 30 days off to up-skill. Granted i could have done this in England but why not travel and see the world and more importantly ... be warm!
The next few days might not be that productive as i've got a bit of admin to take care of, but i did manage to do some work on the flight without internet access, and because of this i decided to finally transfer my commercial portfolio over from a site i built using next.js and a self hosted headless version of Ghost CMS on digital ocean.
It was actually the reason i built my theme in the first place. I wanted to have a commercial portfolio and a blog that shared the same look and feel. I also wanted to go serverless, which led me to discovering Gatsby.
I've pulled down all the content i need from my CMS and have started to create new posts in the new site which uses my theme gatsby-theme-gatstats. It's quite a boring task if i'm honest but i think i'll just chip away at it over the next few days as i can do most of the work without internet access.
That's all for today. I'm hungry and tired!
Day 34
Feb 3 2020
A fantastic day today! I've been continuing to problem solve an issue with gatsby-mdx-embed and i'm pleased to report, it's fixed!
The way the plugin works is by using the `wrapRootElement` api in both `gatsby-browser.js` and `gatsby-ssr.js`, which does what it says on the tin and wraps the root element of the site with a component from the plugin.
In the case of gatsby-mdx-embed it's wrapping the root element with the `MdxEmbedProvider`. This in turn wraps the root element with an `MDXProvider` which handles the passing of React components from `.mdx` and executes them so they render in the dom.
In all cases when running `gatsby develop` this "wrapping" was working as expected, but users were seeing issues when running `gatsby build`.
I started my bug fixing investigation by adding a `div` with a border around the `MdxEmbedProvider` and found that when i ran `gatsby develop` it was there, but not when running `gatsby build`.
This explains why users were seeing the problems with the components not rendering as expected but it took a little more investigation to work out why.
It turned out in my case to be the method i was using to export the `MdxEmbedProvider` in `gatsby-browser.js` and `gatsby-ssr.js`, see below:
```javascript
// gatsby-browser.js
export { wrapRootElement } from "./wrapRootElement"

// gatsby-ssr.js
export { wrapRootElement } from "./wrapRootElement"
```
and wrapRootElement looks like this
```tsx
// wrapRootElement.tsx
import React, { FunctionComponent } from "react"
import { MdxEmbedProvider } from "./components/MdxEmbedProvider"

interface IWrapRootElement {
  element: React.ReactNode
}

export const wrapRootElement: FunctionComponent<IWrapRootElement> = ({
  element,
}) => <MdxEmbedProvider>{element}</MdxEmbedProvider>
```
The first thing i changed was the name of this "component" because `wrapRootElement` is part of the gatsby api and i wasn't sure if something was getting screwed up due to a clash in names. I've since changed the name of this file to `provider.tsx`.
The second thing i looked at was the method of exporting the module, this now looks like the below
```javascript
// gatsby-browser.js
exports.wrapRootElement = require(`./provider`)

// gatsby-ssr.js
exports.wrapRootElement = require(`./provider`)
```
and the provider now looks like this
```tsx
// provider.tsx
import React from "react"
import { MdxEmbedProvider } from "./components/MdxEmbedProvider"

interface IProviderProps {
  element: React.ReactNode
}

module.exports = ({ element }: IProviderProps) => (
  <MdxEmbedProvider>{element}</MdxEmbedProvider>
)
```
Whilst i think i could have kept the ES6 method for exporting in `gatsby-browser.js`, it wasn't working in `gatsby-ssr.js`.
Changing the method in both files seems to have done the trick.
At this point i released `0.0.16`, but then due to some weirdness with re-naming the file from Provider.tsx to provider.tsx i had to release `0.0.17`. Either way this is now fixed! Here's the full run down #11
Boogy time!
Day 33
Feb 2 2020
Kind of productive day today. I'm continuing to bug fix gatsby-mdx-embed.
At this point however i'm not entirely sure there is a bug with my plugin. I've spun up a minimal repo and installed all the relevant peer dependencies and have installed one of the "problem" plugins that was reported to have been causing issues.
So far i can't re-create the problem. Which leads me to believe it's not a problem with my plugin but might be related to having a project that uses multiple `MDXProvider`s.
I did update the README and have explained how to manually wrap the `MDXRenderer` with the `MdxEmbedProvider`, which is effectively what the plugin is doing, and it does appear that importing and wrapping manually solves all the problems.
I'll continue to investigate this tomorrow.
Day 32
Feb 1 2020
Today was a bug investigation day. I've had two issues raised now relating to gatsby-mdx-embed and what appears to be a "clash of the providers".
The plugin works by wrapping a site's `MDXRenderer` with its own `MDXProvider`. This provider is called `MdxEmbedProvider`, and is handled by a `gatsby-ssr` method called `wrapRootElement`.
This is the core of the plugin and means that by adding `@pauliescanlon/gatsby-mdx-embed` to your `gatsby-config` the plugin can catch any components referenced in `.mdx` files and convert them into the components defined by the plugin.
Unfortunately, when other gatsby plugins used in a site also use this method, it prevents the custom `MdxEmbedProvider` from working correctly.
There is one work around that seems to do the trick, and that's to manually wrap the `MDXRenderer` with the custom `MdxEmbedProvider`, eg:

```javascript
// layout.js
<MdxEmbedProvider>
  <MDXRenderer>{body}</MDXRenderer>
</MdxEmbedProvider>
```
This indeed solves the problem, but i'm still not sure why you experience these bugs when you have multiple plugins that use `wrapRootElement`.
While developing the plugin i asked @chrisbiscardi if he knew of any reason why multiple providers couldn't be used. This was his response.
Multiple providers will merge the components object. Last provider wins
I'll continue to investigate this bug and hopefully be able to release a new version with a fix in the coming days.
Day 31
Jan 31 2020
Not a whole lot of code completed today but i have been tinkering with "Project Prop". The code i have done today has been to sense check that i can later implement what i'm designing. I do find myself doing this on my side projects, where i'll do half in browser and half in, ahem, Photoshop ... yeah i need to switch over to Sketch!
I usually start with a prototype and build it to kinda prove that i have at least some kind of functionality in place, then i attempt to design it all in browser, then i get fed up with it not looking the way i want and then finally i give in and open Photoshop.
That's what i've pretty much done today, i will say however experimenting with the CSS as i go does at least mean that i know i will be able to build what i'm spending time designing.
I've made good progress today and have also been playing around with fonts, colors and table styles. Fortunately a lot of the CSS i will need in the prop table can be borrowed from one of my other Gatsby plugins gatsby-remark-sticky-table.
I've also had some other thoughts about how to extract what i feel will be some useful prop related information and display it in a kind of data visualization way. Who knows, there may be donut charts in "Project Prop"
Day 30
Jan 30 2020
It's been a slow day. I started the morning by investigating an issue with gatsby-mdx-embed reported by Scott #133. Turns out it was a clash of MDXProviders and a little re-jig was all that was required to fix it. I should probably add a comment to the README explaining what to do if this happens.
Other than that i've been wrestling with tables and css on "Project Prop" it's coming together but it's taking me longer as i don't want to put my design hat on ... knowing full well i will have to eventually but it's fun to design in browser!
Day 29
Jan 29 2020
Had another pretty successful day today. I continued working on "project prop" and after chasing my tail for a bit i've changed tack slightly.
To start off my thinking was to create a plugin, and i approached this the way i have with some of my other plugins but i realized after a few hours of banging my head against a brick wall that some of what i need to do is already supported out of the box if i take the theme approach.
So that's what i've done.
"Project prop" name tbc is for all intents and purposes a plugin and will act like a plugin but under the hood it's actually a theme which is a little confusing but now i've got my head round it i think it's the right way to go.
The next issue i'm facing is the style stack.
My initial thought was to use theme-ui but what i'm finding is that i'm accidentally overwriting styles in the host site. This is the expected behavior of theme-ui so i can understand why it's happening; the problem is of course how to stop it from happening. I need to go back over the theme-ui docs and see if there's an escape hatch or perhaps some kind of order-of-specificity class name i can use somewhere.
Failing that i might be tempted to use StyledComponents or maybe even go back to CSS Modules or Scss.
In either case i'm pretty happy with my progress today and if i can solve the styling problem i can carry on with some Design and UX work.
Day 28
Jan 28 2020
Today has been pretty chilled. I've solved a lot of the problems with the two main projects i've been working on and i'm happy for them to be used and i'll be keeping an eye out for issues or feature requests which i'll continue to work on as and when.
I'm pretty new to open source and i've gotta say it's pretty cool that people like my stuff enough to use it and to raise issues and contact me on twitter.
I'm enjoying hearing about use cases and in quite a lot of cases i feel these issues are actually really nice improvements... so a big thanks to anyone who i've spoken with lately!
Since there's no fires to put out i decided to kick start another project that's been in the back of my mind for a while. It's quite pertinent to my day job of developing custom React component libraries - for companies that make bazillions, and for almost 4 years now my go to, must have tool has been Storybook
If you've never used Storybook, you should, it's amazing! It's ideal for Component Driven Development as it allows you to just focus in on the one thing you're working on and not worry about the larger application... well some times you might have to consider where the component will end up but usually i aim to solve for the 80% of use cases... but....
One thing missing from Storybook is a method to provide a holistic view of all the props, not just the ones in the component you're developing on that day. The prop tables in Storybook are second to none, don't get me wrong! but you can't easily see the props and prop descriptions you wrote for a similar component four sprints back. To see these i usually open another browser and put the windows / stories side by side, or use the split view option in VS Code.
But this sometimes still isn't good enough.
One important factor in the creation of Component Libraries is api consistency.
By that i mean it's really not good having a prop called `CardHeader` on a `<Card />` component and then having a `PanelHeading` prop on a `<Panel />` component, or worse, a prop that is actually a header / heading but called something completely different.
So i've started to think about writing a plugin that will provide me with a single view of all my props and their descriptions, i'd hope to be able to filter / search, sort and perhaps even spell check all in one place.
The method for delivering this i think at the moment will work very similarly to how Jest creates their coverage reports. You run a build and it creates a static site right there in your repo; you can of course `.gitignore` the directory or deploy it.
Given that creating static sites is kinda the Gatsby thang, it seems like an obvious choice, but i have run in to one or two problems with it today. Mainly how to create the static build bundle and move it to somewhere more useful... all from the plugin.
I think i'm on the right track and i'm sure i'll have more to say tomorrow.
Toodle-pip!
Day 27
Jan 27 2020
Another sweet ass day today!
I've refactored gatsby-mdx-routes to use TypeScript and now it follows the same yarn workspace pattern as gatsby-mdx-embed and gatsby-theme-gatstats which is important to me as i'm working across multiple projects and just prefer it if they can all be developed the same way.
I did have some problems though. First off i'm not using `babel/preset-env` because Gatsby's static query can't be compiled in a plugin like i'm doing, so i have to just convert my `.ts` to `.js` as ES6.
I imagine when the plugin is used by another Gatsby project the compiling to browser friendly code will be taken care of by the Gatsby project using the plugin. 🤷♂️
Once the `.ts` was compiled to `.js` as ES6 i started to see a "multiple graphql query" error. This was because i had named the graphql query and it can't exist in both `.ts` and the compiled `.js` in the same project.
Removing the name of the query cleared this error... not sure if there's a more complicated issue i'm yet to find out about using unnamed graphql queries? time will tell.
I also found that using `staticQuery` in `gatsby develop` was fine, but when i ran `gatsby build` and `gatsby serve` i was seeing gatsby Loading (StaticQuery) instead of the graphql response. I did have a read of some GitHub issues but couldn't find anything solid so decided i'd just give the new `useStaticQuery` hook a go! and boom 💥 it worked.
gatsby-mdx-routes is now fully converted to TypeScript! ... i've just gotta go back over my code and correctly type everything 😫
Day 26
Jan 26 2020
Pleased to report i had an excellent start to the day!
My investigation into how to solve this `docz` site TypeScript prop tables issue went really well and i found that by shadowing the `Props` component from `gatsby-theme-docz` i was able to re-wire the props to enable correct and full population of the prop table.
This is how i did it:
- Shadow the `Props` component by creating a new one in my demo site at `demo/src/gatsby-theme-docz/Props/index.js`
- Remove the `prop` prop from the component, which i think is actually the `of` prop
- Create a new prop called `name`
- Import and use the `useDbQuery` hook 🎣
- Filter the `useDbQuery` by the new `name` prop
- Pass the result on to the `Prop` component
The new component now looks like this:
```javascript
...
import { useDbQuery } from "gatsby-theme-docz/src/hooks/useDbQuery"

export const Props = ({ name, getPropType, isToggle }) => {
  const db = useDbQuery()
  const entries = Object.entries(
    db.props.filter(
      prop => prop.value.length > 0 && prop.value[0].displayName === name
    )[0].value[0].props
  )
  return (
    <div sx={styles.container}>
      {entries.map(([key, prop]) => {
        return (
          <Prop
            key={key}
            propName={key}
            prop={prop}
            getPropType={getPropType}
            isToggle={isToggle}
          />
        )
      })}
    </div>
  )
}
```
...and to use it i now do this:

```javascript
<Props name="Gist" />
```

instead of this:

```javascript
<Props of={Gist} />
```
I'm not sure if this is the best way to make this work but i think until `docz` have a more solid way of creating prop tables for TypeScript components it'll have to do.
I've tested this in `dev` and in `prod` and it works!
So, as of this morning i released `0.0.15` of gatsby-mdx-embed which is fully working with TypeScript and populated prop tables in the demo site
🥳
Day 25
Jan 25 2020
Had another day of wins and losses. I got `docz` site prop tables finally working and it's able to read props and prop descriptions from my TypeScript files...
but...
only when running in `dev`, not `prod`!
This is both mad and extremely frustrating!
When i run `gatsby develop` all is well; when i run `gatsby build` then `gatsby serve` i get a flash of prop tables, then nothing.
My current course of investigation is to shadow the `Props` component, work out what it's doing and try and find what feeds it the `props` prop.
I think at the moment this comes from the `docz` core, which is not part of the theme, so there's nothing i can really do from my end 😢
but...
i've had an idea which involves using the `useDbQuery` hook and passing the props on to the `Props` component manually.
This so far is proving very difficult but using a lot of console logs and a number of JavaScript array methods i think i can manipulate the data into a shape that will work in both dev and prod.
Day 24
Jan 24 2020
Tricky day today.
On one hand it was great: i've got babel and tsconfig playing together nicely and the two combined take my `.tsx` files and output browser friendly `.js`.
This is perfect, and now i feel like i have a much better understanding of both tsconfig and babel.
I've also learned that no matter what your "main" key in package.json is doing, Gatsby ignores it and always looks to the root of your project for `gatsby-browser`, `gatsby-ssr` and `gatsby-config`. I suspected this was the case but thought that i might be able to work round it.
I've also got both the plugin and demo dev processes working together.
The demo site for gatsby-mdx-embed and the plugin code exist in a monorepo controlled via yarn workspaces.
In order to develop the plugin code i need babel running in `--watch` mode so that when i start the Gatsby develop process both projects hot reload when i make a code change. I was able to accomplish this with `npm-run-all`.
Things were going well until i had to address the docz site propTable issue again and i've spent nearly 8 hours trying to find a way to make it work.
docz site can read props from components in a monorepo if you tell it where to look, naturally it works with React PropTypes and using a boolean in the docz config it can also read TypeScript interfaces and populate propTables.... but no matter what i did today i couldn't get it to work.
I'd all but given up hope until i posted a message on Spectrum and watta ya know Rakan Nimer one of the maintainers has replied.
If you read along that thread he's suggested changing the repo setup. This might mean dropping Yarn workspaces but if that's gonna make these bloody PropsTables work i'm all for it!
Day 23
Jan 23 2020
Hmmmm TypeScript! I thought i'd sorted this already but after my old chum Scott started using gatsby-mdx-embed he noticed that he was seeing some TypeScript related issues.
This plugin is written in TypeScript but it should transpile back down to commonjs so you can use it in both TypeScript and JavaScript projects. This wasn't quite the case!
I've since been using babel and gooooood lord is it great! I've done TypeScript setup in the past and it was always a combination of mashing tsconfig, babel and webpack together until you had a config that worked.
This was always a ball ache because you'd have to maintain both babel and webpack but now i've discovered babel 7 it's got so much easier!
The tsconfig is still a little bit of a mystery to me but the new babel presets make sense, even if you just read them by name it's pretty clear what they do.
```
"@babel/preset-env"
"@babel/preset-react"
"@babel/preset-typescript"
```
This is pretty much all that's needed to convert back to browser compatible JavaScript, and with only a few settings in tsconfig it's just a case of setting the input and output for babel and it goes off and builds out .js modules!
```json
"build:js": "babel src --out-dir lib --extensions \".js,.ts,.tsx\" --source-maps inline",
```
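For reference, the three presets above would sit in a babel.config.js something like this (a minimal sketch, not my exact config):

```javascript
// babel.config.js — minimal sketch: env for modern JS output,
// react for JSX, typescript to strip type annotations.
module.exports = {
  presets: [
    "@babel/preset-env",
    "@babel/preset-react",
    "@babel/preset-typescript",
  ],
}
```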
This was all going really well until i ran into that `docz` site issue again with it not populating prop tables... gonna come back to that; the more pressing issue is i think i've got some weird Gatsby thing going on.
Babel now converts all `.ts` to `.js` and moves it in to a lib directory. My package.json has a "main" key which points to this lib directory, but for some reason `gatsby-browser.js` and `gatsby-ssr.js` never seem to run.
I've put the word out so hopefully i'll get this resolved soon!
Day 22
Jan 22 2020
Had a pretty sweet day today. I continued working on gatsby-mdx-routes and focused on the issue i described yesterday, which was to enable a way to create a dynamic, recursively created navigation object based on slugs... what a mouthful!
Taking a step back for a second i'll describe the problem as i see it.
When creating navigation in any website you need to think about the depths a route might be at.
For example, a top-level navigation might look like this:

```
|-- home
|-- services
|-- contact
```

...which is pretty easy to display as an html list, but then you get to a navigation element that might have a second or third depth, eg:

```
|-- home
|-- services
|   |-- web design
|   |   |-- user interface
|   |-- web development
|   |   |-- front end
|   |   |-- mobile first
|   |   |-- backend
|-- contact
```
At this point you need a way to have nested navigation elements within headings, and since you don't know how many levels deep an element might be you need a solution that will cater for an element that could be 10 levels deep.
I've seen a lot of examples of this problem 'solved' by suggesting the use of a frontmatter property that can be used to group menu items together by a parent heading.
If you're creating this in your own project then that's fine, you're in charge of both the frontmatter and the graphQL query that fetches the data... but what if you're not?
You may or may not know that if you add a property to a graphQL static query and that property is not found in at least one file in your file system you get an error.
This is the problem i faced with gatsby-mdx-routes: if i add a property called `menu`, for example, then at least one file in the project must contain it, but if you don't want to group your navigation elements you'd still need to add this property to frontmatter to avoid getting an error.
There are a couple of solutions to this.
- I could create a hidden file somewhere in the plugin that can be used as a place where all frontmatter properties exist but the file is never rendered... 🤢...
- Set a default using `createSchemaCustomization` in gatsby-node.js ... again 🤢
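For completeness, option 2 would look roughly like this in gatsby-node.js (a sketch, assuming the frontmatter property is called `menu`):

```javascript
// gatsby-node.js — declare frontmatter.menu as an optional String so the
// static query no longer errors when no file defines it.
exports.createSchemaCustomization = ({ actions }) => {
  actions.createTypes(`
    type Mdx implements Node {
      frontmatter: MdxFrontmatter
    }
    type MdxFrontmatter {
      menu: String
    }
  `)
}
```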
Until Gatsby resolves what i understand to be a really complicated problem of allowing the graphQL static query to fail gracefully if no frontmatter is found, i'm sticking by my original thoughts.
"I don't think the answer is to add any additional properties to frontmatter"
Which leaves me with one other option to determine how a navigation element should be grouped...
slugs
You'll have probably seen when you inspect `slugs` that they represent a location on your file system. The chances are you've put all your blog posts in a `posts` directory in your project; you might have even created sub directories for year and month.
All this information is contained within the `slug` and in effect all we're trying to do in the browser is mirror what your file system is doing.
A visual example....
```
|-- home
|-- services
|   |-- web design
```

Given the above directory structure the `slug`(s) would be as follows;
- services = "/services/"
- web design = "/services/web-design/"
This is all the information we need to determine how many levels deep any given navigation element is... but getting at it was a real challenge for me.
My approach was to split the `slug` using `slug.split("/")`:

```javascript
["", ""]
["", "services", ""]
["", "services", "web-design", ""]
```
Then i remove the empty strings so i'm just left with what is effectively an array of parents and children.
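Put together, the split-and-clean step looks something like this:

```javascript
// Split a slug into path segments, dropping the empty strings produced
// by the leading and trailing slashes.
const segments = (slug) => slug.split("/").filter((segment) => segment !== "")

// The length of the result is the navigation depth of the route.
const depth = (slug) => segments(slug).length
```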
There is a bit more to it, but the code can be seen here if you're interested.
Using this `slug` approach i don't need anyone using this plugin to do anything to their frontmatter, meaning i think this could easily be implemented into a blog / site that has loads of pages just as easily as it could be into a brand new project.
There's more info about how to use this plugin in the README and here's a very computer science looking demo
Happy routing! 🚌
Day 21
Jan 21 2020
Recursive functions continue once more. I mentioned yesterday an issue that had been raised with gatsby-mdx-routes and after a little digging i think one of the problems can be solved with a recursive function.
There's more on the issue here
Actually let me start again. gatsby-mdx-routes is a little plugin that creates routes and generates navigation labels from `frontmatter`. Originally i thought the issue was related to how routes could be grouped into menu headings, as per the work i've been doing on the `docz` site, but after asking that question it was communicated that wasn't the issue.
While i waited for a response i cracked on with a little enhancement to the `MdxRoutes` component by creating a prop to allow re-ordering of navigation routes.
For instance if you had Home, About and Contact you'd probably want Home to be the first route returned, but because i wasn't sorting the results they were just returned in alphabetical order.
I thought about having an `ASC` and `DESC` option but that doesn't really solve the problem so i decided upon the reference array option.
Suppose you had a graphQL response that looked a little bit like this 👇

```javascript
graphQlData = ["About", "Contact", "Home"]
```
By providing a `navigationOrder` you can sort like this 👇

```javascript
const navigationOrder = ["Home", "About", "Contact"]

graphQlData.sort(
  (a, b) =>
    navigationOrder.indexOf(a.navigationLabel) -
    navigationOrder.indexOf(b.navigationLabel)
)
```
Unlike the default method of sorting, instead of simply determining `a - b` we can use the `indexOf` from the `navigationOrder` array, which is provided via a prop.
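As a runnable sketch (hypothetical data; note that any label missing from `navigationOrder` gets `indexOf === -1` and will float to the front):

```javascript
const navigationOrder = ["Home", "About", "Contact"]

const routes = [
  { navigationLabel: "About" },
  { navigationLabel: "Contact" },
  { navigationLabel: "Home" },
]

// Sort by each label's position in the reference array rather than
// alphabetically.
routes.sort(
  (a, b) =>
    navigationOrder.indexOf(a.navigationLabel) -
    navigationOrder.indexOf(b.navigationLabel)
)
```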
Great i thought, that's that issue closed, and released `0.0.5`.
But nope. I was notified there'd been a new comment on the issue and it was mentioned again that a new property in `frontmatter` was required to sort the navigation items... "WT Flip" i thought, i've just provided a solution for that.
A few comments later and we have now established that it's both a grouping and an ordering issue.
In the first instance, grouping the menus should be managed by a recursive function and not via a property in `frontmatter`, and then once they're grouped we can order them before returning.
This does mean however, i'm back in recursive hell! This time i'm planning on using the `slug` to determine how many levels deep a menu should be grouped.
My initial thoughts are to take the slug, eg `some-folder/some-sub-folder/some-file`, split the string to create an array and then remove any empty strings. I hope to use the array length as the condition to stop the recursive function, removing a slug segment on each loop before calling it again... we'll see how it goes!
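A sketch of that idea — peel one segment off the front of the array on each call and stop when it's empty (a hypothetical shape, not the plugin's actual code):

```javascript
// Insert a route into a nested tree, one slug segment per level.
const insertRoute = (tree, segments, slug) => {
  if (segments.length === 0) return // base case: no segments left
  const [head, ...rest] = segments
  let node = tree.find((n) => n.label === head)
  if (!node) {
    node = { label: head, slug: rest.length === 0 ? slug : null, children: [] }
    tree.push(node)
  }
  insertRoute(node.children, rest, slug) // recurse with a shorter array
}

const tree = []
insertRoute(tree, ["services"], "/services/")
insertRoute(tree, ["services", "web-design"], "/services/web-design/")
```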
Day 20
Jan 20 2020
Recursive functions research has continued today and although i've solved what i set out to do with the `docz` navigation, i wanted to dig a little deeper.
The main reason is i feel the method(s) i've used to reduce the menu object feel a bit ECMAScript 5, even though they're not.
I'm even using `let`, and if you follow along on Twitter there's a lot of conversations going on about `let` vs `const` etc, etc ... but i do feel some of the `if` statements i'm using, or rather the nested `if` statements i've used to manage what to do on each iteration of the recursive function, feel a bit clunky.
There's some really powerful ECMAScript 6 array methods available and i don't feel i'm taking advantage of them and as a result my reduce function isn't that readable.
I became a little frustrated today so perhaps it's best if i move on to something else and let all this sink in and try to tackle the problem again at a later date.
Ironically i've since noticed a new issue has been raised on one of my other Gatsby plugins gatsby-mdx-routes which is actually intended to make file path and menu navigation easier. 🤦
I think this will be a job for tomorrow now but perhaps it'll be an opportunity to test drive my new found knowledge of recursion... which in itself is actually kinda recursive. 🤗
Day 19
Jan 19 2020
Made good progress today on the
docz recursive submenu problem and have it pretty much sorted.
I'm now able to group menu items together by defining a submenu property in the frontmatter. You can see it working in the Components menu with Twitter and Pinterest acting as submenus here
The code needs a bit of cleaning up and i'll document my approach and solution in a following blog post... i just gotta make sure i fully understand what i did first 🤔
If you're interested though i've pushed to master so you can see the amended Sidebar component, the new reduce method and the new SideNavGroup component... you gotta love Component Shadowing!!!
Day 18
Jan 18 2020
I did a bit of work this morning on recursive functions to make the navigation in
docz work for me. Luckily since
docz is also a Gatsby theme i'm able to tap in to component shadowing. I started by looking at
gatsby-theme-docz/src/components/Sidebar and investigated the
menu object.
This, as it stands, doesn't have a way to group by submenu but the theme does allow you to add additional fields to frontmatter.
This is quite key given i need a way to determine if a file is part of a menu as well as part of a submenu.
I've since added a submenu property to the
.mdx files that i will use to "group by" and have set to work on altering the menu object created by
docz core and recursively looping over any object that contains a submenu.
Now i have an object that i can use to drive the UI.
I've also been working on another blog post explaining recursive functions and will continue to update it documenting the problem i'm facing and how i intend to solve the
docz theme submenu problem.
Day 17
Jan 17 2020
This morning i was attempting to work on a recursive submenu solution for the navigation in the
docz site used by gatsby-mdx-embed and as i worked through a solution i thought about how this might make a good blog post.
I then thought, if i start to write the post in this blog the next time i commit and publish that post will also get published.
This blog uses my theme gatsby-theme-gatstats which currently has no way to set a post as draft which would exclude it from being published.
I then worked on my theme and have included a new property in the frontmatter called
status. If
status is set to draft the post won't be displayed and none of the charts will attempt to reference it as a data source, thus not skewing your metrics 😎
That issue was raised on GitHub #26 and now felt like a good time to address it.
I've had another issue raised today too, this time with gatsby-remark-sticky-table A user wanted to put images in a table cell.
There's probably something funky going on with the way i'm parsing the node to the new markup so i suggested this solution and might investigate this another time.
Right... back to the recursive submenu problem!
Day 16
Jan 16 2020
This morning i was having a google around and out of the corner of my eye i saw a Pinterest logo... ooh i thought, i wonder if Pinterest have embed-able widgets?
They do! so that was me for the next few hours!
gatsby-mdx-embed now has three new components for Pinterest.
The standard
<Pin /> where you can embed an image from Pinterest, a
<PinterestFollowButton /> and a
<PinterestBoard /> with an optional prop so you can create a board from a user or a board from a board, if that makes sense.
The inject script works very similarly to Twitter, Instagram and Flickr but i did need to do a little digging into the window object to find the
build() function. This needs to be called on route change so that any components on the page will execute and display as expected.
This evening i'll be looking at how to make a change to the
docz theme so i can have nested menus. I have a feeling my old enemy
array.reduce() may be required and it doesn't matter how many times i read or write an
array.reduce() i always struggle!
Day 15
Jan 15 2020
I spent some of the morning updating prop names and continued to work on two new components,
<TwitterTimeline /> and
<TwitterList /> and i really like the Twitter method for handling embeds!
The method i've used to inject the Twitter embed script remains intact and all the other Twitter embeds work without any additional injecting, which leaves me time to just develop the components and ponder about what to name the props.
At this point i was really regretting not implementing TypeScript 😞. I use TypeScript for my day job and in my bigger projects like gatsby-theme-gatstats and developing without it feels like being nude and leaving the house.
As development continued i'd been wondering if gatsby-mdx-embed was going to need TypeScript because
PropTypes kinda has my back but i'm now feeling really un-easy with working towards a stable release without it so implementing it now is an absolute must.
This evening i've refactored everything and gatsby-mdx-embed is now TypeScript enabled 🎉🥳.
It feels like a massive relief to know that's sorted and it wasn't too much of a pain to set up.
docz did give me a few headaches with it not generating the prop table from the TypeScript interface... until i discovered that i needed to set
typescript: true in the
doczrc.js
With TypeScript now in place i'm more comfortable with continuing to develop components and perhaps
1.0.0 will happen sooner than expected.
Day 14
Jan 14 2020
Last night i started to implement Twitter buttons and hit a naming issue. I'd previously named the components after the provider they represent. In the case of a Tweet this is fine because a Tweet is a unique thing but, what do i do with a hashtag button?
Twitter isn't the only provider that uses the hashtag paradigm and so i've had to start thinking about prefixing names to allow for this plugin to scale. Maybe at some point i'll be able to introduce an Instagram hashtag button. 🤷♂️
This led me on to the next problem. Props!
To give some context. I build commercial React Component Libraries for companies that make bazillions and these Component Libraries are typically used by lots of engineers on various projects so i'm familiar with the trouble with naming things.
I have learnt that keeping a consistent api for prop names is key.
For instance if you have a number of components that, let's say accept some kind of render prop for a heading, this prop across the entire library should always be called heading. It makes no sense if in a Card component you name it CardHeading and then in a Modal name it Header.
I was trying with gatsby-mdx-embed to keep a consistent
id prop across all components but i've now hit a point where
id just doesn't cut the mustard. 🌭
On some components like YouTube the
id is the video id and that's how YouTube refer to it but, in a Tweet for instance i need the prop to be less ambiguous.
I had some interesting Tweets back and forth with @AskGatsbyJS and John Otander and the general consensus was in Open Source Libraries it pays to be clear and it doesn't matter that names or prop names are long. I think there's a post or Tweet out there by Dan Abramov about how the function names at Facebook are really long, but the upside is it's clear what they do.
I've tried to keep some consistency and the
id paradigm is still in use but now it's more explicit
youTubeId.
For components that require more than an
id, Spotify and Gist for instance where the prop includes a word
tracks or
album AND an
id i've gone with
...Link. I thought about using
URL in the prop name but the prop doesn't contain
http etc so it's not really a URL.
Interestingly though when i have
username as a prop i don't feel i need to change this to, e.g
twitterUsername, same thing with the
hashtag prop. If you're using a component that has the provider in the name like
<TwitterHashtagButton /> i hope it's clear that the
hashtag prop refers to a Twitter hashtag... naming things is hard 😩
A note on sem-ver. Strictly speaking these are breaking changes but i feel i can justify not bumping the plugin to a major release version of
1.0.0 because in my mind when a library is only at
0.x.x to start with it's still a pre-release... even though i have already released it. 🤯
I'm not sure if this is a bad thing to do but i suspect i'm going to encounter a lot more of these kinds of naming issues before i'm happy to release a stable
1.0.0.
I have started a CHANGELOG which should help you fix any issues, and of course the docz will always reflect the actual state of the library.
I hope this change hasn't or won't cause anyone any serious headaches but if you are having problems feel free to Tweet me @PaulieScanlon
This morning and this evening will mostly be taken up by combing over the docs and i'll be thinking hard about sensible names for both components and props.
TTFN!
Day 13
Jan 13 2020
This morning i continued to work on the Wikipedia component and switched to using the wikipedia rest_v1 api
I had a little trouble with the response: i was parsing
response.json() but i needed
response.text()
This can now be injected into the
srcDoc of an
iframe which retains all the wikipedia styles but has created a new problem.
The
iframe now needs some dimensions. By default i'm setting the width as 100% and i'm allowing for a height prop which means control of the height is handed over to whoever is using this component in their
.mdx
I think this will be enough for now and should i get any issues on GitHub i'll deal with them when they arise.
oh, and here's a demo of the finished component.
Day 12
Jan 12 2020
Today i continued to work on the Wikipedia component and have a new method in place for querying the Wiki api. The new method uses the fetch api which has cleaned things up a lot and means there's less JavaScript being injected into the page but the thing that's stumped me now is that the response i'm getting using this approach no longer returns a stylesheet.
I can render the page data but the content picks up the styles from the
docz site rather than the usual Times New Roman style from Wikipedia that we are familiar with.
There are alternative methods documented in the Wikipedia api for fetching the css but i'm having trouble getting both the stylesheet and the data in the same response. I'll just have to keep digging!
The other issue i encountered was that the api response returns all links as relative. For instance, for my test page i'm querying "The_Jimi_Hendrix_Experience" and the href's that are returned look like this:

wiki/The_Jimi_Hendrix_Experience

when in fact they need to be prefixed with the full domain. Using a regex pattern i'm replacing everything that has wiki in the href with the full url.
I've never really understood regex but i did spend some time reading, watching and learning about the bits i needed.
Here's the final regex pattern:

```js
/<a href="\/w/g
```

which when used in the response looks like this:

```js
text["*"].replace(/<a href="\/w/g, '<a target="_blank" href="//en.wikipedia.org/w')
```
I added the
target="_blank" as it feels like a better option for users, perhaps i'll turn this into a prop but it works like a charm so i'm happy to move forward.
Day 11
Jan 11 2020
It's Saturday and i didn't have a lot of time today. I did continue working on the Dropbox Chooser component but i think i need to shelve development on this for the time being.
After making good progress i noticed that in order to get it hooked up i'd need to inject the script for each and every Chooser button.
The repercussions are that this could really affect page load speed and that really goes against what i'm trying to achieve with this plugin
I've pushed a branch so i can keep my work but for now i'm going to move on to something else.
I've since gone back to my list of oEmbed providers and have decided to focus on a Wikipedia component. It feels like it could be useful and the api looks to be well documented.
Upon initial inspection i think the best course of action is to
fetch the "page" from the Wiki api, then render the response using
dangerouslySetInnerHTML. I'll see how it goes and no doubt report back tomorrow.
Day 10
Jan 10 2020
Last night i decided to design a logo for gatsby-mdx-embed i was hoping it would be like the Fight Club logo in a bar of soap but it didn't quite work out that way... i was a designer a long time ago but i think i've now just run out of talent. 🤷♂️
Another early start again this morning and i started to think about how users might use this plugin and it's not always clear what i'm referring to as an
id in the props. As of this morning i've created a Help page which documents how to extract the
id the component needs from the providers embed code or URL.
This evening i'm planning on adding Dropbox chooser to the list of components. I don't even use Dropbox but it has an embed-able script tag so it belongs in gatsby-mdx-embed
Day 9
Jan 9 2020
I had a productive morning this morning and have added Twitch to the list of components gatsby-mdx-embed supports.
While i was digging around the Twitch embed code i noticed the skip to option which allows you to embed a video and start it a certain point. This seems like a pretty useful thing to include so i added it as an optional prop along with auto play.
The way Twitch handles the time code is by having the following parameters as part of the URL.
&t=0h21m14s
t is the time parameter which accepts a numerical value followed by the time digit
h for hours,
m for minutes and
s for seconds. I've exposed a
skipTo prop on the Twitch component so you can pass through a time-code.
I then thought i'd add this to the YouTube component, but YouTube do the time-code a different way.
The URL parameter looks like this
&start=1274
After doing a bit of maths i realized the time-code is the total seconds from the start of the video. I wanted to keep the props the same as i've done with Twitch so i needed to calculate the total time before i pass it on to the URL.

```js
const { h, m, s } = skipTo
const tH = h * 3600 // hours to seconds
const tM = m * 60 // minutes to seconds
const startTime = tH + tM + s
// startTime = 1274
```

First i de-structure the h, m and s from the skipTo prop, then multiply h by 3600 and m by 60 to convert them to seconds. You don't need to convert the seconds as they're already in the right unit.
Then the last step is to add these all together to create something the YouTube URL understands.
I've also added the same prop to Vimeo, and luckily it handles the time-code in a very similar way to Twitch so not much work needed to be done on that.
This is now available in
v0.0.6 of gatsby-mdx-embed
Day 8
Jan 8 2020
Last night i carried on working on gatsby-mdx-embed, made some tweaks to the
docz theme and found a list of "providers" that are supported by oEmbed.
There were two that caught my eye, SoundCloud and Gist. SoundCloud was pretty straight forward as it's a very similar method to how YouTube and Vimeo work so that was no problem.
This morning i moved on to Gists.
I Googled around to see if there were any existing React repos and found these two... so thanks Christian Korndörfer and Miroslav Saračević
The main bit that i didn't initially understand was how to handle the
JSON callback.
If we start with a typical Gist URL and run it, you'll hit the
To provide a callback we can add on
gist_callback. I Googled around for how to add the callback for ages and found no official docs, just examples of how others have done this:

?gist_callback_ca0cc9239176066492cb2aba435edbf7

This is now the complete URL.
The last step was to inject a
<script> tag with the URL in a
useEffect life-cycle method, create a function on the window object called
gist_callback_ca0cc9239176066492cb2aba435edbf7 then investigate the response.
The response comes back with a
div object which contains all the HTML for the Gist which can be handled by
dangerouslySetInnerHTML
<div dangerouslySetInnerHTML={{ __html: response.div }} />
...and a
stylesheet object which in turn needs to be injected into the page in a
<link> tag.
With both of those things done Gists are now embed-able!
I need to do a bit more work on handling errors but i've bundled it up and Gists are now part of
v0.0.4
Happy Embedding! 🧼
Day 7
Jan 7 2020
Yesterday was a pretty good day, i was relieved to finally get
docz setup the way i wanted. I also launched a very early version of gatsby-mdx-embed. A fellow Gatsby enthusiast who i met at Gatsby Days London Scott Spence was keen to try it out... so, here ya go Scott!
This evening i'll mostly be focussing on documenting and testing the props for each component and perhaps if there's time i'll investigate what other providers i can include. dev.to would be sweet but i had a quick google and there's an open issue on GitHub regarding this so i don't know if it's possible yet... stay tuned 📺
Day 6
Jan 6 2020
Back to my day job today so i got up early and cracked on with the
docz.site... and pleased to report i got the
<Props> component working.
I was on the right track with what i thought the problem was and
docz can be configured to source files from outside the root directory but this hasn't been documented anywhere.
The problem was twofold.
- What did node think the root was, and what is the correct path to pass to docz?
- How the WT Flip do you pass a path to docz?
The way to do point 2 is to add
docgenConfig.searchPath to
doczrc.js which tells
docz where to look for component props. You won't find this in their docs but i did find it amongst the examples
```js
// doczrc.js
export default {
  docgenConfig: {
    searchPath: directoryPath,
  },
}
```
and
directoryPath is as follows:

```js
const directoryPath = path.join(
  process.cwd(),
  "../@pauliescanlon/gatsby-mdx-embed/src/components"
)
```
First i'm using
path.join to make sure i don't end up with unwanted
/'s, then i'm using
process.cwd which is the node.js method for working out the current working directory, then finally i go up a level with
"../" and into the yarn workspace where i'm writing my
MdxComponents
This now means from the
.mdx file where i'm writing the documentation about the component i can also have a nicely formatted prop table which are the real props defined by
propTypes in the component file.
You can see the example for the CodePen component here
Later tonight or early tomorrow i'd like to tweak the
docz theme styles a bit then finally i can crack on with developing the
MdxComponents
Oh and dark mode now works! 🎉
Day 5
Jan 5 2020
Today i decided to focus on the documentation site as i need a
playground setup so i can test props for each of the
MdxEmbed components.
Usually in my professional life i'd use Storybook but since part of #100DaysOfGatsby is to learn new things i've gone for using docz.site
It was a bit tricky to setup as i wanted to use gatsby-theme-docz and i found the documentation regarding setup a bit confusing.
None the less it's all looking pretty good now. I might have found a bug with how the plugin options are passed or perhaps in some cases they have to be set in the
doczrc.js file... who knows, maybe i'll have to live without dark mode on this one!
The problem i'm currently experiencing is with the
<Props> component which is one of the built-in-components which can create prop tables for my components. What's not initially made clear is that it'll only work if the component in question is local to the
.mdx. For example
docz will generate a prop table for
<MyComponent> if it's in the same
things folder as
MyMdx.mdx
```
└── things
    ├── MyComponent.js
    └── MyMdx.mdx
```
... But it wont work if i move
<MyComponent> to somewhere else, for example...
```
├── components
│   └── MyComponent.js
└── pages
    └── MyMdx.mdx
```
If you want to be able to source files from somewhere else you have to set the
src value in
doczrc.js
```js
export default {
  // ...
  src: './components',
}
```
But my docs site and my MdxEmbed project are two different repos linked together by yarn workspaces so i think i need to tell docz to jump up a level to find my components.
```js
export default {
  // ...
  src: '../@pauliescanlon/gatsby-mdx-embed/src/components',
}
```
Which causes GraphQL to error, presumably because it can't find what it needs?
I'm currently trying to work out where node thinks the root is and from there i need to refresh my memory about
path.resolve and
process.cwd but that sounds like a job for tomorrow!
Day 4
Jan 4 2020
Yesterday i spent the morning getting lost in a docs rabbit hole. I tried numerous documentation themes but all were tripping me up one way or another so i decided to shelve the docs part of gatsby-mdx-embed and just get something looking half decent... which led me down another rabbit hole. I had real trouble getting theme-ui and typography treatments to work. I also discovered that behind theme-ui typography is Kyle Mathews, who has been creating a ton of them. I kept having the same issue though: the body font wasn't being set, so i resorted to using emotion core global and css to just set it... again i'll come back to this.
What i was able to do on gatsby-mdx-embed though was to understand how to create a method for the provider so that once the plugin is installed that's all a user would have to do. It's similar to how
gatsby-plugin-theme-ui works by using
gatsby-browser and
gatsby-ssr and using the
wrapRootElement method to inject the MdxProvider.
This i discovered needs to be done in both files and in different ways: in
gatsby-browser es6 imports work, in
gatsby-ssr you have to use require and node module exports.
Both are required so that in dev and prod the MdxEmbedProvider wraps the root element. With this now working i'm almost ready for an early release.
...
Day 3
Jan 3 2020
This post is starting to feel like an agile standup so perhaps i'll treat it like one.
Yesterday
After writing yesterdays blog post i had a think about how i was gonna handle the headings with nested
<a> styling in gatsby-theme-gatstats which was actually pretty easy. I'm using theme-ui for everything so a little update to my headings object to style the child
<a> was all that was needed.
```js
const headings = {
  // ...
  a: {
    fontSize: 'inherit',
    fontWeight: 'inherit',
    lineHeight: 'inherit',
    color: 'inherit'
  }
}
```
Next i looked at how these
# anchors should work. Typically in a site the browser jumps to where the
# starts when clicked and if a link is shared with a
# as part of the url when the page loads it'll move the page so the
# link is at the top.
For this to work in my theme i needed to look at the
gatsby-browser api. There's two methods that were required to make this to work.
- onRouteUpdate which is called when a user changes routes, and also called when an
<a> is clicked (if it has a
#)
- shouldUpdateScroll which allows us to influence the scroll position of the browser on load and also it would seem between route changes.
The main guts of this functionality is wrapped up in a little function i created in
gatsby-browser which is called by both of the above methods.
```js
const anchorScroll = location => {
  if (location && location.hash) {
    const item = document.querySelectorAll(`a[href^="${location.hash}"]`)[0].offsetTop
    const mainNavHeight = document.querySelector(`header`).offsetHeight
    setTimeout(() => {
      window.scrollTo({
        top: item - mainNavHeight,
        behavior: "smooth",
      })
    }, 50)
  }
}
```
In short all this is doing is finding the
<a> element that contains the
# which is found from the url /
location.hash then scrolls to it. There's an offset in there because gatsby-theme-gatstats has a position fixed header so i needed to calculate the top position minus the height of the header so the selected
# isn't under the header when the browser scrolls to it.
This all works well but i'm a little worried about my choice to use the native
window.scrollTo method with
behavior: "smooth" as some older browsers don't support this so if you're using IE 11 (🤢) can you let me know if it still works as intended?
I also had a little bit of twitter activity yesterday from a new user of gatsby-theme-gatstats who tweeted to ask me how to do something. He said he didn't want to raise an issue as he thought it was more him not knowing how to do it rather than it being a bug. My reply was "go ahead, raise an issue". Reason being, if he's had problems with this particular "thing" then perhaps i should explain it better in the README so others don't have the same question.
I'm pretty new to open source and gatsby-theme-gatstats is the first real project i've had where users are getting in touch and / or raising issues on GitHub. I think it's fantastic to be honest as it's quite difficult to know how your project will be used and whilst i tried to think of everything i know i've missed things, so having other users point this out and let me know means i get to change and improve the project... it's a win, win!
Today
Today i really want to focus on gatsby-mdx-embed and get this converted to TypeScript and perhaps get Storybook setup... but i do like the idea of using a Gatsby project as a documentation site so i'll probably have a google around and see what's out there.
I'm also thinking in the background about the subjects i'm covering on this "standup" style post and which ones should be promoted to their own blog posts. The
<a>
# thing i did yesterday could be a good one to write up but i'm not sure i'll have time. For now at least my main focus is on getting gatsby-mdx-embed stable and released.
Day 2
Jan 2 2020
Started this morning by fixing a bug with gatsby-theme-gatstats. Part of the dashboard has a Posts chart displaying information gathered from the previous year and current years posts. I wasn't correctly checking that the chart would render correctly if there were no posts for the current year.
I've also noticed another bug which is the chart max range needs to be calculated by grabbing the highest count value from both of the year data arrays. I was previously only checking the current year data and creating the range from the highest count value in that year but it's quite possible that the previous year will have a higher count, meaning the line chart will get cut off.
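A quick sketch of the fix i have in mind (names here are illustrative, not the actual theme code):

```javascript
// Take the chart's max range from the highest count in EITHER year's
// data array so the line chart never gets cut off.
const getChartMax = (previousYear, currentYear) =>
  Math.max(
    ...previousYear.map(month => month.count),
    ...currentYear.map(month => month.count)
  )
```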
I also need to investigate anchor tags in the theme so that they don't always take on the styling for
<a> if nested within an
<h1>. This will come in handy should i want to add anchor tags to each heading from this 100DaysOfGatsby post.
Day 1
Jan 1 2020
Today is the first day of #100DaysOfGatsby and since i already have a blog in place i think i'm gonna just continue working on my various Gatsby projects and try to write a bit each day and then see where i'm at in 100 days.
I currently have a number of Gatsby plugin projects all on npm but all in various states of completeness, these are:
- gatsby-theme-gatstats
- gatsby-remark-sticky-table
- gatsby-remark-grid-system
- gatsby-mdx-routes
- gatsby-mdx-embed
Today i've pushed a fairly stable commit to gatsby-mdx-embed which i think pretty much solves the twitter, instagram and flickr embed problem. I'm not gonna release this to npm just yet as i've done with previous plugins because i'd like to get this properly tested and converted to TypeScript first.
gatsby-mdx-embed is first and foremost an
MdxProvider and when used in a Gatsby project along side
.mdx it'll allow for media embed codes to be used without import by using the
MdxProvider and its
component prop.
So far so good and i have the first 8 components in and working, although the instagram one seems a little flakey on iOS.
Lessons learned today are similar to issues i've experienced before regarding
gatsby-ssr and
gatsby-browser. Or more specifically the problems when attempting to use both together to accomplish similar things.
ie. if
gatsby-browser relies on something that
gatsby-ssr is gonna do which might not run synchronously. I made a change and it was worth the effort: now
gatsby-browser does all the heavy lifting and things are working much better.
In the case of gatsby-mdx-embed it was because i was using
gatsby-ssr to inject the relevant provider scripts into the
<head> and then using
gatsby-browser to invoke them. This wasn't really working so making the
<script> tags and appending them to
<head> AND then invoking them from
gatsby-browser proved more solid.
I'd really like to solve the instagram problem before moving on much further but getting the project converted to Typescript and then installing Storybook sounds like it's gonna be way more fun.
I plan to open up and document a load of props to make each component more useful, adding widths to videos and or aspect ratios etc would be really cool. I haven't set up a TypeScript / Storybook repo for a while so this should be fun!
Once i'm happy gatsby-mdx-embed is pretty solid i'm gonna switch out the temporary
Tweet and
YouTube components i hacked together in my theme gatsby-theme-gatstats
I'm gonna use this post as a kind of journal and update each day with things i'm working on of things i'm thinking about and would like to create more in depth posts for topics that warrant further explanation.
https://paulie.dev/posts/2020/01/100daysofgatsby/
num_threads is not constant.

```cpp
std::vector<std::queue<int> > q_vector;
```

Should q_vector[num_threads] be given, or is just q_vector enough?

Use i < ... when iterating over arrays. Arrays are 0-based (so is vector), hence you have the valid indexes 0, 1, 2 for an array with the size of 3 (like int a[3]) ->

```cpp
for(int i = 0; i < 3; i++)
```

q_vector[num_threads] is an array of vectors. You don't want that.

```cpp
for( i=1; i< num_threads size; i++){ // accessing loop of queues
    q.push(int) // but this queue must be the queue which has minimum size.
```

Add a min_index and assign i to min_index where you assigned min_value. That's it.

```cpp
q_vector[min_index].push(x);
```

q[1] not from the 1st queue, q[0], so do i need to change int min_index = 0 to int min_index = 1?

You need to #include <vector>, or for a queue #include <queue>.
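Putting the pieces of this thread together, a minimal sketch could look like this (an illustration of the suggestion above, not the original poster's code):

```cpp
#include <cstddef>
#include <queue>
#include <vector>

// Push x into whichever queue currently holds the fewest elements.
// Indexes are 0-based, so min_index starts at 0 and the loop at 1.
void push_to_shortest(std::vector<std::queue<int>>& q_vector, int x) {
    std::size_t min_index = 0;
    for (std::size_t i = 1; i < q_vector.size(); ++i) {
        if (q_vector[i].size() < q_vector[min_index].size())
            min_index = i; // remember the index, not just the minimum size
    }
    q_vector[min_index].push(x);
}
```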
http://www.cplusplus.com/forum/general/90184/
A gulp plugin to wrap the stream contents with a lodash template.
First, install
gulp-wrap as a development dependency:
npm install --save-dev gulp-wrap
Then, add it to your
gulpfile.js:
Wrap the contents with an inline template:
```js
var wrap = require("gulp-wrap");

gulp.src("./src/*.txt")
  .pipe(wrap("BEFORE <%= contents %> AFTER"))
  .pipe(gulp.dest("./dist"));
```
Wrap the contents with a template from file:
```js
var wrap = require("gulp-wrap");

gulp.src("./src/*.txt")
  .pipe(wrap({ src: "path/to/template.txt" }))
  .pipe(gulp.dest("./dist"));
```
Use parsed contents within a template (supports JSON and YAML):
```js
var wrap = require("gulp-wrap");

gulp.src("./src/*.json")
  .pipe(wrap("Hello <%= contents.title %>, how are you?"))
  .pipe(gulp.dest("./dist"));
```
Provide additional data and options for template processing:
```js
var wrap = require("gulp-wrap");

gulp.src("./src/*.txt")
  .pipe(wrap("<%= data.contents %>", { someVar: "someVal" }, { variable: "data" }))
  .pipe(gulp.dest("./dist"));
```
This gulp plugin wraps the stream contents in a template. If you want the stream contents to be the templates use the gulp-template plugin.
The stream contents will be available in the template using the
contents key. If the file extension is
json,
yaml, or
yml then the contents will be parsed before being passed to the template. Properties from the vinyl file will be available in the template under the
file object and are local to that stream. User supplied
data values will always take precedence over namespace clashes with the file properties.
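As a rough illustration of that data model (a simplified sketch, not gulp-wrap's actual source):

```javascript
// Assemble the data passed to the template: stream contents on the
// `contents` key, vinyl file properties under `file`, and any
// user-supplied data merged in last so it wins name clashes.
function buildTemplateData(contents, fileProps, userData) {
  return Object.assign({ contents: contents, file: fileProps }, userData)
}
```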
template

Type: String or Object or Function

The template to use. When a String then it will be used as the template. When an Object then the template will be loaded from file. When a Function then the function will be called and should return the template content. This function gets the data object as its first parameter.
src

Type: String

The file location of the template.
data

Type: Object or Function

The data object that is passed on to the lodash template call. When a Function then the function will be called and should return the Object data used in the template.
options

Type: Object or Function

The options object that is passed on to the lodash template call. When a Function then the function will be called and should return the Object used as the options.
parse

Type: Boolean

Set to an explicit false value to disable automatic JSON and YAML parsing.
engine

Type: String

Set the consolidate template engine to use (defaults to lodash). Using an engine other than lodash may require installation of an additional node package.
https://www.npmjs.com/package/gulp-wrap