text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
fast enough that the eye blurs the ON and OFF. Whatis a Seven Segment Display ? A Seven Segment Display (SSD) is one of the most common, cheap and simple to use display. Seven Segment Displays are of two types : Common cathode : In the common cathode type SSD, the –ve terminal of all the LEDs are commonly connected to the ‘COM’ pin. A segment can be lighted up when ‘1’ is given to the respective LED segment and ground is connected to the common. Common anode : In the common anode type SSD, the +ve terminal of all the LEDs are commonly connected to the ‘COM’ port of the SSD. A segment can be lighted up when the ‘COM’ port is connected to the +ve battery supply, and ground is given to the respective segment. For more information about seven segment Description : All the similar segments of multiple LED displays are connected together and driven through a single I/O pin. Instead, 4 NPN transistors are used as switches to connect or disconnect the cathode terminals from Gnd. When the base of the NPN transistor is high, the transistor conducts and corresponding digit’s common cathode is connected to GND. 1 and RB3-RB6 are 0) and then RA0 should be pulled up (while keeping RA1-RA3 low) so that only units digit display will be active. In order to display all 4 digits, each seven-segment display is activated sequentially using an appropriate refresh frequency so that it will appear that all the them are turned on at the same time. We have four 7-segment displays connected to the same PORTB on the PIC18F2550. Because the circuit is connected in this way we have to multiplex the output. switch on display 1 and switch off the other 3 displays (2, 3,4) switch on display 2 and switch off the other 3 displays (1, 3,4) switch on display 3 and switch off the other 3 displays (1, 2,4) switch on display 4 and switch off the other 3 displays (1, 2,3) Decoding Function : function which will return Seven Segment Decoded mask of a single digit. Multipulxing 7 Segment Display using PIC18F2550 Microcontroller (Code) unsignedintmask(int a) { switch(a) {; } } Software : The firmware is written in C and compiled with MikroC Pro for PIC V7.1.0. Download crack version : CODE : #include "Display_Utils.h" unsigned short shifter, portb_index; unsigned int digit, number; unsigned short portb_array[4]; void interrupt() { PORTA = 0; // Turn off all 7seg displays PORTB = portb_array[portb_index]; // bring appropriate value to PORTD PORTA = shifter; // turn on appropriate 7seg. display // move shifter to next digit shifter <<= 1; if (shifter > 8u) shifter = 1; // increment portd_index portb_index++ ; if (portb_index > 3u) portb_index = 0; // turn on 1st, turn off 2nd 7seg. TMR0L = 0; // reset TIMER0 value TMR0IF_bit = 0; // Clear TMR0IF } void main() { ADCON1 =0x0F; // Configure PORTX pins as digital CMCON = 0x07; // comparators OFF TRISA = 0; // Configure PORTA as output PORTA = 0; // Clear PORTA TRISB = 0; // Configure PORTD as output PORTB = 0; // Clear PORTD T0CON = 0xC4; // Set TMR0 in 8bit mode, assign prescaler to TMR0 TMR0L = 0; // clear TMROL digit = 0; portb_index = 0; shifter = 1; number = 9980; // Initial number value GIE_bit = 1; TMR0IE_bit = 1; do { digit = number / 1000u ; // extract thousands digit portb_array[3] = conversion(digit); // and store it to PORTD array digit = (number / 100u) % 10u; // extract hundreds digit portb_array[2] = conversion(digit); // and store it to PORTD array digit = (number / 10u) % 10u; // extract tens digit portb_array[1] = conversion(digit); // and store it to PORTD array digit = number % 10u; // extract ones digit portb_array[0] = conversion(digit); // and store it to PORTD array Delay_ms(400); // one second delay number++ ; // increment number if (number > 9999u) number = 0; } while(1); // endless loop } Multipulxing 7 Segment Display using PIC18F2550 Microcontroller (Schematic Diagram) Results : After the c code is successfully compiled, a HEX file is generated.For simulating with PROTEUS ISIS hit run button and and then you will get above output. we can drive more than one Seven Segment Display by using a technique called ‘Multiplexing’. This technique is based on the principle of Persistence of Vision of our eyes. If the frames change at a rate of 25 (or more) frames per second, human eye can’t detect that visual change. Each display is turned on above this rate and our eyes will think that the display is turned on for whole the time. We have used Common Cathode Seven Segment Display in this example. Pins RD0 – RD6 are connected to the A – G of the display. This will count from 0000 to 9999. Resource : You can download the MikroC Source Code and Proteus files etc from here download The same application with PIC16F877A : This Our Group (thanks to join and participate) : Facebook page (thanks to join and share) : Youtube Channel (thanks to subscribe) : JLCPCB – Prototype 10 PCBs for $2 + 2 days Lead Time China’s Largest PCB Prototype Enterprise, 300,000+ Customers & 10,000+ Online Orders Per Day Inside a huge PCB factory:
http://pic-microcontroller.com/multipulxing-7-segment-display-using-pic18f2550-microcontroller/
CC-MAIN-2018-39
refinedweb
826
52.12
Customising a Builder Most builders can be generated simply by using <retep:builder/>, however sometimes you will need to modify the default settings. These go into the content of the element and are formed by a set of java Properties. Getters The default is not to generate a get method for head set() or add() method. If you want these generated you simply turn on getter generation: <retep:builder> generategetters=true </retep:builder> Subclassing and interfaces You can tell the plugin to have the builder extend some other class and/or implement an interface. In each of these you can use the extendsclass or implementsclass properties. Note: You can declare one or both of these, but only once for each one. The builder cannot implement multiple interfaces. <retep:builder> extendsclass=com.example.AbstractBuilder implementsclass=com.example.SomeInterface </retep:builder> If you want to declare generics for these, then the content of the <retep:builder/> element must be wrapped within an XML CDATA section. Objects referred by a schema but not defined If you have the case of a schema referring to objects generated outside of this schema, probably by a previous run of the maven-jaxb2-plugin and stored in another artifact, the plugin will assume that it has an associated builder with it. The Builder will be expected to be located within the same package and have the same name with Builder appended to it. An example from retepXMPP is the uk.org.retep.xmpp.builder.JID class which has it’s builder uk.org.retep.xmpp.builder.JIDBuilder. If that Builder class does not exist then javac will fail as it would be expecting it to be present in the classpath. Further reading This article only covers the main points of this plugin. There are many more configuration items available to it which can customise the builders even more. Unlike writing this in an article, it’s probably best to look directly at a working example, specifically the jabber:client namespace within retepXMPP. Although the plugin is complete, there are bound to be the odd bug or two where some combination of how JAXB generates specific classes are not implemented, or it generates a builder which doesn’t work as expected. If you do get one of these then please get in touch, either via XMPP, here or preferably within Jira.
http://blog.retep.org/2010/05/18/implementing-builders-with-jaxb-generated-objects/3/
crawl-003
refinedweb
393
59.43
. import tensorflow as tf from tensorflow import keras import numpy as np print(tf.__version__) 1.12.0 Download to your machine (or uses a cached copy if you've already downloaded it): imdb = keras.datasets.imdb (train_data, train_labels), (test_data, data(train_data), len(train_labels))) Training entries: 25000, labels: 25000 The text of reviews have been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:] Movie reviews may be different lengths. The below code shows the number of words in the first and second reviews. Since inputs to a neural network must be the same length, we'll need to resolve this later. len(train_data[0]), len(train_data[1]) (218, 189) Convert the integers back to words It may be useful to know how to convert integers back to text. Here, we'll create a helper function to query a dictionary object that contains the integer to string mapping: #(train_data[0]) "" Prepare the data The reviews—the arrays of integers—must be converted to tensors before fed into the neural network. This conversion can be done a couple of ways: One-hot-encode the arrays to convert them into vectors of 0s and 1s. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then, make this the first layer in our network—a Dense layerDlayer, and we'll explore it later.=tf.train.AdamOptimizer(), loss='binary_crossentropy', metrics=[ Epoch 1/40 15000/15000 [==============================] - 1s 54us/step - loss: 0.6923 - acc: 0.5339 - val_loss: 0.6908 - val_acc: 0.6712 Epoch 2/40 15000/15000 [==============================] - 1s 39us/step - loss: 0.6879 - acc: 0.7180 - val_loss: 0.6846 - val_acc: 0.7465 Epoch 3/40 15000/15000 [==============================] - 1s 41us/step - loss: 0.6781 - acc: 0.7669 - val_loss: 0.6721 - val_acc: 0.7582 Epoch 4/40 15000/15000 [==============================] - 1s 42us/step - loss: 0.6599 - acc: 0.7691 - val_loss: 0.6511 - val_acc: 0.7649 Epoch 5/40 15000/15000 [==============================] - 1s 39us/step - loss: 0.6311 - acc: 0.7913 - val_loss: 0.6195 - val_acc: 0.7818 Epoch 6/40 15000/15000 [==============================] - 1s 41us/step - loss: 0.5928 - acc: 0.8057 - val_loss: 0.5815 - val_acc: 0.7941 Epoch 7/40 15000/15000 [==============================] - 1s 41us/step - loss: 0.5482 - acc: 0.8229 - val_loss: 0.5399 - val_acc: 0.8108 Epoch 8/40 15000/15000 [==============================] - 1s 39us/step - loss: 0.5019 - acc: 0.8383 - val_loss: 0.4985 - val_acc: 0.8260 Epoch 9/40 15000/15000 [==============================] - 1s 40us/step - loss: 0.4580 - acc: 0.8516 - val_loss: 0.4607 - val_acc: 0.8382 Epoch 10/40 15000/15000 [==============================] - 1s 40us/step - loss: 0.4178 - acc: 0.8653 - val_loss: 0.4281 - val_acc: 0.8456 Epoch 11/40 15000/15000 [==============================] - 1s 42us/step - loss: 0.3837 - acc: 0.8757 - val_loss: 0.4031 - val_acc: 0.8519 Epoch 12/40 15000/15000 [==============================] - 1s 43us/step - loss: 0.3548 - acc: 0.8835 - val_loss: 0.3790 - val_acc: 0.8599 Epoch 13/40 15000/15000 [==============================] - 1s 42us/step - loss: 0.3294 - acc: 0.8908 - val_loss: 0.3618 - val_acc: 0.8640 Epoch 14/40 15000/15000 [==============================] - 1s 40us/step - loss: 0.3083 - acc: 0.8969 - val_loss: 0.3467 - val_acc: 0.8696 Epoch 15/40 15000/15000 [==============================] - 1s 42us/step - loss: 0.2902 - acc: 0.9009 - val_loss: 0.3348 - val_acc: 0.8732 Epoch 16/40 15000/15000 [==============================] - 1s 39us/step - loss: 0.2747 - acc: 0.9035 - val_loss: 0.3253 - val_acc: 0.8742 Epoch 17/40 15000/15000 [==============================] - 1s 40us/step - loss: 0.2599 - acc: 0.9108 - val_loss: 0.3172 - val_acc: 0.8769 Epoch 18/40 15000/15000 [==============================] - 1s 40us/step - loss: 0.2472 - acc: 0.9155 - val_loss: 0.3104 - val_acc: 0.8806 Epoch 19/40 15000/15000 [==============================] - 1s 41us/step - loss: 0.2355 - acc: 0.9187 - val_loss: 0.3048 - val_acc: 0.8816 Epoch 20/40 15000/15000 [==============================] - 1s 42us/step - loss: 0.2252 - acc: 0.9223 - val_loss: 0.3005 - val_acc: 0.8811 Epoch 21/40 15000/15000 [==============================] - 1s 40us/step - loss: 0.2151 - acc: 0.9260 - val_loss: 0.2964 - val_acc: 0.8820 Epoch 22/40 15000/15000 [==============================] - 1s 39us/step - loss: 0.2059 - acc: 0.9293 - val_loss: 0.2936 - val_acc: 0.8828 Epoch 23/40 15000/15000 [==============================] - 1s 37us/step - loss: 0.1975 - acc: 0.9321 - val_loss: 0.2913 - val_acc: 0.8826 Epoch 24/40 15000/15000 [==============================] - 1s 36us/step - loss: 0.1892 - acc: 0.9363 - val_loss: 0.2886 - val_acc: 0.8847 Epoch 25/40 15000/15000 [==============================] - 1s 40us/step - loss: 0.1818 - acc: 0.9404 - val_loss: 0.2870 - val_acc: 0.8851 Epoch 26/40 15000/15000 [==============================] - 1s 38us/step - loss: 0.1745 - acc: 0.9425 - val_loss: 0.2862 - val_acc: 0.8852 Epoch 27/40 15000/15000 [==============================] - 1s 37us/step - loss: 0.1683 - acc: 0.9461 - val_loss: 0.2856 - val_acc: 0.8848 Epoch 28/40 15000/15000 [==============================] - 1s 40us/step - loss: 0.1617 - acc: 0.9482 - val_loss: 0.2848 - val_acc: 0.8862 Epoch 29/40 15000/15000 [==============================] - 1s 39us/step - loss: 0.1557 - acc: 0.9503 - val_loss: 0.2846 - val_acc: 0.8866 Epoch 30/40 15000/15000 [==============================] - 1s 41us/step - loss: 0.1504 - acc: 0.9526 - val_loss: 0.2851 - val_acc: 0.8862 Epoch 31/40 15000/15000 [==============================] - 1s 38us/step - loss: 0.1443 - acc: 0.9555 - val_loss: 0.2857 - val_acc: 0.8866 Epoch 32/40 15000/15000 [==============================] - 1s 38us/step - loss: 0.1394 - acc: 0.9580 - val_loss: 0.2867 - val_acc: 0.8868 Epoch 33/40 15000/15000 [==============================] - 1s 41us/step - loss: 0.1339 - acc: 0.9601 - val_loss: 0.2878 - val_acc: 0.8869 Epoch 34/40 15000/15000 [==============================] - 1s 41us/step - loss: 0.1293 - acc: 0.9623 - val_loss: 0.2891 - val_acc: 0.8869 Epoch 35/40 15000/15000 [==============================] - 1s 45us/step - loss: 0.1250 - acc: 0.9633 - val_loss: 0.2905 - val_acc: 0.8866 Epoch 36/40 15000/15000 [==============================] - 1s 41us/step - loss: 0.1200 - acc: 0.9663 - val_loss: 0.2926 - val_acc: 0.8860 Epoch 37/40 15000/15000 [==============================] - 1s 43us/step - loss: 0.1160 - acc: 0.9671 - val_loss: 0.2949 - val_acc: 0.8858 Epoch 38/40 15000/15000 [==============================] - 1s 42us/step - loss: 0.1124 - acc: 0.9680 - val_loss: 0.2962 - val_acc: 0.8852 Epoch 39/40 15000/15000 [==============================] - 1s 41us/step - loss: 0.1078 - acc: 0.9700 - val_loss: 0.2985 - val_acc: 0.8855 Epoch 40/40 15000/15000 [==============================] - 1s 39us/step - loss: 0.1040 - acc: 0.9711 - val_loss: 0.3009 - val_acc: 0.8851 Evaluate the model And let's see how the model performs. Two values will be returned. Loss (a number which represents our error, lower values are better), and accuracy. results = model.evaluate(test_data, test_labels) print(results) 25000/25000 [==============================] - 1s 35us/step [0.32051958542823794, 0.87336](['val_acc', 'acc', 'val_loss', 'loss']) There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy: import matplotlib.pyplot as plt acc = history.history['acc'] val_acc = history.history['val_acc'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(acc) + 1) # "bo" is for "blue dot" plt.plot(epochs, loss, 'bo', label='Training loss') # b is for "solid blue line" plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() plt.clf() # clear figure acc_values = history_dict['acc'] val_acc_values = history_dict['val_acc'] plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend() plt.show() In this plot, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy. Notice the training loss decreases with each epoch and the training accuracy increases with each epoch. This is expected when using a gradient descent optimization—it should minimize the desired quantity on every iteration. This isn't the case for the validation loss and accuracy—they seem to peak after about twenty epochs. This is an example of overfitting: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations specific to the training data that do not generalize to test data. For this particular case, we could prevent overfitting by simply stopping the training after twenty or so epochs. Later, you'll see how to do this automatically with a callback. #@title.
https://www.tensorflow.org/tutorials/keras/basic_text_classification?hl=ja
CC-MAIN-2018-51
refinedweb
1,401
83.02
SQL.Open only creates the DB object, but dies not open any connections to the database. If you want to test your connections you have to execute a query to force opening a connection. The common way for this is to call Ping() on your DB object. See and Quoting from the doc of sql.Open(): Open may just validate its arguments without creating a connection to the database. To verify that the data source name is valid, call Ping. As stated, Open() may not open any physical connection to the database server, but it will validate its arguments. That being said if arguments are valid, it may return nil error even if the database server is not reachable, or even if the host denoted by dataSourceName does not exist. To answer your other question: What is the point of check for errors after this function if it does not return errors? You have to check returned errors because it can return errors. For example if the specified driverName is invalid, a non-nil error will be returned (see below). To test if the database server is reachable, use DB.Ping(). But you can only use this if the returned error is nil, else the returned DB might also be nil (and thus calling the Ping()method on it may result in run-time panic): if db, err := sql.Open("nonexistingdriver", "somesource"); err != nil { fmt.Println("Error creating DB:", err) fmt.Println("To verify, db is:", db) } else { err = db.Ping() if err != nil { fmt.Println("db.Ping failed:", err) } } Output (try it on the Go Playground): Error creating DB: sql: unknown driver "nonexistingdriver" (forgotten import?) To verify, db is: <nil> sql.open("postgres", "postgres://postgres:postgres/xxxx")连接数据库出错的时候,也不会报错, 很奇怪,那这种错误是怎么处理的呢? package main import ( "database/sql" "fmt" _ "github.com/lib/pq" "log" ) var db *sql.DB func main() { defer func() { fmt.Println(recover()) }() var ss string var err error // var err error if db != nil { db.Close() } else { db, err = sql.Open("postgres", "postgres://postgres:postgres@127.0.0.1/xinyi?sslmode=disable") if err != nil { log.Println("Can't connect to postgresql database") } else { err = db.Ping() if err != nil { fmt.Println("db.Ping failed:", err) } } err = db.QueryRow("select value from configs where key =$1", "line1_batch").Scan(&ss) if err != nil { log.Println("query error") } fmt.Println(ss) } } ----------------------------------------------------------------------------------------------------- SQL Drivers Go’s standard library was not built to include any specific database drivers. Here is a list of available third party SQL drivers . Setup First we will need to import the packages that our program will use. import ( “database/sql” _ “github.com/lib/pq” ) Here, we import the “database/sql” library which provides a generic interface for working with SQL databases. The second import, _”github.com/lib/pq”, is the actual postgresql driver. The underscore before the library means that we import pq without side effects. Basically, it means Go will only import the library for its initialization. For pq, the initialization registers pq as a driver for the SQL interface. Open Next we will need to open the database. It is important to note that calling “Open” does not open a connection to the database. The return from “Open” is a DB type and an error. The DB type represents a pool of connections which the sql package manages for you. db, err := sql.Open(“postgres”,”user=Arnold dbname=TotalRecall sslmode=disable”) “Open” returns an error which validates the arguments of a database open if err != nil { log.Fatal(“Error: The data source arguments are not valid”) } Ping Since the error returned from “Open” does not check if the datasource is valid calling Ping on the database is required err = db.Ping() if err != nil { log.Fatal(“Error: Could not establish a connection with the database”) } Prepare Once the DB has been set up, we can start safely preparing query statements. “Prepare” does not execute the statement. queryStmt, err := db.Prepare(“SELECT name FROM users WHERE id=$1”) if err != nil { log.Fatal(err) } QueryRow We can now “QueryRow” off of the prepared statement and store the returned row’s first column into the “name string”. “QueryRow” only queries for one row. var name string err = queryStmt.QueryRow(15).Scan(&name) In addition, a common error check is for “No Rows”. Some programs handle “No Rows” differently from other scanning errors. Errors like this are specific to the library, not Go in general. if err == sql.ErrNoRows { log.Fatal(“No Results Found”) } if err != nil { log.Fatal(err) } You can also skip explicitly preparing your Query statements. var lastName string err = db.QueryRow(“SELECT last_name FROM users WHERE id=$1”, 15).Scan(&lastName) if err == sql.ErrNoRows { log.Fatal(“No Results Found”) } if err != nil { log.Fatal(err) } Query We can also handle a Query that returns multiple rows and stores the result into a “names” slice. In the code below you will see “rows.Next”, which moves the cursor to the next result row. If there is no next row or error preparing the next row, a false will be returned. var names []string rows, err := queryStmt.Query(15) defer rows.Close() for rows.Next() { var name string if err := rows.Scan(&name); err != nil { log.Fatal(err) } names = append(names, name) } This next check is for any errors encountered during the iteration. err = rows.Err() if err != nil { log.Fatal(err) } Conclusion Golang’s standard sql package is extremely simple, yet powerful. This post covers the basics of the sql package. If you would like to learn more, visit the official docs at:. Feel free to leave any comments or questions. 有疑问加站长微信联系(非本文作者) 感谢作者:oxspirt 查看原文:golang 查询数据库操作
https://studygolang.com/articles/10166
CC-MAIN-2021-49
refinedweb
947
69.58
: There are three parts to understanding the Flex class library: First, I'll dig into what it means when you declare a user interface (UI) component using a tag. The Button class is a great starting point. Take a moment and ask yourself: How do I declare a button in MXML? If you are like most developers, you probably read the documentation and start with something like <mx:Button label=”Hello” />. There is nothing wrong with that, but what did you do? You instantiated a Button object from the class library, and at the same time, you set the label property for the object to a value of “Hello”. Everything in Flex is a class in the API. Ultimately, when you declare something using a tag, you create an instance of that class. There are two great ways to understand this at a deeper level. The first way is to set Flex up to save the ActionScript it generates that represents your application. The second way is to read specific parts of the documentation (which I will touch on momentarily). You can enable this option in the flex-config.xml file by changing the property keep-generated-as from "false" to "true". When you enable this option and run your application, Flex will store the ActionScript file that represents your application in the same directory as the original MXML source file. If you open the generated ActionScript file and read it, you can see how Flex instantiates the button (and a lot more). This is not only an excellent way to understand your application at a much lower level but it also gives you an excellent way to debug your code. Most beginning Flex developers are all too familiar with the Developing Flex Applications documentation, but did you know that there is another set of documentation that explains the class library? The Flex ActionScript and MXML API Reference is a great resource that should feel familiar for those of you familiar with JavaDoc. In the Flex ActionScript and MXML API Reference, take a closer look at the Button class in the All Classes pane. Here, the docs list classes you can use in your Flex applications in alphabetical order. Find the Button class and click the provided link. The corresponding documentation appears in the main pane. The first listing on this page is the class hierarchy. The Button class extends SimpleButton, which extends UIComponent, which extends UIObject. Each time a subclass is created, it adds more specifics to the behavior of the parent class. In this case you eventually end up with a full-fledged button and you can still leverage the classes that reside further up the tree for your own component needs. Try the following code example by creating an MXML file and running the application. <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns: <mx:SimpleButton /> </mx:Application> This little application is not particularly useful, but now you should understand what it is that you are instantiating. If you scroll further down through the documentation, you will find a plethora of information about properties you may have never known existed. Keep in mind that the class at the bottom of the inheritance chain gets all the functionality of the previous classes. For example, you may have used the “width” property on your buttons before, and now you know that the property is actually implemented all the way back up the tree at UIObject. If you expand your view and look at a potentially more complex UI component, you will find some interesting correlations between the classes and their MXML declarations. Most developers try the DataGrid control in one of their first projects (more on that later). A typical DataGrid declaration might look something like the following code listing: <mx:DataGrid> <mx:columns> <mx:Array> <mx:DataGridColumn headerText=”Name” /> <mx:DataGridColumn headerText=”Price” /> </mx:Array> </mx:columns> </mx:DataGrid> Now that you know this code simply instantiates a class, what does the MXML do? If you reference the API documentation I mentioned above, you can see that “columns” is a property on the DataGrid class, and that it takes an Array of DataGridColumn objects. In turn, the DataGridColumn class has a “headerText” property that is displayed at the top of the column. Bringing this concept full circle, can you think of another way to declare a Button class with a label? <mx:Button> <mx:label>Hello</mx:label> <mx:toolTip>Now I get it!</mx:toolTip> </mx:Button> To make this a little more interesting I added the toolTip property as well (found on UIObject). You can use this technique in a number of scenarios; more importantly, is that you now understand how the tags relate to the class library. Now you can read the MXML API and understand how to instantiate those classes, as necessary, in your Flex applications. It may seem that you are done, but you can take this quite a bit further. Tip: The default layout of a Panel may look like it is a VBox container, but it is actually a Box with the direction property set to vertical. VBox and HBox layout containers are convenience classes that extend from Box. It turns out then that if you want to lay out controls inside of a Panel in a horizontal direction, you can set the direction property to horizontal. No need to nest another container! How does all this discussion on classes relate to making custom components? If MXML tags simply declare classes and set properties, shouldn’t you then be able to create a component subclass using MXML? You bet you can, and the good news is that if you have played with Flex at all, it is likely that you have already done this to some degree, probably using an <mx:HBox> container or <mx:VBox> container! It is easy enough to understand how to take a box and make a subclass of that box by adding components to it, but what if you wanted to create a Label control subclass? Now that you understand the class structure and how you declare those classes in MXML, you can extend even the most basic component without having to resort to a pure ActionScript class. In this example, I wanted to create a Label subclass that would handle formatting of currency automatically. In short, I wanted to move the CurrencyFormatter that I might otherwise use in my main application, inside of the Label component itself. In a new MXML file, start with the standard XML declaration. After the XML declaration, establish an instance of the Label class. You are creating a component, so don’t forget to specify the appropriate namespace. Once you have created an instance of the Label class, your code should look like the following code snippet: <?xml version="1.0" encoding="utf-8"?> <mx:Label xmlns: </mx:Label> Since this Label subclass handles currency, the text property probably doesn’t make complete sense; add a property to the class that accepts a numeric value into a property called amount. Remember that Number is just another class, and that you can declare classes in MXML. You will likely also want to set the initial value of the new Label subclass to a numeric value (such as 0). <?xml version="1.0" encoding="utf-8"?> <mx:Label xmlns: <mx:Number id=”amount”>0</mx:Number> </mx:Label> Lastly, format the text property that the label displays by default. There are formatter classes available in Flex, so it makes sense to use an instance of one of those formatters inside the new Label class. Add an instance of the CurrencyFormatter class as a property in your new subclass and set the values, formatting it as you prefer. Then, declare the text property and use the CurrencyFormatter.format() method to fine tune your display. <?xml version="1.0" encoding="utf-8"?> <mx:Label xmlns: <mx:Number0</mx:Number> <mx:CurrencyFormatter <mx:text>{currency.format( amount )}</mx:text> </mx:Label> Congratulations, you have just created an MXML subclass for a simple control! Save your new MXML subclass as CurrencyLabel.mxml and reuse it. This is a very simple example, and probably one that’s not altogether useful, but it further illustrates that what you do in tags is what you might otherwise do using ActionScript. The principles of object-oriented programming really start to shine through and you can start finding ways to apply common design patterns to your applications. Next steps: Think about how you might apply validation, and raise and handle runtime errors if your application passed an alphabetic value to your new CurrencyLabel class. With a foothold firmly established in understanding what you are doing declaratively with MXML and how it relates to the Flex class library, you can start digging for some other really interesting features you might have otherwise overlooked. Now look at the View.createChild() method. This method requires: Because the View class is the base class for almost every container, everything from the Application class to the HBox class to the Canvas class supports this method through inheritance. With this in mind, you can create an entire UI programmatically through ActionScript. In the following code example, you will add a Label, TextInput, and Button component to your application through this dynamic technique to create a simple “Hello World” example. <?xml version="1.0" encoding="utf-8"?> <mx:Application <mx:Script> <![CDATA[ import mx.controls.Label; import mx.controls.TextInput; import mx.controls.Button; import mx.containers.Panel; public var btnSubmit:Button = null; public var txtFirst:TextInput = null; public function initApp( Void ):Void { var pnlHello:Panel = null; var init:Object = null; init = new Object(); init.verticalAlign = "middle"; init.direction = "horizontal"; init.title = "Hello World"; pnlHello = Panel( createChild( Panel, "pnlHello", init ) ); init = new Object(); init.text = "First name:"; pnlHello.createChild( Label, "lblFirst", init ); init = new Object(); init.text = "Macromedia"; txtFirst = TextInput( pnlHello.createChild( TextInput, "txtFirst", init ) ); init = new Object(); init.label = "Submit"; btnSubmit = Button( pnlHello.createChild( Button, "btnSubmit", init ) ); btnSubmit.addEventListener( "click", this ); } public function handleEvent( event:Object ):Void { if( event.type == "click" ) { if( event.target == btnSubmit ) { alert( "Hello " + txtFirst.text + "!", "Alert" ); } } } ]]> </mx:Script> </mx:Application> There are a few interesting points about this code example: Here, I have chosen an initApp() method that creates the UI, which is called during the initialize event of the application. For all practical purposes, the end user could drive the creation of the UI. I will talk more about this soon. Notice that I have also declared object variables for the UI components I wish to use outside of the initApp() method. Once I started creating the UI objects, I used an internal variable of type Object to hold my initialization properties. You may also use the classic ActionScript approach, using brackets (such as {text:”Macromedia”}) inline. The example may also introduce you to casting in ActionScript which I do to leverage the benefits of strict data typing. By default, View.createChild() actually returns a MovieClip object. This general data type may cause confusion or errors down the road, so while casting isn’t necessarily required, I encourage you to use it for code maintainability. Also notice that when creating the Label object, I ignore the return altogether. Since I won’t programmatically manipulate the Label object during the application, there’s no need to store a named reference. Lastly, notice how event handling differs when building dynamic UIs through ActionScript. In this example, the submit button must raise its click event so that the application can display an alert dialog box. Rather than use the init object, you leverage the UIEventDispatcher.addEventListener() method. When triggered, the EventDispatcher calls an handleEvent() method, which you must implement. The handleEvent() method takes an object as an argument which you can use to understand which component raised the event. The object argument has two properties: one specifies the event type and the other the event target (the component that fired the event). Since all components used in this programmatic fashion might call the handleEvent() method, you need to examine the type and target properties to isolate events and take appropriate action. Tip: When implementing the Model View Controller (MVC) composite pattern, it can be useful to leverage the event object by adding additional data properties. For example, if your MXML component contains a <mx:DataGrid> that triggers a change event, you will likely pass the selectedItem from the grid to any listeners. You might be inclined to have the component parent access the property directly, but this creates a tight coupling between UI elements. It is cleaner to add the selectedItem to the event object and let the parent leverage that property. One key flaw to note in this example: It requires far more code and can be substantially less maintainable. Clearly, you do not want to leverage this technique for generic UI requirements, but it can prove exceptionally useful in a number of scenarios. Generally, you can use this method to add components to your application at runtime when you might otherwise not know how many you will need. An example where this type of dynamic creation is useful might be a product comparison matrix. For instance, let's say that your application lists products that are similar in nature. You would like to create functionality so that the user can drag products to an area on the screen that presents more information about those products for side-by-side comparison. Once the user has dropped a product on the comparison area, you can now create an instance of the special comparison view (MXML component) and pass it the required data. There is also the opposite method, View.destroyChild(), that you can use to remove controls from the user interface. If there is one thing I have seen time and again, it is that developers stick with what they know. Web application developers know tables, grids, and page refreshes. Client/server developers know dialog box windows and multiple document interfaces (MDI). In the case of Flex this can be very problematic. Flex applications are not web applications, and they are not client/server either. Flex is its own hybrid, and requires that developers approach problems from a fresh perspective. The HALO look and feel used throughout Flex is a solid way to start addressing this hybrid environment. You can learn more about HALO from Mike Sundermeyer’s article. Mike makes a point that Flex applications “aren't OS-native applications; they are Internet applications that run in all different browsers and platforms.” This brings you as far as a component foundation, but what about overall UI considerations? Take the scenario of allowing the user to zoom in on an image. From a web application standpoint, you know that file size limits what you can do. You don’t want to deliver the high-resolution image, but scaling of a smaller image can result in pixilated image. From a client/server standpoint, you might be tempted to use the client’s power to read the high-resolution image outright and manipulate the image bits. Neither is ideal for a Flex application, but you can find a happy, hybrid, median. In a Flex application, it is the server that holds most of the processing power, but the client is intelligent. A solution to this problem, then, might be to have the server process the image based on event-driven requirements and deliver the resulting scaled and cropped image back to the client. In a Flex application, the client is responsible for letting the server know of its exact needs: the available viewing space and the pixel coordinates the user has specified for a zoom operation. This hybrid approach can result in the ability to have a client (Flex application) view a multi-megabyte high-resolution image and yet deliver only a few kilobytes of data at any one point in time. The trick is to think outside the box. When developing Flex applications, you might be tempted to stick to what you know. In some cases this may mean believing that Flex cannot deliver the required functionality, and yet in others it may mean expecting too much from the Flex application during runtime. Solutions you may have historically dismissed as challenging or impossible can now come into existence because of Flex. For more examples of thinking outside the box, check out my blog.
http://www.adobe.com/devnet/flex/articles/flexprimer_print.html
crawl-002
refinedweb
2,738
53.81
"Got to build a path in my code here, so I'll just use 'join' and a backslash... yeah, it isn't portable, but I'm never going to use this app on *nix,..." My meditation: it takes just a bit more extra effort and time to use File::Spec::Function, to make code relocatable, to make code as OS agnostic as possible. Because inevitably, there's some chance the code will be ported to a different machine, location, IP, OS, whatever. And if you've written it as portably as reasonably possible, it is Sheer Joy when in ports easily. And when you haven't, it is Sheer Annoyance, and you curse yourself for saving a few seconds back then in exchange for hours / days of effort now. The XP folks wisely warn against designing in extra functionality "just in case" -- I'm not advocating writing piles of extra difficult code just in case you end up porting your module elsewhere, or posting it on CPAN, whatever. I am advocating using simple idioms (like File::Spec) that don't take extra effort and give you the flexibility down the road. Because inevitably a successful project will wander across different platforms. Writing from the middle of an XP-to-Linux port -- water water, thank for this meditation. I've been fighting with several modules that I've installed at home on Linux that work just fine, but fail miserably on my Windows box at work because of this very issue. I'd just like to add on point though. If you do run into a module that you use on CPAN that fails because of this issue and you take the time to fix it, please take the time to put together a patch for the developer of the module and the public at large. Although it may be a pain to go through this extra step, your helping out the community at whole by making your work available. The loss in readability between: $file = "fnord/glop/bibba"; [download] $file = File::Spec -> catfile (qw /fnord glop bibba/); [download] The XP folks wisely warn against designing in extra functionality "just in case" Writing from the middle of an XP-to-Linux port Writing from the middle of an XP-to-Linux port Abigail In the rare cases where I had to do this, a simple fix_path function is usually sufficient to insert (which turns slashes into backslashes). I *do* need to look at File::Spec and others in these cases, but in general, conciseness is important to me when it can be done without causing problems. And when only dealing with Windows, Linux, and basic Unix systems, this is good enough for me. I'll capitulate when I have to port to something more obscure, I'm sure. I've never had a problem with ActiveState not translating between "/" and "\" in a path. The main problem I've had is with the tests, where a path in a string is "t/foo/bar" and Windows returns "t\foo\bar". This is enough to cause a test and CPAN install to fail. When I first started doing DBI stuff, I made the mistake of thinking, "I thought out this table design carefully, so I'm never going to add or change any fields." So I hardcoded the fields: my $q=$dbh->query("INSERT INTO foo VALUES (?,?,?,?,?); $q->execute($bar, $baz, $qx, $qux, $quux); [download] Hah. Don't ever do that. You end up tracking down every single query in your code and fixing it. Multiple times. I think one of the reasons File::Spec is not used more is that the default interface is ... erm ...strange. File::Spec->catfile(...)? Beg your pardon? Yeah OK so it uses inheritance to allow the default implementations of the functions to be overriden ... but that's just an implementation detail. And implementation details should not leak into the interface. The interface provided by File::Spec::Functions is the one that should have been the default one. It looks like the interface of most Perl modules, it allows you to import (or not) functions into your namespace just like any other well behaved module. A module that's OO and provides only static methods has no reason to be OO at all. This aint Java. Actually it seems to me OO inheritance is not the best implementation. It's slow. There is no reason why the right versions of the functions could not be found just once, on startup. I think it would be better to do something like this: #SpecBase.pm package SpecBase; require Exporter; @ISA = qw(Exporter); @EXPORT = @EXPORT_OK = qw(foo bar); sub foo { print "The base foo()\n"; } sub bar { print "The base bar()\n"; } 1; #SpecChild package SpecChild; require Exporter; @ISA = qw(Exporter); @EXPORT = @EXPORT_OK = qw(foo bar); use SpecBase qw(foo); # the inherited functions, to prevent the "Subro +utine bar redefined" warnings sub bar { print "Overwritten bar()\n"; } 1; #Spec.pm package Spec; ... sub import { shift(); ... find out the right version require $the_right_version; $the_right_version->import(@_) } 1; [download] Update: Actually looking into the code in File::Spec::Functions I see that the inheritance tree is not being searched through each time. The right method is found by the ->can(). But I think the closure also is not free. Jenda Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live. -- Rick Osborne Edit by castaway: Closed small tag in signature That's why Ken Williams, who maintains File::Spec, wrote Path::Class. It's a really, really nice object oriented module for path manipulations. Check out the Synopsis section of its POD to see how much better it is. My spouse My children My pets My neighbours My fellow monks Wild Animals Anybody Nobody Myself Spies Can't tell (I'm NSA/FBI/HS/...) Others (explain your deviation) Results (52 votes). Check out past polls.
http://www.perlmonks.org/index.pl?node_id=343476
CC-MAIN-2016-50
refinedweb
994
70.02
A decorator for caching properties in classes. Project description A decorator for caching properties monopoly.__dict__[: from cached_property import threaded_cached_property class Monopoly(object): def __init__(self): self.boardwalk_price = 500 @threaded_cached_property def boardwalk(self): """threaded_cached_property is really nice for when no one waits for other people to finish their turn and rudely start rolling dice and moving their pieces.""" sleep(1)) Working with async/await (Python 3.5+) The cached property can be async, in which case you have to use await as usual to get the value. Because of the caching, the value is only computed once and then cached: from cached_property import cached_property class Monopoly(object): def __init__(self): self.boardwalk_price = 500 @cached_property async def boardwalk(self): self.boardwalk_price += 50 return self.boardwalk_price Now use it: >>> async def print_boardwalk(): ... monopoly = Monopoly() ... print(await monopoly.boardwalk) ... print(await monopoly.boardwalk) ... print(await monopoly.boardwalk) >>> import asyncio >>> asyncio.get_event_loop().run_until_complete(print_boardwalk()) 550 550 550 Note that this does not work with threading either, most asyncio objects are not thread-safe. And if you run separate event loops in each thread, the cached version will most likely have the wrong event loop. To summarize, either use cooperative multitasking (event loop) or threading, but not both at the same time. Note: The ttl tools do not reliably allow the clearing of the cache. This is why they are broken out into seperate tools. See. Credits - Pip, Django, Werkzueg, Bottle, Pyramid, and Zope for having their own implementations. This package originally used an implementation that matched the Bottle version. - Reinout Van Rees for pointing out the cached_property decorator to me. - My awesome wife @audreyr who created cookiecutter, which meant rolling this out took me just 15 minutes. - @tinche for pointing out the threading issue and providing a solution. - @bcho for providing the time-to-expire feature Support This Project This project is maintained by volunteers. Support their efforts by spreading the word about: History 1.5.2 (2020-09-21) - Add formal support for Python 3.8 - Remove formal support for Python 3.4 - Switch from Travis to GitHub actions - Made tests pass flake8 for Python 2.7 1.5.1 (2018-08-05) - Added formal support for Python 3.7 - Removed formal support for Python 3.3 1.4.3 (2018-06-14) - Catch SyntaxError from asyncio import on older versions of Python, thanks to @asottile 1.4.2 (2018-04-08) - Really fixed tests, thanks to @pydanny 1.4.1 (2018-04-08) - Added conftest.py to manifest so tests work properly off the tarball, thanks to @dotlambda - Ensured new asyncio tests didn’t break Python 2.7 builds on Debian, thanks to @pydanny - Code formatting via black, thanks to @pydanny and @ambv 1.4.0 (2018-02-25) - Added asyncio support, thanks to @vbraun - Remove Python 2.6 support, whose end of life was 5 years ago, thanks to @pydanny 1.3.1 (2017-09-21) - Validate for Python 3.6 1.3.0 (2015-11-24) - Drop some non-ASCII characters from HISTORY.rst, thanks to @AdamWill - Added official support for Python 3.5, thanks to @pydanny and @audreyr - Removed confusingly placed lock from example, thanks to @ionelmc - Corrected invalidation cache documentation, thanks to @proofit404 - Updated to latest Travis-CI environment, thanks to @audreyr 1.2.0 (2015-04-28) - Overall code and test refactoring, thanks to @gsakkis - Allow the del statement for resetting cached properties with ttl instead of del obj._cache[attr], thanks to @gsakkis. - Uncovered a bug in PyPy,, thanks to @gsakkis - Fixed threaded_cached_property_with_ttl to actually be thread-safe, thanks to @gsakkis 1.1.0 (2015-04-04) - Regression: As the cache was not always clearing, we’ve broken out the time to expire feature to its own set of specific tools, thanks to @pydanny -. | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/cached-property/1.5.2/
CC-MAIN-2022-33
refinedweb
651
67.15
Hello everyone, I was trying to create a program that allows the user to input 4-30 assignment scores/max scores and drop 3 scores to output the best percentage. The exact requirements can be found in my other thread: I ended up using an array to store the score and max score separately. My problem at this point is trying to come up with a way to drop 3 scores. My code at this point is: Code java: import java.io.*; public class HW8 { public static void main(String args[]) throws IOException { BufferedReader keybd = new BufferedReader(new InputStreamReader(System.in)); double[] score = new double[31]; double[] max = new double[31]; System.out.println("Enter the amount of assignments: "); int numAssign = Integer.parseInt(keybd.readLine()); if (numAssign < 4) { numAssign = 4; } else if (numAssign > 30) { numAssign = 30; } double result = 0.0; double result2 = 0.0; for (int i = 1; i <= numAssign; i++) { System.out.print("Enter student's score #" + (i) + ": "); score[i] = Double.parseDouble(keybd.readLine()); if (score[i] < 0) { score[i] = 0; } System.out.print("Enter max score #" + (i) + ": "); max[i] = Double.parseDouble(keybd.readLine()); if (max[i] < 1) { max[i] = 1; } else if (max[i] > 100) { max[i] = 100; } if (score[i] > max[i]) { score[i] = max[i]; } result = result + score[i]; result2 = result2 + max[i]; } // ends for loop System.out.println("Original percentage: " + (result/result2)*100.0 + "%"); } // ends main } // ends class I was thinking about using if/else conditions but that wouldn't really limit the amount dropped to 3. Any advice is appreciated!
http://www.javaprogrammingforums.com/%20java-theory-questions/8892-how-drop-scores-printingthethread.html
CC-MAIN-2015-06
refinedweb
256
50.84
Make Python Pandas Go Fast Make Python Pandas Go Fast Learn how to use the open source big data platform, Wallaroo, with the Python Pandas library to perform analysis of big data sets. Join the DZone community and get the full member experience.Join For Free Some Background Suppose you have a Data Analysis batch job that runs every hour on a dedicated machine. As the weeks go by, you notice that the inputs are getting larger and the time it takes to run it gets longer, slowly nearing the one hour mark. You worry that subsequent executions might begin to ‘run into’ each other and cause your business pipelines to misbehave. Or perhaps you’re under SLA to deliver results for a batch of information within a given time constraint, and with the batch size slowly increasing in production, you’re approaching the maximum allotted time. This sounds like you might have a streaming problem! But — you say — other parts of the analytics pipeline are owned by other teams, and getting everyone on board with migrating to a streaming architecture will take time and a lot of effort. By the time that happens, your particular piece of the pipeline might get completely clogged up. Wallaroo, while originally designed for streaming and event data, can also be used to reliably parallelize many workloads not normally thought of as streaming, with little effort. Let’s make our Pandas go faster! We’ll use an ad-hoc cluster to parallelize a batch job and reduce its run-time by ¾ on one machine. The cluster will consist of several Wallaroo workers on one machine and can be shut down after the job is done. With this structure in place, we can easily scale out horizontally onto multiple machines, if needed. This means that we can roll out a little piece of streaming architecture in our own backyard, and have a story ready when the time comes to move other parts of the stack into the event streaming world. The Existing Pipeline # file: old_pipeline.py df = pd.read_csv(infile, index_col=0, dtype=unicode, engine='python') fancy_ml_black_box.classify_df(df) df.to_csv(outfile, header=False) The bottleneck lies in fancy_ml_black_box.classify_df. This function runs a classifier, written by our Data Analysts, on each row of the Pandas dataframe. Since the results of classifying a particular row are independent of classifying any other row, it seems like a good candidate for parallelization. A Note on the Fancy Black Box Classifer If you look inside the classifier source code, you’ll find that it calls dataframe.apply with a rather meaningless computation. We’ve chosen something that burns CPU cycles in order to simulate an expensive machine learning classification process and showcase the gains to be had from parallelizing it. Here’s how we can do it with Wallaroo: ab = wallaroo.ApplicationBuilder("Parallel Pandas Classifier with Wallaroo") ab.new_pipeline("Classifier", wallaroo.TCPSourceConfig(in_host, in_port, decode)) ab.to_stateful(batch_rows, RowBuffer, "CSV rows + global header state") ab.to_parallel(classify) ab.to_sink(wallaroo.TCPSinkConfig(out_host, out_port, encode)) The idea is to ingest the csv rows using our TCP source, batch them up into small dataframes, and run the classification algorithm in parallel. We’ll preserve the input and output formats of our section of the pipeline, maintaining compatibility with upstream and downstream systems, but, hopefully, see significant speed increases by leveraging all the cores on our server. Baseline Measurements Let’s get some baseline measurements for our application. Here are the run-times for input files of varying sizes: These numbers make it clear that we’re dealing with an algorithm of linear run-time complexity — the time taken to perform the task is linearly dependent on the size of the input. We can estimate that our pipeline will be in trouble if the rate of data coming in exceeds ~270 rows/second, on average. This means that if the hourly job inputs start to approach 1 million rows, new jobs may start ‘running into’ old jobs that haven’t yet finished. Parallelizing Pandas With Wallaroo Let’s see if we can improve these numbers a bit, by splitting all the work among the available CPU cores (8 of them) on this machine. First, we’ll need some scaffolding to set up input and output for Wallaroo. Step 1: Sending the CSV File to Wallaroo We’ll use a Python script to read all the lines in our input csv file and send them to our Wallaroo TCP Source. We’ll need to frame each line so that they can be decoded properly in the Wallaroo source: try: with open(filename, 'rb') as f: for line in f.readlines(): line = line.strip() sock.sendall(struct.pack(">I",len(line))+line) finally: sock.sendall(struct.pack(">I",len(EOT))+EOT) print('Done sending {}'.format(filename)) sock.close() sock.sendall(struct.pack(">I",len(line))+line) means: encode the length of the line as a 4-byte, big-endian integer ( I), then send both that integer and the full line of text, down the TCP socket. In the finally clause, we also encode and send down a single ASCII EOT byte, to signal that this is the end of our input. This TCP input is received by our decoder: @wallaroo.decoder(header_length=4, length_fmt=">I") def decode(bs): if bs == "\x04": return EndOfInput() else: return bs As you can see, if our data is the EOT byte ( \x04), we’ll create an object that makes the “End Of Input” meaning explicit. Otherwise, we’ll take the data as-is. Step 2: Batching the CSV Rows The next step in the pipeline is where we batch input rows into chunks of 100. @wallaroo.state_computation(name='Batch rows of csv, emit DataFrames') def batch_rows(row, row_buffer): return (row_buffer.update_with(row), True) The RowBuffer state object will take the first row it sees and save that internally as a header. Then it will accept incoming rows until it stores a certain amount (100 rows in our app). The .update_with(row) method will return None if the row was added but there’s still room in the buffer. If the update fills the buffer, it will zero out internally and emit a BatchedRowsobject with two fields: a header and rows. This object will get passed down to the next computation, while the RowBuffer will start collecting another batch. A Note on Serialization Efficiency Why go through the exercise of batching, when we can simply send each entry in the CSV file as a single-row dataframe to our classifier? The answer is: for speed. Every transfer of data between computation steps in Wallaroo can potentially entail coding and decoding the data on the wire, and the creation of dataframe objects is not without its own cost. Step 3: Classifying Mini-Dataframes in Parallel This is the part of the pipeline where we can bring Wallaroo’s built-in distribution mechanism down to bear on our problem: @wallaroo.computation(name="Classify") def classify(batched_rows): df = build_dataframe(batched_rows) fancy_ml_black_box.classify_df(df) return df There is some massaging involved in getting a BatchedRows object converted into a dataframe: def build_dataframe(br): buf = StringIO(br.header + "\n" + ("\n".join(br.rows))) return pd.read_csv(buf, index_col=0, dtype=unicode, engine='python') Essentially, we glue the BatchedRows.header to the BatchedRows.rows to simulate a stand-alone csv file, which we then pass to pandas.read_csv in the form of a StringIO buffer. We can now pass the resulting enriched dataframe to the fancy_ml_black_box.classify_df() function. All of the above work, including marshaling the data into a dataframe, happens in parallel, with every Wallaroo worker in the cluster getting a different instance of BufferedRows. Step 4: Encoding Back to a File The dataframe output by classify(), above, gets serialized and framed by the encode step. By now you should be somewhat familiar with the simple TCP framing used throughout this project: def encode(df): s = dataframe_to_csv(df) return struct.pack('>I',len(s)) + s With the helper function dataframe_to_csv defined as: def dataframe_to_csv(df): buf = StringIO() df.to_csv(buf, header=False) s = buf.getvalue().strip() buf.close() return s This representation is read by the Wallaroo tool data_receiver, which is told to listen for --framed data: nohup data_receiver \ --framed --listen "$LEADER":"$SINK_PORT" \ --ponynopin \ > "$OUTPUT" 2>&1 & Which is great, because that’s what it’s going to get. The output will be written to a file, specified by the environment variable OUTPUT. The Effects on Run-Time First, let’s verify that the new code produces the same output as the old code: $ /usr/bin/time make run-old INPUT=input/1000.csv ./old_pipeline.py input/1000.csv "output/old_1000.csv" 3.85user 0.47system 0:03.70elapsed 116%CPU (0avgtext+0avgdata 54260maxresident)k 176inputs+288outputs (0major+17423minor)pagefaults 0swaps $ /usr/bin/time make run-new N_WORKERS=1 INPUT=input/1000.csv INPUT=input/1000.csv OUTPUT="output/new_1000.csv" N_WORKERS=1 ./run_machida.sh (..) 4.48user 0.90system 0:04.13elapsed 130%CPU (0avgtext+0avgdata 63808maxresident)k 0inputs+352outputs (0major+989180minor)pagefaults 0swaps $ diff output/new_1000.csv output/old_1000.csv $ echo $? 0 Yay! The results match, and the run-time is only 1-second slower, which is not that bad, considering we’re launching 3 separate processes (sender, wallaroo, and receiver) and sending all the data over the network twice. Now, let’s see the gains to be had on bigger inputs. First, the 10,000-line file: Now, with the 100,000-line file: And with the million-line file: Why Didn't You Test on Two Workers? Due to the single-threaded constraints of Python’s execution model, the initializer in a wallaroo cluster will often aggressively undertake its share of a parallel workload before sending out work to the rest of the cluster. This means that running a parallel job on two workers will not yield speed benefits. We recommend running clusters of at least four workers in order to leverage Wallaroo’s scaling capabilities. As you can see above (and verify for yourself by cloning this example project), we were able to cut the million-line processing time down to sixteen minutes. Moreover, if the input datasets become too large for our single-machine, eight-worker cluster, we can very easily add more machines and leverage the extra parallelism, without changing a single line of code in our Wallaroo application. This gives us considerable capacity to weather the storm of increasing load, while we design a more mature streaming architecture for the system as a whole. What’s Next? Hopefully, I’ve made the case above that Wallaroo can be used as an ad-hoc method for adapting your existing Pandas-based analytics pipelines to handle increased load. Next time, I’ll show you how to spin up Wallaroo clusters on-demand, to handle those truly enormous jobs that will not fit on one machine. Putting your analytics pipelines in a streaming framework opens up not only possibilities for scaling your data science, but also for real-time insights. Once you’re ready to take the plunge into a true evented model, all you have to do is send your data directly to Wallaroo, bypassing the CSV stage completely. The actual Wallaroo pipeline doesn’t need to change! With a little up-front investment, you’ve unlocked a broad range of possibilities to productionize your Python analytics code. Published at DZone with permission of Simon Zelazny , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/make-python-pandas-go-fast
CC-MAIN-2019-22
refinedweb
1,936
53
Class to hold statuses. More... #include <status.hpp> Class to hold statuses. Returns an array of matched accounts. Returns application from which the status was posted. Returns card. Returns content of status. Sets content of status. Returns time of creation. Returns an array of emojis. Constructs an Entity object from a JSON string. Returns true if the user has favourited the status. Returns the number of favourites. Returns the ID of the status. Returns the ID of the account it replies to. Returns the ID of the status it replies to. Sets the ID of the status it replies to. Returns the language of the status. Overrides the language of the status (ISO 639-2) Returns the attachments. Sets the attachments. Returns the mentions. Returns true if the user muted the conversation. Returns true if the status is pinned. Returns the reblogged Status. Returns true if the user has reblogged the status. Returns the number of reblogs for the status. Returns the number of replies for the status. Returns true if the attachments should be hidden by default. Sets sensitive flag for attachments. Returns the spoiler text. Sets the spoiler text. Returns the tags. Returns the Fediverse-unique resource ID. Returns the URL to the status page. Returns true if the Entity holds valid data. Implements Mastodon::Easy::Entity. Returns the visibility of the status. Sets the visibility of the status.
https://doc.schlomp.space/mastodon-cpp/classMastodon_1_1Easy_1_1Status.html
CC-MAIN-2019-22
refinedweb
231
73.34
Using Vue.js with Ruby on Rails 5.2 Application Vue is a progressive JavaScript library for building user interfaces inspired by both Angular and React. You can use Vue.js with Ruby on Rails to build full-stack web applications using different approaches, such as: - Using separate front-end (Vue.js) and back-end apps (Ruby on Rails), - Including Vue.js in Ruby on Rails views In this tutorial, we'll see how to use Vue.js in your Rails application without creating a separate back-end. This is useful if you want to use Vue.js instead of plain JS or jQuery. Creating a Ruby on Rails Project Let's start by creating a new Ruby on Rails project using the following command: $ rails new vuerailsdemo --skip-turbolinks --webpack=vue The --webpack=vue option tells Rais to use Webpack for managing assets. You need to put your JavaScript code in the app/javascript/packs folder. Creating a Controller & View Now let's proceed by creating a Rails controller and view using the following command: $ rails generate controller Main index Next, let's make the index action as the root of your Rails application. Open the config/routes.rb file and add: root 'main#index' Hello Vue Create a app/javascript/packs/vueapp.js file and add the following code to create a Vue instance: import Vue from 'vue/dist/vue.esm' document.addEventListener('DOMContentLoaded', () => { new Vue({ el: '#app', data: { message: 'Hello Vue!' }, methods: {} }); }); The Vue instance takes a bunch of properties such as el used to specify the selector of the DOM element here to attach the instance, data used for adding any data required by the Vue instance and methods for adding any methods you need to use. Next, open the app/views/main/index.html.erb file, and add a <div> with the #app id <h1>Rails & Vue.js App</h1> <div id="app"> </div> We use the parentheses to display the content of the message variable of the Vue instance in our template. Next, you need to include the app/javascript/packs/vueapp.js file in the application.html.erb file. Open the app/views/layouts/application.html.erb file and use the javascript_pack_tag tag to include the JS file inside <head>. <%= javascript_pack_tag 'vueapp' %> That's all you need to include Vue.js in your Rails view. Serving your Ruby on Rails Application Head back to your terminal and run the following command to serve your Rails web application: $ rails s Now, navigate to the address with your web browser to see your app up and running. You can see from this simple example how easy it is to use Vue.js within your Rails application. Rails 5.1+ added support for Webpack which you can use to include modern JavaScript frameworks and libraries like Vue.js or React.
https://www.techiediaries.com/vuejs-ruby-on-rails/
CC-MAIN-2020-34
refinedweb
474
57.77
The explorer displays the contents of your scene in a hierarchical structure called a tree. This tree can show objects as well as their properties as a list of nodes that expand from the top root. You normally use the explorer as an adjunct while working in Softimage, for example, to find or select elements. Using the explorer, you can: Display and navigate among various elements. Set the scope to determine the range of information shown. Set filters to determine the types of information shown. Sort and reorder elements. Create parent/child relationships between objects by dragging and dropping. Add objects to groups, layers, and partitions by dragging and dropping. Duplicate elements by Ctrl+dragging. You can open a full explorer, or any one of many specialized pop-up explorers. Note that the pop-up explorers only allow you to select elements and open property editors. Click outside a pop-up explorer to close it. Opening the Full Explorer Do any one of the following: Press 8 (use the key at the top of the keyboard, not the numeric keypad) or choose View General Explorer from the main menu bar. This opens the explorer in a floating window. Choose Explorer from a viewport's Views menu. The explorer opens docked in that viewport. Opening Select Panel Explorers You can use the buttons on the Select panel to quickly open specialized pop-up explorers. You can quickly display a pop-up explorer for a single object — just select the object and press Shift+F3. If the object has no synoptic property or annotation, you can press simply F3. You can press those keys again to close it. Opening Schematic View Explorer To display information on an element in the schematic view, right-click on its node and choose Explore from the menu. For more information on the schematic view, see The Schematic View. The explorer displays elements in a tree-like hierarchy of nodes. Selecting Elements in the Explorer Selection in the explorer is different than the 3D and schematic views, but it is quite straightforward. You can select any element at any time in the explorer by simply clicking on its name, even if it is unselectable or hidden in the 3D views. You do not need to activate a selection tool first (unless another tool like Parent or Reorder is active), and the selection filters on the Select panel are ignored. Selected elements are highlighted in white in the explorer, and the children of a branch-selected element are highlighted in gray. Note that selecting parameters has the effect of marking them for animation. In addition, you cannot select components in the explorer. For more information about selecting in general, including commands for modifying the selection, see Selecting [Scene Elements]. Selecting a Single Element in the Explorer To select any element in the explorer Note that clicking on its icon opens the corresponding property editor without selecting the element. Multi-selecting Elements in the Explorer You can use the Shift key to select multiple elements in the explorer. When you Shift+click to add a second element to the selection, the rules are as follows: If the both first and second elements are of the same type (3D object, property, group, pass, partition, etc.), then all elements of that type between them are added to the selection. Other types of elements in-between are not selected; for example, if you click on a 3D object and then Shift+click on another 3D object, then all visible 3D objects in-between become selected, but properties, shaders, and so on, remain unselected. When Shift+clicking in the explorer, models and 3D objects are considered to be one type, as are properties and materials (but not shaders). If the two elements are of different types, the second one is simply added to the selection. Only visible elements are selected. Elements that are under a collapsed node remain unaffected. Select the first element in the range. You can select the element in any way, using the explorer or not. However, note that if a element appears multiple times in the explorer (for example, if it is a member of a group), you should select it in the explorer so that it uses the correct "anchor point" for subsequent clicks. Shift+click on the name of the last element in the range. Use the left mouse button to add elements to the selection. Use the middle mouse button to add branches to the selection. To add ranges to the selection, Ctrl+click on the first element in a new range and then Ctrl+Shift+click on the last element. Toggle-selecting Elements in the Explorer Ctrl+click on the name of any element. Use the left mouse button to toggle elements. Use the middle mouse button to toggle branches. Deselecting Elements in the Explorer Ctrl+Shift+click on the name of any element. Use the left mouse button to deselect elements. Use the middle mouse button to deselect branches. Keeping Track of Selected Elements in the Explorer If you have selected elements in another view and their nodes are not visible in the explorer, choose View Find Next Selected Node. The explorer scrolls up or down to display the first object node in the order of its selection. Each time you choose this option, the explorer scrolls up or down to display the next selected node. Once the end of the selection order is reached, the command loops around to the beginning again. Choose View Track Selection if you want to automatically scroll the explorer so that the node of the first selected object is always visible. Setting the Scope of the Explorer The Scope button determines the range of elements to display. You can display specific parts of the scene, as well as preferences and other parts of the Softimage application itself. Locking onto a Selected Object The Selection option in the explorer's scope menu displays information associated with the currently selected object. If you click the Lock button with the Selection option active, the explorer continues to display the property nodes of the currently selected objects, even if you go on to select other objects using other views. Click Lock a second time to switch off the lock feature. When Lock is on, you can also select another object and click Update to lock on to it and update the display. Filters control which types of nodes are displayed in the explorer. For example, you can choose to display objects only, or objects and properties but not clusters or parameters, and so on. By displaying exactly the types of elements you want to work with, you can find things more quickly without scrolling through a forest of nodes. The basic filters are available on the Filters menu (between the View menu and the Lock button). The label on the menu button shows the current filter. The filters available on the menu depend on the scope. For example, when the scope is Scene Root, the Filters menu offers several different preset combinations of filters, followed by specific filters that you can toggle on or off individually. The filters you set are saved and restored the next time you use the same scope. To restore the defaults for all scopes, you can restore the default preferences for the explorer view as described in Restoring Preferences [Data Management], Note that the Groups filter toggle also affects partitions and layers. Finding Elements in the Explorer Although the explorer lets you search for elements, you may prefer to use the Scene Search view if you are looking for objects and other elements under the scene root. See The Scene Search View. The search box on the right of the explorer command bar lets you search for elements by name, by type, or by keyword. When you perform a search in the explorer, the matching elements are displayed in a flat list and the scope automatically changes to Custom. If there is no match, nothing happens. If you perform a new search immediately, the previous scope is used automatically so you don't need to reset it. To display all elements again, click the triangle to the right of the search box and choose All Items. Alternatively, select another scope. Finding Elements by Name in the Explorer The search box lets you search for elements by name using wildcards and regular expressions. Set the explorer's scope to the desired range (see Setting the Scope of the Explorer). The scope of the explorer determines the range of the search. For example, you can search for any object in the scene if the scope is Scene Root. However, if the scope is Current Layer, then the search is restricted to objects in the current layer. Type a string in the search box. For a list of wildcards that you can use, see Valid Search Patterns. Press Enter or click outside the search box. To repeat a recent name search Click the triangle to the right of the search box, and choose a previous search string from the Recent Name Search list. Finding Elements by Type in the Explorer The filters available on the name search box let you search for elements by type. Set the explorer's scope to the desired range (see Setting the Scope of the Explorer). The scope of the explorer determines the types of element you can search for. For example if the scope is Scene Root, then you can search for nulls, curves, polygon meshes, and so on. Click the triangle to the right of the search box, and choose an item from the Filters or Custom Filters submenu. For information about adding your own custom filters, see Custom Filter [SDK Guide]. Finding Elements by Keyword If there are user keywords on elements, you can search for them in the explorer. For information about adding keywords to elements, see User Keywords [Scene Elements]. Set the explorer's scope to the desired range (see Setting the Scope of the Explorer). The scope of the explorer determines the types of element you can search for. For example if the scope is Scene Root, then you can search for keywords on objects. Click the triangle to the right of the search box, and choose an item from the User Keywords submenu. If the current scope is Custom, then the previously selected keyword appears with a checkmark. Sorting and Reordering Elements in the Explorer You can sort the elements in the explorer according to various criteria using options in the View menu. In addition, you can reorder elements to change the default, or creation, order. The sort orders are remembered for each scope and they are updated and saved in your Explorer [Preference Reference]. To sort objects, sources, clips, and other basic elements, choose the desired option from the View General Sort submenu of the explorer: None (creation) uses the default order, based on when an element was created or parented. Alphabetical sorts the elements alphabetically. Any numeric suffix is sorted in correct numerical order, so Object2 comes before Object10. The children of an object are always listed after any parameter sets of the object. Type + Alphabetical sorts the elements by type first, and then alphabetically within each type. The types depend on the scope. For example with the Scene Root scope, the explorer lists all the cameras in alphabetical order, then all the lights, models, referenced models, nulls, chains, curves, polygon meshes, NURBS surfaces, text, point clouds, hair objects, control objects, forces, dynamic constraints, implicits, and geoshaders, each in alphabetical order. If the scope contains only one type of element, this option is equivalent to Alphabetical. Used + Alphabetical sorts the elements into used and unused groups, and then sorts alphabetically within each group. This option is available only with the Sources/Clips scopes. To sort parameters, choose the desired option from the View Parameter Sort submenu of the explorer: None (creation) uses the default order based on when the parameter was created. Alphabetical sorts the parameters alphabetically. Any numeric suffix is sorted in correct numerical order, so Param2 comes before Param10. Layout uses the order in which the parameters appear in their property editor. If a parameter does not appear in the corresponding property editor, it is not listed. Reordering Scene Objects in the Explorer By default, elements in a scene are ordered according to when they were created or when they became children of their parent. This underlying order is reflected in the explorer and schematic views when elements are not sorted, and is also used when selecting the next or previous sibling using the buttons on the Select panel or the Alt+arrow keys. You can change this underlying order in the explorer using the Reorder tool. The Reorder tool allows you to reorder: Child objects of a parent. Passes in the pass list. Note that the order of passes in the Pass Selection menu on the Render toolbar and main menu bar is based on the sort order set in the explorer. Layers in the layers list. Clusters in a cluster container. To reorder scene elements in the explorer Make sure that View General Sort None (creation) is checked. Choose View Reorder Tool. The mouse pointer changes to show that the tool is active. Drag an element above or below another one in the scene explorer. Repeat to reorder more objects. When you have finished, exit the tool by pressing Esc. Using Context Menus in the Explorer You can right-click on any element in the explorer to perform a variety of functions, such as expanding or collapsing hierarchies, renaming or deleting nodes, or opening property editors. Right-clicking certain nodes provides specific options. For instance, you can use a constraint node's context menu to activate or deactivate the constraint. Right-clicking a container node (for example, the node that contains all of an object's constraints) displays a menu for activating, deactivating, or deleting all the nodes in the container at once. In general, when you open a context menu: If the element under the mouse pointer is not selected, then only that element is affected by the menu command you choose. If the element under the mouse pointer is selected, then all selected elements are affected. The exception is when you use the context menu in the explorer to mute/unmute deformations or activate/deactivate constraints. These commands act like toggles, and affect only the deformation or constraint under the pointer. A check mark in the menu indicates whether the node is currently muted or active. Renaming Scene Elements in the Explorer You can rename elements in your scene using the explorer. You can rename lights, cameras, geometric objects, render passes, and layers. Right-click on any element node in the explorer and choose Rename from the menu. Select the element you want to rename in the explorer and press F2. If you have multiple nodes selected, pressing F2 lets you rename the last node you selected. Type a new name. You can click with the mouse or use the arrow keys to shift the cursor position. You can drag with the mouse or use the Shift+arrow keys to select text. To finish, either press Enter or click outside the highlighted name area. If the new name is not unique in its namespace, a number is appended automatically.
http://download.autodesk.com/global/docs/softimage2013/en_us/userguide/files/3dexplorer510.htm
CC-MAIN-2016-44
refinedweb
2,569
54.83
Lucas-Carmichael Numbers January 6, 2015 We start the new year with a simple task from number theory. A Lucas-Carmichael number is a positive composite integer n, odd and square-free, such that p + 1 divides n + 1 for all prime factors p of n. For instance, 2015 is a Lucas-Carmichael number because 2015 = 5 × 13 × 31 and 5 + 1 = 6, 13 + 1 = 14, and 31 + 2 = 32 all divide 2015 + 1 = 2016. The restriction that n be square-free prevents the cubes of prime numbers, such as 8 or 27, from being considered as Lucas-Carmichael numbers. Your task is to write a program to find all the Lucas-Carmichael numbers less than a million. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below. Perl, 5s for 1e7 (135 L-C numbers), <1.5min for 1e8 (323 L-C numbers). Perl: use warnings; use strict; use ntheory qw/:all/; foroddcomposites { my $n=$_; say unless moebius($n) == 0 || vecsum( map { ($n+1) % ($_+1) != 0 } factor($n) ); } 1e6; I decided to use a sieve. For p = 3, write out the multiples of p and of (p+1): 3 6 9 12 15 18 21 24 27 4 8 12 16 20 24 28 The only values of n for which (n+1)%(p+1) == 0 are n = 15, 27, 39, …. Similarly, for p = 5, the only values are n = 35, 65, 95, …. In general, for a prime ‘p’, the only values of ‘n’ for which (n+1)%(p+1) == 0 are given by: p + k*p*(p+1) for k = 1,2,3,… Initialize the sieve to all 1s. For each odd prime ‘p’: for k in 1, 2, 3, …: sieve[p+k*p*(p+1)] *= p return a list of all n where sieve[n] == n The code below sieves the primes I parallel with sieving the LCNs. The sieves also only contain odd n, so the index for a value of n is given by n//2. Finds 323 LCNs under 1e8 in about 12 seconds. Haskell import Data.List factors :: Int -> [Int] factors n = factors’ n 2 where factors’ n f | f*f > n = [n] | n `mod` f == 0 = f : factors’ (n `div` f) f | otherwise = factors’ n (f+1) noDupes :: [Int] -> Bool noDupes xs = length(nub xs) == length xs lcn :: [Int] -> [Int] lcn xs = filter (\x -> let fs = factors x in length(fs) > 1 && noDupes(fs) && (all (\f -> (x + 1) `mod` (f + 1) == 0) fs) ) xs limit = 1000000 n = [3,5..limit] main = print (lcn n)
https://programmingpraxis.com/2015/01/06/lucas-carmichael-numbers/
CC-MAIN-2017-51
refinedweb
442
75.54
<ac:macro ac: <h2>Information</h2> <ul> <li>Date: 01 February 2012, 18:00-19:00 UTC</li> <li><ac:link><ri:page ri:<ac:link-body>Agenda</ac:link-body></ac:link></li> <li>Moderator: Matthew Weier O'Phinney (nickname weierophinney)</li> <li>Next meeting: 08 February 2012</li> </ul> <h2>Summary</h2> <h3>Weekly meetings? </h3> <p>(Search for "18:03:13" in the log.)</p> <p>During <ac:link><ri:page ri:<ac:link-body>last week's meeting</ac:link-body></ac:link>, a number of folks brought up that they would like more frequent meetings, in part:</p> <ul> <li>To allow for shorter meetings</li> <li>To get more frequent updates on project status</li> <li>To space out votes (i.e., fewer votes per meeting)</li> </ul> <p>There was little to no real discussion this week – everybody felt it was a good idea, and thus it got a quick "+1".</p> <p><strong>tl;dr</strong>: Meetings are now weekly, at 18:00 UTC on Wednesdays.</p> <h3>Vote on the Zend\View Proposal </h3> <p>(Search for "18:13:42" in the log.)</p> <p>Matthew (me) posted the <ac:link><ri:page ri:<ac:link-body>View layer RFC</ac:link-body></ac:link> some weeks ago, and has a prototype ready; he has asked for a vote so he can continue in earnest.</p> <p>There was a little discussion surrounding the following topics:</p> <ul> <li>Terminology. ralphschindler thinks there may be confusion between the terms "Response" (response object), "Result" (result of execution, passed to the MvcEvent), and ViewModels. He did not think it was a deal-breaker in terms of approval however; Matthew asked him to clarify his problems on the list so that he can address them if necessary.</li> <li>EvanDotPro suggested an "ErrorHandlingViewStrategy" should be included by default.</li> <li>Where should the strategy be instantiated and attached to the event manager? Matthew suggested we could potentially do this in the Application or Bootstrap objects, but it needs a little thought.</li> <li>Aldemar wanted to ensure that the component handles disambiguation properly. As an example, as of beta2, two controllers in two modules using the same methods will result in the same view script being invoked – which can lead to the wrong view script being invoked. Matthew indicated that this is resolved via a combination of hinting to the ViewModel returned by the controller which view script to use, and configuring your resolvers correctly.</li> </ul> <p>In the end, there were no dissenting votes, providing approval for the RFC.</p> <p><strong>tl;dr</strong>: The View Layer RFC was approved.</p> <h3>Discussion of Coding Standard Ballot Items</h3> <p>(Search for "18:39:59" in the log.)</p> <p>Ralph started a discussion last week about naming of interfaces (and abstract classes, and traits), in light of usage patterns and problems encountered so far in ZF2's life-cycle. The discussion is backed by an <ac:link><ri:page ri:<ac:link-body>RFC</ac:link-body></ac:link>, which details some of the issues and rationale behind proposed changes.</p> <p>Ralph has now created a <ac:link><ri:page ri:<ac:link-body>poll</ac:link-body></ac:link>, and wanted to ensure it accurately listed all options we should vote on. </p> <p>The only change suggested was by Thinkscape, who suggested adding an option for having traits in their own subnamespaces of components.</p> <p>With that change, we voted to open the poll for one week, starting today.</p> <p><strong>tl;dr</strong>: the poll on coding standards for interfaces, abstract, and trait naming conventions is now open.</p> <h3>PHP 5.4?</h3> <p>(Search for "18:50:52" in the log.)</p> <p>I raised the idea of considering PHP 5.4 as the minimum required version for ZF2. The primary reasons:</p> <ul> <li>PHP 5.4 stable is on the verge of <em>in</em>> <p>In the end, we decided to provide a poll in the wiki, allow discussion for a week, and then open the poll to voting. Matthew will create this in the next day or so. <strong>UPDATE:</strong> poll is <ac:link><ri:page ri:<ac:link-body>here</ac:link-body></ac:link>.</p> <p><strong>tl;dr</strong>: Many, many opinions on PHP 5.4 adoption and what it would mean to the framework, and little consensus. More discussion is needed.<"> Feb 01 18:03:13 <weierophinney> First topic: do we want to move to weekly meetings for the foreseeable future? Feb 01 18:04:04 <NickBelhomme_> I cannot comment on that one, because I don't follow the bi-weekly meetings anymore on a regular basis. Feb 01 18:04:19 <NickBelhomme_> The people who requested that are here? Feb 01 18:04:25 <weierophinney> I'll admit I only perused the log quickly last week. Feb 01 18:04:27 <ralphschindler> Thinkscape had issues with weekly meetings IIRC Feb 01 18:04:28 <NickBelhomme_> if not... then 2 weekly seems logical Feb 01 18:04:28 <lubs> weierophinney: yes; i would like to see the certain ones of them more status related Feb 01 18:05:07 <ralphschindler> lugs you're implying that decision making meetings can't happen weekly then, right? Feb 01 18:05:12 <weierophinney> But I know from using my PM hat that having weekly makes it much easier for me to gauge what stuff is ready, and to coordinate volunteers for initiatives that are having issues (missing contribs, busy contribs, etc) Feb 01 18:05:13 <ralphschindler> lubs* Feb 01 18:05:53 <lubs> ralphschindler: well; yes and no... smaller decisions could be made but potentially larger ones could wait for a bi-weekly or monthly meeting? Feb 01 18:06:27 <EvanDotPro> weierophinney: that's my perspective as a project manager as well Feb 01 18:06:39 <weierophinney> lubs, or we could have fewer decisions per meeting as well Feb 01 18:06:45 <weierophinney> which may keep them shorter. Feb 01 18:06:52 <lubs> weierophinney: that would work; which would be awesome Feb 01 18:06:59 <ralphschindler> Thinkscape: you had reservations about the weekly meeting (That is what we're discussing) Feb 01 18:07:01 <weierophinney> Those just joining: what are your thoughts on weekly meetings? Feb 01 18:07:26 <MikeA> +1 Feb 01 18:07:30 <ralphschindler> i think fewer decisions is a product of going weekly, no? Feb 01 18:07:30 <Bostjan> +1 Feb 01 18:07:40 <weierophinney> ralphschindler, I'd think so. Feb 01 18:07:56 <NickBelhomme_> weekly meetings gives more flexibility to people who want to attend, if they skip one, they have the bi-weekly. If they have something important to throw they can do that in the next upcoming week meeting. Feb 01 18:07:57 <EvanDotPro> i think weekly meetings to better maintain a picture of the progress being made is good – i do think we should put tihngs like voting into more persistent mediums like the wiki Feb 01 18:08:22 <Thinkscape> Sorry, my alarm clock failed. I'm ok for weekly, unless we don't have any topics to discuss which I doubt. Feb 01 18:08:25 <Bostjan> i like reading irc logs, it's nice to see progress (from those who are not involved in developing zf2) Feb 01 18:08:44 <PadraicB> +1 for weekly Feb 01 18:08:47 <ralphschindler> +1 Feb 01 18:08:47 <weierophinney> EvanDotPro, I think that makes sense as well. Feb 01 18:08:55 <Slamdunk> +1 for weekly Feb 01 18:08:56 <EvanDotPro> +1 for weekly from me as well Feb 01 18:09:29 <NickBelhomme_> +1 weekly: smaller and quicker feedback Feb 01 18:10:09 <weierophinney> kk, I'm calling it: agreed to weekly meetings. Feb 01 18:10:12 <lubs> yay +1 for weekly Feb 01 18:10:15 »» Thinkscape is unable to vote on because there is a small lock there, and there was no info on ML on that Feb 01 18:10:19 <Bostjan> \o/ Feb 01 18:10:31 <weierophinney> Is the 18:00 UTC do-able, or should we switch things up. Feb 01 18:10:34 <weierophinney> I'm big on consistency. Feb 01 18:10:47 <weierophinney> Thinkscape, the discussion today is whether that poll captures all the options. Feb 01 18:10:55 <weierophinney> Thinkscape, once approved, we'll open it. Feb 01 18:11:08 <Thinkscape> ah, ok. Feb 01 18:11:14 <weierophinney> back on topic.. Feb 01 18:11:21 <weierophinney> 18:00 UTC okay, or a different time? Feb 01 18:11:43 <weierophinney> going once Feb 01 18:11:43 <NickBelhomme_> In europe (Belgium) 18h is perfect Feb 01 18:11:49 <MikeA> +1 Feb 01 18:11:52 <NickBelhomme_> +1 Feb 01 18:11:53 <EvanDotPro> 18:00 is good for me. Feb 01 18:12:13 <Thinkscape> +1 Feb 01 18:13:04 <weierophinney> and there we go. Feb 01 18:13:14 <weierophinney> Decision: weekly meetings, 18:00 UTC on Wednesdays. Feb 01 18:13:42 <weierophinney> Next topic: Zend\View RFC Feb 01 18:13:51 <weierophinney> Can we approve it? Feb 01 18:14:14 <weierophinney> As a reminder, the RFC is here: Feb 01 18:15:32 <MikeA> Poll there is unanimous in favour at present Feb 01 18:15:32 <ralphschindler> i like the general architecture, I'm not fond of some of the terminology Feb 01 18:16:01 <Thinkscape> I'm +1 for the general architecture - there are a few things to polish and determine (like those events, configuring layouts etc.) but that's up to consequent prototypes. Feb 01 18:16:04 <weierophinney> MikeA, that poll is for whether to include the topic in the meeting. Feb 01 18:16:10 <weierophinney> ralphschindler, which terminology? Feb 01 18:16:29 <ralphschindler> theres a disconnect in this line: Feb 01 18:16:30 <ralphschindler> $result = new ViewModel(); Feb 01 18:16:36 <weierophinney> Thinkscape, exactly – that's exactly what I've run into as well. RFCs capture the general architecture, but things often need tweaks during development. Feb 01 18:16:42 <ralphschindler> AFAIK, there is no result infrastructure? Feb 01 18:16:49 <weierophinney> ralphschindler, actually, there is. Feb 01 18:16:59 <PadraicB> With Thinkscape, I believe Feb 01 18:16:59 <weierophinney> ralphschindler, the MvcEvent composes a "result" of dispatch Feb 01 18:17:14 <weierophinney> ralphschindler, in this case, the result is a ViewModel. Feb 01 18:17:21 <weierophinney> (vs a Response object) Feb 01 18:17:41 <ralphschindler> but the result could be something other than related to a view, like a redirect object? Feb 01 18:18:05 <weierophinney> ralphschindler, correct. (redirect object == response object with a Location header and alternate status code) Feb 01 18:18:34 <ralphschindler> well, that comes back to my original statement about terminology results and responses might get confusing for folks Feb 01 18:18:57 <Thinkscape> good question though - if controllers producte ViewModel's that are consumed by strategies that produce redirect Responses ---- how can a controller control a redirect? Feb 01 18:19:03 <saltybeagle> ralphschindler: you're just talking about the name of the variable there? $result vs $viewModel or some such thing? Feb 01 18:19:42 <Thinkscape> Unless "redirection" is one of the strategies .. Feb 01 18:19:44 <weierophinney> Thinkscape, typically, once you hit the View, you won't be creating a redirect. Feb 01 18:19:47 <ralphschindler> saltybeagle: the concept, and educating users on what a result is (MvcEvent listens for) and a response object (similar in nature to dispatch's second parameter) Feb 01 18:19:53 <weierophinney> Thinkscape, but yes, it could be a strategy Feb 01 18:20:21 <ralphschindler> so, no not the variable name, but what ends up handling something returned from an action (the underlying architecture) Feb 01 18:20:22 <weierophinney> ralphschindler, mvc doesn't "listen" for a result. Controllers set the event's result when done with dispatch. Feb 01 18:20:40 <Thinkscape> hmm... but that introduces some lazy typing (implicit structures) like, inside a controller: return array('redirect'=>true,'redirectUrl'=>'') ... Feb 01 18:20:50 <weierophinney> ralphschindler, if the result is a Response object, the event loop ends, and the response gets kicked out and returned immediately. Feb 01 18:21:12 <Thinkscape> aaah Feb 01 18:21:17 <weierophinney> Thinkscape, like I said above, if you're going to redirect, you'll likely simply return a response object. Feb 01 18:21:40 <Thinkscape> that makes sense... you skip the strategies+rendering part then Feb 01 18:21:44 <weierophinney> exactly Feb 01 18:21:46 <Thinkscape> (altogether) Feb 01 18:22:20 <PadraicB> I don't see it causing confusion muself Feb 01 18:22:32 <ralphschindler> if you return a result of some kind, does the Application\View\Listener not get an opportunity to do something with it? Feb 01 18:22:38 <ralphschindler> but if you return a response, it bypasses that? Feb 01 18:22:42 <EvanDotPro> that's one of the places in zf2 so far where i haven't been confused, tbh Feb 01 18:22:54 <flavius> Hi Feb 01 18:23:14 <weierophinney> ralphschindler, if you return a Response object, rendering would not occur. If you return something else, the view will pass it to the rendering strategies to determine what to do with it. Feb 01 18:23:15 <EvanDotPro> (the returning a response to short circuit the dispatching) Feb 01 18:23:45 <Thinkscape> actually, similar things will happen with exception during dispatch Feb 01 18:23:53 <Thinkscape> if a controller throws an exception (i.e. db error) Feb 01 18:24:08 <Thinkscape> That's something for either app-leve event or a view strategy Feb 01 18:24:12 <weierophinney> yep Feb 01 18:24:13 <EvanDotPro> Thinkscape: yes – that's something we seriously need to normalize right now, too. Feb 01 18:24:39 <weierophinney> the view layer should make this fairly easy to implement, though it will require some minor changes to the workflow within Application::run Feb 01 18:24:59 <Thinkscape> EvanDotPro: agreed. Imagine a streamlined "ErrorHandlingViewStrategy" that does just that... Feb 01 18:25:35 <EvanDotPro> Thinkscape: yep, something along those lines would be really nice Feb 01 18:25:39 <weierophinney> Feb 01 18:26:24 <Thinkscape> Anyone against the "view rendering RFC" as an architecture ? Feb 01 18:26:41 <PadraicB> That is confusing though - presumably exceptions are app level concerns (and the view's thereafter should that be configured) Feb 01 18:26:53 <weierophinney> As noted, in prototyping, I'm seeing a few rough edges, but overall, it feels sound when I implement it in projects Feb 01 18:27:29 <MikeA> Given that in the skeleton application it begins in Application/Module.php at present, where do you intend the view should "start"? Feb 01 18:27:58 <weierophinney> MikeA, what do you mean, exactly? Feb 01 18:28:26 <PadraicB> Where it's first instantiated do you mean, MikeA? Feb 01 18:28:34 <weierophinney> ah Feb 01 18:28:42 <MikeA> PadraicB: yes Feb 01 18:29:21 <weierophinney> So, right now, when Akrabat and I have used it, we've done it in Application\Module in a bootstrap listener. We pull the View from the locator, and then attach it (as an aggregate) to the application events instance. Feb 01 18:29:40 <weierophinney> We could potentially automate that in either the Application instance and/or Bootstrap, however. Feb 01 18:29:58 <MikeA> Saw that, but when writing about it ins Feb 01 18:30:10 <MikeA> ...instantiating view seemed to belong elsewhere Feb 01 18:30:18 <MikeA> Not that I know where Feb 01 18:30:51 <DASPRiD> oh meeting! Feb 01 18:31:00 <ocramius> wb DASPRiD Feb 01 18:31:08 <Thinkscape> zf1 also used bootstrap (view app resource) for that, but it DID have a "default" implementation if one didn't use the bootstrap. Feb 01 18:31:49 <weierophinney> Thinkscape, we can actually still do that. The way I prototyped means that it will attach renderers by default. Feb 01 18:32:05 <weierophinney> the bit it can't do right now is pull those renderers from DI, which would simplify it even more. Feb 01 18:32:09 <PadraicB> cough, ViewRenderer Feb 01 18:32:09 <ocramius> well, until it's sealed in Zend\Mvc Feb 01 18:32:20 <weierophinney> PadraicB, different beast. Feb 01 18:32:29 <PadraicB> Ugly terrible beast, aye... Feb 01 18:32:32 <ocramius> weierophinney: what would you pull from the locator? an interface? an alias? Feb 01 18:32:45 <MikeA> k – I'm +1 for RFC Feb 01 18:33:10 <weierophinney> ocramius, the DefaultRenderingStrategy attaches three renderers by default – PhpRenderer, JsonRenderer, and FeedRenderer, and has some strategies for selection of each. Feb 01 18:33:40 <weierophinney> ocramius, so, we'd potentially pull those from the locator within the strategy class, just to make usage simpler. Feb 01 18:33:50 <EvanDotPro> would it make sense to simply pull a ListenerAggreaget from the locator for this? Feb 01 18:33:54 <Aldemar> weierophinney: in beta2 you can't have 2 controllers in 2 modules with the same di alias, di gets confused and always takes either one, I don't know if you addressed that problem in this rfc Feb 01 18:33:59 <EvanDotPro> then leave everything else up to the implementation Feb 01 18:33:59 <weierophinney> EvanDotPro, that's what I do, actually. Feb 01 18:34:02 <DASPRiD> (The PhpRenderer renders PHP?) Feb 01 18:34:06 <ocramius> weierophinney: I would pull $listener = $locator->get('Zend\Mvc\View\RenderingStrategy'); Feb 01 18:34:09 <EvanDotPro> weierophinney: oh nice! i should actually look at it lol Feb 01 18:34:10 <ocramius> instead of $listener = $locator->get('Zend\Mvc\View\DefaultRenderingStrategy'); Feb 01 18:34:17 <ocramius> that would allow for preferences Feb 01 18:34:25 <ocramius> (maybe I'm liking that feature too much) Feb 01 18:34:36 <weierophinney> Aldemar, this addresses it insofar as you will typically return ViewModel instances specifying the view "script" (just a token, actually) to render. Feb 01 18:35:06 <Thinkscape> ocramius's got a point Feb 01 18:35:13 <weierophinney> ocramius, makes sense to me. Feb 01 18:35:22 <Thinkscape> but it also needs to work without any di alias/def Feb 01 18:35:35 <ocramius> Thinkscape: that needs Di preference Feb 01 18:35:40 <ocramius> Thinkscape: no way around it Feb 01 18:35:45 <weierophinney> Thinkscape, if we do it as a marker interface, then it's pretty easy to use as a DI preference or an alias. Feb 01 18:35:48 <Thinkscape> ah - and it need to be extendable (as opposed to replacable) Feb 01 18:35:51 <ocramius> Thinkscape: otherwise definition, choose one of the two Feb 01 18:35:57 <ocramius> Feb 01 18:36:05 <weierophinney> Thinkscape, yep. Feb 01 18:36:07 <Thinkscape> ocramius: ah, so now I dislike it Feb 01 18:36:19 <flavius> shouldn't aliases be prefixed with the module name, as to avoid confusion? Feb 01 18:36:26 <Thinkscape> Probably 90% of people will use DefaultRend* because it does the job... Feb 01 18:36:34 <DASPRiD> wasn't di meant to be completly optional? Feb 01 18:36:38 <Thinkscape> but there needs to be a simple way to add-on to that behavior Feb 01 18:36:39 <DASPRiD> or do i misunderstadn that? Feb 01 18:36:45 <Thinkscape> (instead of replacing the whole) Feb 01 18:36:46 <weierophinney> flavius, not always. Sometimes you may want to extend a module – in that case, a prefix may actually be counter-intuitive. Feb 01 18:36:49 <ocramius> DASPRiD: then call it "locator" Feb 01 18:37:09 <weierophinney> DASPRiD, it's really an implementation detail Feb 01 18:37:11 <ocramius> DASPRiD: I still confuse them when writing, my fault Feb 01 18:37:26 <weierophinney> so, any objections to ratifying the RFC and moving forward with development in earnest? Feb 01 18:37:43 <ocramius> I'm fine with the RFC Feb 01 18:38:12 <EvanDotPro> +1 let's make this happen Feb 01 18:38:19 <ralphschindler> +1 Feb 01 18:38:24 <saltybeagle> weierophinney: I say you move forward, in earnest. +1 Feb 01 18:38:55 <weierophinney> there's still room for some changes, and if ralphschindler has some really good objections to terminology, I'll listen. Feb 01 18:38:55 <Thinkscape> going once... going twice... Feb 01 18:39:10 <flavius> ralphschindler: has the issue we just discussed on #zftalk.2 been raised here too? Feb 01 18:39:18 <weierophinney> DONE Feb 01 18:39:31 <weierophinney> flavius, nope. Save it for after the other topics are complete. Feb 01 18:39:35 <DASPRiD> Thinkscape, sold for $29 Feb 01 18:39:40 <weierophinney> NEXT ITEM: Feb 01 18:39:43 <Thinkscape> + S&H Feb 01 18:39:55 <Aldemar> +1 as far as I can have same aliases in more than 1 module Feb 01 18:39:59 <weierophinney> ralphschindler has created a poll surrounding CS for type names: Feb 01 18:40:11 <MikeA> it's locked Feb 01 18:40:13 <weierophinney> Aldemar, yes – see above. It's more explicitly done now than implicitly. Feb 01 18:40:17 <weierophinney> MikeA, right, that's the poitn Feb 01 18:40:19 <weierophinney> SO Feb 01 18:40:31 <DASPRiD> i like explicitlity Feb 01 18:40:39 <weierophinney> that page's polls are LOCKED because ralphschindler wants to find out if it accurately captures the various options. Feb 01 18:40:54 <Thinkscape> chuckles – "Naming for Trait Types" Feb 01 18:40:56 <ralphschindler> (as we discussed the plan was last week ) Feb 01 18:40:57 <Thinkscape> Traits FTW Feb 01 18:40:58 <weierophinney> DOES ANYBODY WANT ANY CHANGES TO THAT POLL? Feb 01 18:41:13 <DASPRiD> Thinkscape, uh Feb 01 18:41:20 <PadraicB> None here Feb 01 18:41:24 <DASPRiD> Thinkscape, shouldn't the 5.4 discussion come first? Feb 01 18:41:29 <Thinkscape> DASPRiD: it's like a little hint on what's coming next Feb 01 18:41:30 <PadraicB> Feb 01 18:41:36 <weierophinney> DASPRiD, NO Feb 01 18:41:53 <MikeA> Poll's fine Feb 01 18:42:19 <ralphschindler> if no additions, the next question is when to open and close it Feb 01 18:42:33 <Thinkscape> ralphschindler: how about traits were held in a namespace container ? Feb 01 18:42:40 <ralphschindler> btw, you can always change your vote once you've cast it, until it closes Feb 01 18:42:50 <Thinkscape> As it's new for php, how would i.e. java do that ? Feb 01 18:43:06 <ralphschindler> i don't follow Thinkscape Feb 01 18:43:15 <Thinkscape> My idea is: Zend\Stdlib\Traits Zend\Http\Client\Traits etc. Feb 01 18:43:21 <Thinkscape> Group them Feb 01 18:43:30 <Thinkscape> not thought out Feb 01 18:43:34 <DASPRiD> Thinkscape, singular Trait plz Feb 01 18:43:34 <weierophinney> ew Feb 01 18:43:34 <Thinkscape> just throwing a bone .. etc. Feb 01 18:43:35 <NickBelhomme_> aren't we discussing poll still? The moderator didn't move on yet Feb 01 18:43:47 <ralphschindler> Exceptions are the only things we do that for Feb 01 18:43:47 <weierophinney> thank you, NickBelhomme_ Feb 01 18:43:54 <Thinkscape> I know Feb 01 18:43:56 <Thinkscape> traits are new Feb 01 18:43:59 <Thinkscape> so we can discuss it Feb 01 18:44:05 <ocramius> weierophinney: poll open till next wednesday imo Feb 01 18:44:07 <weierophinney> Thinkscape, java doesn't have traits, so we're carving new territory here with naming. Feb 01 18:44:11 <ocramius> from now Feb 01 18:44:25 <MikeA> ocramius: +1 Feb 01 18:44:30 <Thinkscape> Well, I kinda like the grouping because it allows for: Feb 01 18:44:41 <Thinkscape> use Zend\Stdlib\Features <--------- this holds all traits Feb 01 18:44:42 <PadraicB> Poll seems fine unless someone is just about to object... Voting period of 5-7 days should be plenty. Feb 01 18:44:54 <ralphschindler> Thinkscape is proposing an addition to the provided names, so i think its valid to entertain Feb 01 18:45:06 <Thinkscape> then later class XYZ Feb 01 18:45:07 <weierophinney> I'm fine with adding the option – more options is fine. Feb 01 18:45:10 <ralphschindler> I'm ok with adding it, Thinkscape , although i wouldn't vote for it Feb 01 18:45:20 <weierophinney> Thinkscape, can you edit that page and add it, please? Feb 01 18:45:37 »» Thinkscape is editing Feb 01 18:45:41 <ralphschindler> thanks Thinkscape Feb 01 18:45:55 <weierophinney> any other changes anybody wants to see? Feb 01 18:46:06 <ocramius> nope Feb 01 18:46:06 <DASPRiD> Thinkscape, tho why would you specially namespace traits but not interfaces Feb 01 18:46:26 <EvanDotPro> ocramius: +1 for leaving polls open for 7 days Feb 01 18:46:41 <Thinkscape> DASPRiD: because of how traits work Feb 01 18:46:42 <ralphschindler> we have a full week to vote, so i think discussions on individual items can happen in zftalk.2 Feb 01 18:46:43 <weierophinney> DASPRiD, usage is different. Feb 01 18:46:47 <Thinkscape> It's like mini-classes Feb 01 18:46:53 <ralphschindler> and, you can change your vote as many times as you like Feb 01 18:47:05 <Thinkscape> you compose a big class from smaller ones, that's why you can call it features, or plugins or blocks or som,eting like that Feb 01 18:47:20 <weierophinney> I can see the point. Not 100% sure I agree, and I think the namings don't have to dictate a subnamespace. But we can discuss on the list and/or #zftalk.2 Feb 01 18:47:26 <ralphschindler> wed following this meeting and closed before next weeks meeting, is that ratified? Feb 01 18:47:31 <NickBelhomme_> dispatch or setOptions is not really a feature Feb 01 18:48:12 <weierophinney> NickBelhomme_, it is in a way – you can compose that stuff in via traits in order to reduce the amount of coding. But yes, "feature" as a terminology may not be completely accurate. Feb 01 18:48:21 <DASPRiD> NickBelhomme_, i thought we were getting rid of setOptions eventually Feb 01 18:48:30 <ralphschindler> i meant, opening it after this meeting, and closing it before the next. Feb 01 18:48:42 <weierophinney> any object to ralphschindler? Feb 01 18:48:56 <DASPRiD> nah, sounds fine mwop Feb 01 18:48:59 <weierophinney> if you're uncertain about some of the options/ramifications, ask on the ML or #zftalk.2 Feb 01 18:49:03 <NickBelhomme_> DASPRiD, just examples, that feature is a wrong name grouping, but hell, that can be included in the poll, I will not vote for that Feb 01 18:49:19 <weierophinney> I'm going to call it in 3 Feb 01 18:49:25 <DASPRiD> NickBelhomme_, mh, not working for it, or voting against it? Feb 01 18:49:27 <weierophinney> 2 Feb 01 18:49:40 <weierophinney> 1 Feb 01 18:49:43 <weierophinney> NEXT TOPIC Feb 01 18:50:00 <DASPRiD> (get ready to rumble) Feb 01 18:50:00 <ralphschindler> Thinkscape: by Feature, do you mean Trait ? Feb 01 18:50:10 <weierophinney> I sprung this yesterday in #zftalk.2 : do we want to consider a minimum version of 5.4 for ZF2. Feb 01 18:50:10 <ralphschindler> b/c that particular poll is specifically about traits Feb 01 18:50:19 <weierophinney> ralphschindler, yes Feb 01 18:50:23 <weierophinney> but we've moved on now. Feb 01 18:50:30 <ralphschindler> Thinkscape: pm Feb 01 18:50:31 <DASPRiD> why not just call that namespace "Trait"… or "Mixin" Feb 01 18:50:31 <weierophinney> So, re: 5.4. Feb 01 18:50:40 <EvanDotPro> personally i'd be +1 for 5.4 (i probably want it more than most here), but in general, i think it might be jumping the gun. Feb 01 18:50:41 <weierophinney> DASPRiD, WE'VE MOVED ON Feb 01 18:50:52 <weierophinney> So, let me summarize a few points real quick. Feb 01 18:50:52 <DASPRiD> weierophinney, that was just a sidenote Feb 01 18:51:03 <NickBelhomme_> Personally seeing how ZF2 is evolving Feb 01 18:51:04 <EvanDotPro> my concern in NOT going with 5.4 is that we now have to also maintain zf2 on 5.3 while we move on to zf3. Feb 01 18:51:13 <NickBelhomme_> I think we should consider targetting 5.4 Feb 01 18:51:38 <weierophinney> One reason to consider it is (a) mainteance later down the line (less to rewrite), (b) pushing 5.4 adoption, and (c) baseline performance of the framework (5.4 is faster even than 5.3) Feb 01 18:51:47 <weierophinney> that's actually three reasons. Whatever. Feb 01 18:52:15 <weierophinney> As a few folks noted, there are a lot of companies already planning for ZF2, and since we've said 5.3 all along, this is a bit of an upset. Feb 01 18:52:15 <EvanDotPro> let's even ignore (b) as that shouldn't be our primary influence in this decision, just a potential positive side-effect Feb 01 18:52:29 <DASPRiD> mh, c) is no really valid, current zf2 already makes use of the performance gain in 5.4, no? Feb 01 18:52:41 <NickBelhomme_> I agree with EvanDotPro , 2.0 took already 1.5 years, if you want to produce zf3.0 it will take again a long time, so why not put it in there already, 1 less major version to maintain Feb 01 18:52:52 <weierophinney> DASPRiD, I said baseline – we'd be able to depend on that for all ZF2 users. Feb 01 18:52:56 <ocramius> (c) isn't valid for me too.... Feb 01 18:53:14 <PadraicB> I'm -1 - for personal reasons: I'm invested in PHP 5.3 for at least another 12 months. Feb 01 18:53:19 <weierophinney> So, on the flip side of things... 5.3 adoption has not been getting a ton of traction, particularly in distros. Feb 01 18:53:29 <flavius> weierophinney: those companies are ready to make the jump, right? so why not do it to 5.4 then, if they do it anyway? Feb 01 18:53:40 <MikeA> I thought about this all day: I was definate NO due to limited number of 5.4 hosts. However, what about with VMs whereby developers can load 5.4 if hosts don't want to? Feb 01 18:53:40 <weierophinney> and for those who have, moving to 5.4 seems to be a "it's a ways in the future yet" Feb 01 18:54:01 <weierophinney> flavius, see above: distros are not necessarily making the jump, and not everyone wants to maintain their own packages. Feb 01 18:54:17 <flavius> long live archlinux heh Feb 01 18:54:23 <weierophinney> MikeA, that was part of my argument as well. Feb 01 18:54:31 <NickBelhomme_> I gave a talk on php5.4 this weekend, and if the room was packed with 100 people, only 4 had tried php5.4 (1 was including derick) Feb 01 18:54:31 <DASPRiD> so ubuntu will have php 5.4 in april: Feb 01 18:54:41 <weierophinney> Anyways, that's the summary of the discussion. Feb 01 18:54:54 <NickBelhomme_> so ... that says something about the frontline of PHP devs (conference people are considered front line) Feb 01 18:55:02 <ocramius> NickBelhomme_: haven't tried it on my own too if not for travis... Feb 01 18:55:06 <Aldemar> -1 to 5.4 here Feb 01 18:55:09 <weierophinney> DASPRiD, it's a blueprint. Discussion on the internals list is they may back out of that due to lack of a few security patches they feel are critical. Feb 01 18:55:12 <NickBelhomme_> low interest in new techs, features Feb 01 18:55:13 <EvanDotPro> i think it's at least worth putting up for a community vote Feb 01 18:55:51 <Slamdunk> +1 for community vote Feb 01 18:55:53 <weierophinney> One thing I didn't note: we can OFFER traits for end-user consumption, without making ZF2 require 5.4 to operate. Feb 01 18:55:55 <ralphschindler> EvanDotPro: zf2-meeting vote, or a poll on the wiki? Feb 01 18:56:05 <weierophinney> poll on the wiki, I think. Feb 01 18:56:07 <DASPRiD> weierophinney, yeah, it is probably more likely to get 5.4 in ubuntu 12.10 (which is a non LTS release…) Feb 01 18:56:17 <Thinkscape> -1 for offering traits in PHP 5.3 framework Feb 01 18:56:19 <MikeA> Turning to the marketing aspect of this bombshell, you put it up for community vote and there comes potential for negative press feedback Feb 01 18:56:21 <Thinkscape> That's confusng Feb 01 18:56:21 <weierophinney> DASPRiD, and that highlights the concerns for some. Feb 01 18:56:22 <ocramius> EvanDotPro: ok for the poll on the meeting Feb 01 18:56:24 <weierophinney> Thinkscape, why? Feb 01 18:56:32 <Aldemar> to launch something in a tech that doesn't have at least 1 year (5.4) in the market is not well seen Feb 01 18:56:39 <EvanDotPro> SpiffyJr metioned that is seems a little late for a change like this, being half way through beta, but i feel like maintaining zf2 on 5.3 and then trying to move on with zf2 on 5.4 will turn into a huge headache and hold things back a lot. Feb 01 18:56:39 <EvanDotPro> ralphschindler: poll on the wiki, at LEAST 7 days with posts to the mailing list.. it's a rather large decision, everyone should have voice Feb 01 18:56:42 <weierophinney> Thinkscape, if it makes development easier/faster for those useing 5.4, why not offer them? Feb 01 18:56:53 <ralphschindler> +1 to EvanDotPro's suggestion Feb 01 18:57:08 <weierophinney> EvanDotPro, +1 Feb 01 18:57:12 <Slamdunk> +1 to EvanDotPro's suggestion Feb 01 18:57:16 <ocramius> +1 Feb 01 18:57:17 <Aldemar> zf2.5 should be in 5.4, by now we should stay using 5.3 Feb 01 18:57:20 <DASPRiD> EvanDotPro, i don't really see that zf2->3 transition would take up much time Feb 01 18:57:26 <ralphschindler> although, and this is directed at weierophinney - how do we resolve the greater good against the needs of a few? Feb 01 18:57:28 <Thinkscape> weierophinney: the same reason there aren't any 5.3 features used in zf1 Feb 01 18:57:28 <MikeA> EvanDotPro: that's a technologists POV – business people who pay for projects want security Feb 01 18:57:42 <EvanDotPro> DASPRiD: i mean ongoing maintenance holding us back / slowing us down, mostly. Feb 01 18:57:50 <EvanDotPro> DASPRiD: think of the constant backports Feb 01 18:58:00 <EvanDotPro> zf2 users will press for white a long period of support Feb 01 18:58:00 <ralphschindler> i suspect the larger community wants to target 5.4, but we need to decide what we're sacrificing and if its worth it Feb 01 18:58:01 <weierophinney> Aldemar, we can't jump to a new minor version of PHP within the same major cycle. Feb 01 18:58:04 <Aldemar> +1 MikeA Feb 01 18:58:09 <DASPRiD> EvanDotPro, didn't we want to shorten the release cycles anyway? Feb 01 18:58:17 <EvanDotPro> s/white/quite Feb 01 18:58:19 <NickBelhomme_> DASPRiD, zf2->zf3 is not a big transition, but some will not host 5.4 and thus you have to maintain to frameworks Feb 01 18:58:20 <DASPRiD> and thus, the lifetime cycles? Feb 01 18:58:28 <NickBelhomme_> s/to/two Feb 01 18:58:29 <weierophinney> Thinkscape, this is different. We'd be offering code for end-user use, not for internal use. Feb 01 18:58:35 <DASPRiD> NickBelhomme_, well at least for some time… Feb 01 18:58:58 <Thinkscape> My opinion: zf2 already requires a lot of commitment in both consumption and extension. It requires a new php ver (as compared to zf1) so the adoption will be abysmal at first. BUT – zf2 is "cutting-edge" so it can use any modern php version. Feb 01 18:59:16 <DASPRiD> weierophinney, zf2 release was planned around summer/zendcon, right? Feb 01 18:59:29 <Thinkscape> We can argue 5.4 vs 5.3 the same way we argue 5.1 vs 5.3 — same thing, same problems in some businesses and webhosts .. Feb 01 18:59:30 <MikeA> I suggest some research of hosts intentions around the world before embarking on this route Feb 01 18:59:33 <NickBelhomme_> the users who will be using zf2 will already be the bleeding edge of devs Feb 01 18:59:37 <weierophinney> DASPRiD, no planned date, but currently targetting summer. Feb 01 18:59:53 <DASPRiD> weierophinney, well, by then, not all of the major distros will have 5.4 Feb 01 18:59:55 <NickBelhomme_> I do not see general devs implement it too be honest, sorry guys Feb 01 19:00:08 <Aldemar> Thinkscape: there are some hosts that hasn't even upgrade to 5.3 yet Feb 01 19:00:11 <NickBelhomme_> not at least the first year after release Feb 01 19:00:16 <flavius> +1 "the users who will be using zf2 will already be the bleeding edge of devs" Feb 01 19:00:19 <DASPRiD> which basically answers the question to not go for php5.4 dependency Feb 01 19:01:00 <weierophinney> would anyone here like to volunteer and reach out to some of the major hosting companies, and find out plans for 5.3/5.4 adoption? Feb 01 19:01:09 <Thinkscape> Aldemar: EXACTLY, but you've proven my point Feb 01 19:01:16 <Thinkscape> There are probably hosts that haven't upgraded to 5.0 Feb 01 19:01:19 <EvanDotPro> in the first wave of zf2 users, developers without control of thier _AMPP stack are probably going to be a minority, if i had to venture a guess. Feb 01 19:01:22 <MikeA> What's the possibility of starting ZF3 to run in parallel with ZF2? Feb 01 19:01:27 <Thinkscape> not in the scope of zf2 which is advanced as it is (code-wise) Feb 01 19:01:38 <weierophinney> MikeA, we actually already forked with that idea in mind when at ZendCon Feb 01 19:01:44 <Aldemar> Thinkscape: lol, no php5, that would be the cheapest Feb 01 19:01:56 <NickBelhomme_> maintaining ZF1, ZF2 and ZF3 => horrible Feb 01 19:02:01 <MikeA> weierophinney: that's interesting – and? Feb 01 19:02:03 <EvanDotPro> NickBelhomme_: exactly my concern Feb 01 19:02:05 <DASPRiD> EvanDotPro, sure, on a root you can install anything you want… but usually sys admins go with the distributions packages Feb 01 19:02:15 <weierophinney> MikeA, it's possible, but see NickBelhomme_ and EvanDotPro Feb 01 19:02:34 <ocramius> Ok, now I'm confused... Feb 01 19:02:36 <Thinkscape> why zf3?? zf3 === zf2+traits ? awful Feb 01 19:02:52 <Thinkscape> Guys – regarding packages Feb 01 19:02:52 <EvanDotPro> DASPRiD: sysadmins should be at the mercy of the web developers – the organization you're describing has a flawed setup Feb 01 19:02:54 <weierophinney> Thinkscape, it would be more than that Feb 01 19:03:06 <ocramius> I'm thinking about symfony 2 in this exact moment and about doctrine 2, which has no file caching mechanism, and which almost requires the user to provide a VM Feb 01 19:03:12 <Thinkscape> zf2 launch === distro packages of 5.4 + 1 month (at least) Feb 01 19:03:15 <EvanDotPro> DASPRiD: let's not get off topic, that's a different argument lol Feb 01 19:03:22 <ocramius> and this is a case where 5.4 could be addressed by the dev Feb 01 19:03:35 <EvanDotPro> anyway – i have to take off, but see my earlier suggestion. Feb 01 19:03:40 <DASPRiD> EvanDotPro, i personally won't upgrade my server as well and wait for the ubuntu dist-upgrade (+1 month for usual patches) Feb 01 19:03:41 <MikeA> whether it's Zf2, ZF3 or ZF2 & 3 there are going to be maintenance issues – they are technical, not business strategy matters Feb 01 19:04:07 <EvanDotPro> "poll on the wiki, at LEAST 7 days with posts to the mailing list.. it's a rather large decision, everyone should have voice" Feb 01 19:04:18 <Bittarman> zf3? thats quite a clanger to turn up and see talked about Feb 01 19:04:20 <Aldemar_> we haven't released 2.0 and we are talking about 3.0 Feb 01 19:04:21 <ralphschindler> +1, polls can have comments btw Feb 01 19:04:22 <EvanDotPro> ^^ that's my vote here Feb 01 19:04:28 <EvanDotPro> ralphschindler: yep Feb 01 19:04:34 »» Thinkscape_ got disconnected Feb 01 19:04:44 <EvanDotPro> anyway, i'll be back in ~30 in ztalk.2 at least. Feb 01 19:05:02 <Thinkscape_> sooo - zf3 should be a huge leap forward, same as zf2 compared to zf1 Feb 01 19:05:10 <ocramius> btw, I guess I'll think about it a bit more, because I just changed my idea to "+1", which means I have no clear ideas Feb 01 19:05:37 <DASPRiD> Thinkscape_, not neccessarly Feb 01 19:05:38 <weierophinney> Thinkscape, see, that's where you're in disagreement with a lot of folks. The discussions since August are suggesting devs want shorter intervals between major releases, with fewer big changes. Feb 01 19:05:55 <DASPRiD> yeah, switchting to a rapid release model Feb 01 19:05:55 <NickBelhomme_> weierophinney, +1 Feb 01 19:06:02 <NickBelhomme_> DASPRiD, +1 Feb 01 19:06:09 <weierophinney> I'm going to call it at this point, as we're past the hour. Feb 01 19:06:19 »» Aldemar_ is now known as Aldemar Feb 01 19:06:26 <weierophinney> Let's open a poll on the wiki, and start a discussion there, on the ML, and in #zftalk.2 Feb 01 19:06:26 <NickBelhomme_> so we follow EvanDotPro his suggestion? Feb 01 19:06:32 <Bittarman> whats wrong with jsut offering support for 5.4 in zf2, but not making it a minimum, like we already do in zf1 for php 5.3 Feb 01 19:06:34 <weierophinney> we'll leave it open for at least a week. Feb 01 19:06:43 <Thinkscape_> ugh, that means less consistency, parallel maintenance of several concurrent versions, documentation nightmare, missing features... Feb 01 19:06:45 <Aldemar> +1 weierophinney Feb 01 19:06:49 <weierophinney> Bittarman, I suggested that earlier. Most like the idea. Thinkscape hates it. Feb 01 19:06:56 <Thinkscape_> i.e. in one point in time you could have zf3, zf4, zf5 :\ Feb 01 19:07:12 <weierophinney> Thinkscape, we need to setup a release policy, obviously, so that doesn't happen. Feb 01 19:07:12 <DASPRiD> Thinkscape_, not different than prior minor releases Feb 01 19:07:14 <flavius> zfx++ Feb 01 19:07:16 <Thinkscape_> weierophinney: wait, don't stick me out Feb 01 19:07:22 <Thinkscape_> It's about project management Feb 01 19:07:24 <Bittarman> weierophinney ah, k. that had me worried, (was scanning up and down the backlog thinking wtf!) Feb 01 19:07:27 <DASPRiD> weierophinney, Thinkscape_: heh, think: Firefox Feb 01 19:07:32 <Thinkscape_> show me a good way to do it for "a php framework" and I'm all in Feb 01 19:07:32 <weierophinney> Thinkscape, the idea is 18-24 months between major releases. Feb 01 19:07:34 <MikeA> Thinkscape_: that's an area of persuasion for me Feb 01 19:07:39 <Thinkscape_> Firefox is NOT php framework Feb 01 19:07:43 <Thinkscape_> same as Google Chrome Feb 01 19:07:56 <Thinkscape_> ZF will not auto-update itself in the background ... Feb 01 19:08:04 <DASPRiD> Thinkscape_, heh… Feb 01 19:08:07 <Thinkscape_> weierophinney: 18-24 mo is ok! Feb 01 19:08:08 <weierophinney> As such, fewer big bits need rewriting, but newer versions are happening more frequently to allow BC breaks, architectural changes, etc – just more targetted. Feb 01 19:08:15 <Thinkscape_> So that leaves us zf2 for the next 18 months Feb 01 19:08:19 <weierophinney> Thinkscape_, right. Feb 01 19:08:25 <Thinkscape_> back to business - php 5.4 for zf2 or not ? Feb 01 19:08:28 <MikeA> I'm for letting folk dwell on this for a week or two before polling Feb 01 19:08:34 <weierophinney> at about which time, ZF1 support ends, leaving us with just ZF2 + ZF3 Feb 01 19:08:39 <DASPRiD> Thinkscape_, well, in 18 months, php 5.4 should be spread enough Feb 01 19:08:39 <ralphschindler> +1 on MikeA Feb 01 19:08:43 <Thinkscape_> or wait for zf3 and in 18 months we could have php 6 Feb 01 19:08:44 <weierophinney> MikeA, +1 Feb 01 19:08:46 <Bittarman> Thinkscape_, we already do it for php 5.3 support in zf1... whats so bad about it in zf2?? Feb 01 19:08:47 <ralphschindler> lets do the poll between next meeting and the one following Feb 01 19:08:52 <ralphschindler> one major poll at at a time Feb 01 19:08:53 <weierophinney> Thinkscape_, unlikely. Feb 01 19:08:53 <DASPRiD> Thinkscape_, heh Feb 01 19:09:09 <weierophinney> kk Feb 01 19:09:10 <PadraicB> Sry, missed the conversation . Feb 01 19:09:11 <weierophinney> so... Feb 01 19:09:13 <Thinkscape_> ... just a general idea... call it php 5.6 or whatever Feb 01 19:09:28 <ocramius> MikeA: spread the word... Feb 01 19:09:30 <weierophinney> Summary: we'll do a poll on the wiki, and have continued discussion in the comments there, on the ML, and in IRC. Feb 01 19:09:31 <Thinkscape_> We are at tipping point now — between 5.3 and 5.4, next major Feb 01 19:09:38 <PadraicB> Guys, is there some fundamental reason why ZF2 should adopt 5.4? This is a massive course correction right at the end of the process. Feb 01 19:09:44 <weierophinney> Poll will open officially in 1 week, and stay open in a week. Feb 01 19:10:13 <weierophinney> PadraicB, the reasons I raised it are because 5.4 is dropping in the next few weeks as stable. Feb 01 19:10:25 <Bittarman> PadraicB, because in a years time, it would be nice for zf2 users to be able to make use of php 5.4 features, without being held back by their framework. Feb 01 19:10:26 <weierophinney> And there are a number of features that could make development and maintenance easier. Feb 01 19:10:27 <DASPRiD> PadraicB, i would +1 it only if php 5.4 would already be in all major distris by now Feb 01 19:10:29 <Thinkscape_> + traits for de-duplication and add-on functionality for user classes Feb 01 19:10:38 <NickBelhomme_> If ZF2 would be released as stable within 2 months => NO Feb 01 19:10:50 <Thinkscape_> DASPRiD: will you +1 for that idea in may ? Feb 01 19:10:51 <MikeA> DASPRiD: +1 Feb 01 19:11:13 <Thinkscape_> Remember - We are talking future .... not TODAY, because today is not zf2 RC day. Feb 01 19:11:21 <ralphschindler> All in favor of discussing this in #zftalk.2 for a week, then, if it makes sense and its still valid, polling to the larger community between next wed. and the wed following? Feb 01 19:11:24 <DASPRiD> exactly Thinkscape_ Feb 01 19:11:29 <MikeA> Dare I say it, how long did it take MS to stabilise? Feb 01 19:11:30 <Thinkscape_> so – should zf2 be php 5.4 WHEN it shipps mid 2012 Feb 01 19:11:31 <NickBelhomme_> ralphschindler, +1 Feb 01 19:11:32 <Thinkscape_> ? Feb 01 19:11:36 <weierophinney> PadraicB, but we can have more discussion – I only wanted to raise it for discussion with this meeting. Feb 01 19:11:56 <weierophinney> MikeA, bad analogy. Feb 01 19:11:59 <DASPRiD> Thinkscape_, i wouldn't be able to run it on my dev machine, bad idea Feb 01 19:12:04 <Aldemar> we have here +20 servers running 5.2 that haven't been able to update because of legacy applications, Now I'm trying to make the business understand that the path is zf2+5.3 and you are talking about 5.4! Feb 01 19:12:10 <MikeA> weierophinney: couldn't resist Feb 01 19:12:19 <weierophinney> MikeA, LOL Feb 01 19:12:20 <Thinkscape_> DASPRiD: you lazy a.... dev... compile it Feb 01 19:12:30 <weierophinney> Aldemar, that's why we're discussing it now, not deciding on it yet. Feb 01 19:12:34 <Thinkscape_> Aldemar: so skip 5.3 Feb 01 19:12:38 <Thinkscape_> straight to 5.4 Feb 01 19:12:48 <Thinkscape_> (faster, safer, better, more lifetime ahead) Feb 01 19:12:51 <DASPRiD> Thinkscape_, surely not compiling it Feb 01 19:12:58 <weierophinney> Aldemar, bring up your concerns and what's driving them in the ML/IRC/etc so we can all understand how this affects users. Feb 01 19:13:07 <Thinkscape_> DASPRiD: I'll hack your box and compile it for you.. Feb 01 19:13:09 <MikeA> Thinkscape_: then why isn't all the rage now? Feb 01 19:13:10 <weierophinney> kk Feb 01 19:13:11 <Aldemar> Thinkscape_, safer? 5.4 is already in rc Feb 01 19:13:18 <weierophinney> LET'S MOVE DISCUSSION TO #zftalk.2 Feb 01 19:13:22 <DASPRiD> Thinkscape_, and make a deb package? Feb 01 19:13:24 <weierophinney> thanks all for coming! </pre> ]]></ac:plain-text-body></ac:macro>
http://framework.zend.com/wiki/display/ZFDEV2/2012-02-01+Meeting+Log
CC-MAIN-2014-42
refinedweb
8,585
67.18
whatsup hacka? howzit? Nothing going tonight. Up late, needing sleep. See you tomorrow. PS found a light read on mind/brain sleep research in the 20th century, linked under current readings. -kevin ditto Thanks Immonad (talk) 18:40, 31 July 2013 (UTC) this is an example of a note to say hello via wiki because email is a broken protocol, considered harmful, and depracatable say what ??? --Danf (talk) 01:18, 22 August 2013 (UTC) Good stuff By golly have a listen Connectivity Restored... to switch 31, aka the collaboration station. ??? {code} def egcd(a, b): if a == 0: return (b, 0, 1) else: g, y, x = egcd(b % a, a) return (g, x - (b // a) * y, y) {code} I've never seen anything like it.
https://www.noisebridge.net/index.php?title=User_talk:Danf&direction=prev&oldid=40336
CC-MAIN-2015-14
refinedweb
125
73.27
): - OpenCV is included as submodule and the version is updated manually by maintainers when a new OpenCV release has been made - Contrib modules are also included as a submodule Find OpenCV version from the sources Install dependencies (numpy) Build OpenCV - tests are disabled, otherwise build time increases too much - there are 4 build matrix entries for each build combination: with and without contrib modules, with and without GUI (headless) - Linux builds run in manylinux Docker containers (CentOS 5) Copy each .pyd/.sofile to cv2 folder of this project and generate wheel - Linux and macOS wheels are checked with auditwheel and delocate Install the generated wheel Test that Python can import the library and run some sanity checks Use twine to upload the generated wheel to PyPI (only in release builds) The cv2.pyd/.so file is normally copied to site-packages. To avoid polluting the root folder this package wraps the statically built binary into cv2 package and __init__.py file in the package handles the import logic correctly. Since all packages use the same cv2 namespace explained above, uninstall the other package before switching for example from opencv-python to opencv-contrib-python. Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/opencv-python/4.0.1.23/
CC-MAIN-2019-43
refinedweb
215
53.75
Twice a month, we revisit some of our readers’ favorite posts from throughout the history of Activetuts+. This tutorial was first published in July, 2009.. Final Result Preview Let's take a look at the final result we'll be working towards: Step 1: Brief Overview We're going to create a preloader MovieClip using Flash tools such as the Rectangle Primitive Tool and something very important to get the correct alignment: the Snap To Objects option. The clip will have its movement in the timeline and we'll build the code in two classes. The first class will take care of the preloader and the other will be the Document Class, where we'll start the preloader. Step 2: Starting Open Flash and create a new Flash File (ActionScript 3). Set the stage size to your desired dimensions and add a background color. I've used 600 x 300px in size and added a gray radial gradient (#666666, #333333). Step 3: Creating the Basic Shape This preloader is composed of one simple shape which is repeated 12 times. Select the Rectangle Primitive Tool and set the corner radius to 15, make sure to lock the corner radius control so every corner is equally rounded. Set the color to white and and draw a 25 x 85px rectangle, don't use a stroke. That's it, this is the basic shape that will be the main part of our preloader. Step 4: Positioning the Shapes Use the Align panel to set the previously created shape in the top-center of the stage. Duplicate the shape (Cmd + D) and align it to bottom-center. Duplicate both shapes and then go to Modify > Transform > Rotate 90º. Here comes the tricky part, select the Free Transform Tool and make sure you've selected the Snap To Objects option (this is the magnet icon in the toolbar or you can go to View > Snapping > Snap to Objects). Start rotating the top and bottom shapes, you'll notice that the rotation stops at every determined angle, we'll use two stops to separate the shapes from each other, ending into something like this: Step 5: Changing the Alpha We'll change the shapes' alpha property to get the "follow" effect we're after. There are 12 shapes, that's a little more than 8 but to avoid the use of decimals we'll set 9 in 8 multiples and the for last 3 we'll add 10. This gives us alpha values of 8, 16, 24...72, 80, 90, 100. Take a look at the image to get the idea. Step 6: Animating Convert all the shapes into a single MovieClip and name it "Preloader". Check the Export for Actionscript checkbox and write "classes.Preloader" in the class textfield. Double-click the clip to get access to its Timeline. The animation process is very simple; add a new Keyframe and rotate the shapes until the 100% alpha shape is in position where the 8% alpha shape was. Repeat this until you get the full animation. The frames should be in this order: Since the animation is timeline based, the speed will depend on the Frames per Second of your movie, mine is 25fps and I've used 2 frames per state. Step 7: Choosing the Size Our preloader is 300 x 300px in size, normally it wouldn't be so large, but it's good to have the option. Choose an appropiate size for your preloader and center it on the stage. I chose 48 x 48px. Step 8: Loading Information Create a Dynamic Textfield and give it the instance name "info". This will display the total KB to load, the amount currently loaded and the percent that it represents. Write some text to get an idea of the size it will use and center it. Step 9: Creating the Preloader Class Create a new ActionScript file and start importing the required classes: package classes { import flash.display.MovieClip; import flash.text.TextField; import flash.events.Event; import flash.events.ProgressEvent; Step 10: Extending the Class public class Preloader extends MovieClip { Since our preloader is a MovieClip and it's using a timeline, we're going to extend this class using the MovieClip class. Step 11: Variables We only need to use one variable in this class. This variable will store the instance name of the textfield we're using to show the loading information. private var dataTextField:TextField; Step 12: Start Function public function start(dataTextField:TextField):void { this.dataTextField = dataTextField; //Sets the dataTextField var to the parameter value /* The loaderInfo Object is in charge of the loading process, in this code we add listeners to check the progress and when the movie is fully loaded */ this.loaderInfo.addEventListener(ProgressEvent.PROGRESS, onProgress); this.loaderInfo.addEventListener(Event.COMPLETE, onComplete); } Step 13: The Progress Function private function onProgress(e:ProgressEvent):void { /* Here we use some local variables to make better-reading code */ var loadedBytes:int = Math.round(e.target.bytesLoaded / 1024); var totalBytes:int = Math.round(e.target.bytesTotal / 1024); var percent:int = (e.target.bytesTotal / e.target.bytesLoaded) * 100; /* Sets the loading data to the textfield */ dataTextField.text = String(loadedBytes + " of " + totalBytes + "KB Loaded\n" + percent + "% Complete"); } Step 14: The Complete Function private function onComplete(e:Event):void { /* Remove listeners */ this.loaderInfo.removeEventListener(ProgressEvent.PROGRESS, onProgress); this.loaderInfo.removeEventListener(Event.COMPLETE, onComplete); //Here you can add a function to do something specific, I just used a trace */ trace("Loaded!"); } Step 15: Document Class Create a new ActionScript file and start writing: package classes { import flash.display.MovieClip; public class Main extends MovieClip { public function Main():void { /* Starts the preloader, "preloader" is the instance name of the clip */ preloader.start(info); } } } This code will be the document class, so go back to the .Fla file and add "classes.Main" to the class textfield in the properties panel. Conclusion You can always change the color of the preloader to use it with differents backgrounds, an easy way to do that is to change the Tint value in the properties of the clip, try it! Thanks for reading, feel free to leave comments and questions. Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://code.tutsplus.com/tutorials/create-an-apple-inspired-flash-preloader--active-915
CC-MAIN-2017-22
refinedweb
1,047
63.7
NAME msgctl - message control operations SYNOPSIS #include <sys/types.h> #include <sys/ipc.h> #include <sys/msg.h> int msgctl(int msqid, int cmd, struct msqid_ds *buf); DESCRIPTION msgctl() performs the control operation specified by cmd on_IPC. IPC_INFO (Linux-specific) Returns) Returns) Returns a msqid_ds structure as for IPC_STAT. However, the msqid argument is not a queue identifier, but instead an index into the kernel's internal array that maintains information about all message queues on the system. RETURN VALUE. ERRORS process is not privileged (Linux: it does not have the CAP_SYS_ADMIN capability). CONFORMING TO SVr4, POSIX.1-2001. NOTES The IPC_INFO, MSG_STAT and MSG_INFO operations are used by the ipcs(1) program to provide information on allocated resources. In the future these may modified or moved to a /proc file system msgget(2), msgrcv(2), msgsnd(2), capabilities(7), mq_overview(7), svipc(7) COLOPHON This page is part of release 3.27 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.ubuntu.com/manpages/oneiric/man2/msgctl.2.html
CC-MAIN-2015-06
refinedweb
173
50.02
On the Friday after the Hadoop Summit 2009 a group of Hadoop committers and developers met at Cloudera's office in Burlingame to talk about Hadoop development challenges. Here are some notes and pictures (see the attachments) from the discussions that we had. Contents Attendees Eric Baldeschwieler, Dhruba Borthakur, Doug Cutting, Nigel Daley, Alan Gates, Jeff Hammerbacher, Russell Jurney, Jim Kellerman, Aaron Kimball, Mahadev Konar, Todd Lipcon, Alex Loddengaard, Matt Massie, Arun Murthy, Owen O'Malley, Johan Oskarsson, Dmitriy Ryaboy, Joydeep Sen Sarma, Dan Templeton, Ashish Thusoo, Craig Weisenfluh, Tom White, Matei Zaharia, Philip Zeiliger Five things Things we don't like about Hadoop - config hell - unit tests need improvement - too few unit tests vs functional tests - Hudson, JIRA, patch process woes - too slow (hudson) - too much mail (JIRA) - Hard to debug / profile - Confusing / arcane APIs - Dcache - Docs could use improvement - new functionality lacks specs, test plan - NN SPOF Things we like about Hadoop - community - open source - ecosystem - good components: HDFS, ZK Project challenges Nigel brings up: Patch Development Process: - pretty convoluted - attach patch to JIRA, Hudson, committer takes it - Releaseaudit and other output unreadable - Running ant hudson-test-patch takes five hours, usually fails - Compiling is slow too ... for testing we need to know that the tests are sufficient -- this means that docs are required Test plan template: What are you testing? what are the risks? How do you test this? Phil: What are examples of *good* JIRAs? Doug: Should we add a test plan field to JIRA? Nigel: People need to take ownership of features and consider scalability, etc. The patch queue is too long... Doug: committers need to be better here Dhruba: doing reviews right is hard Phil: What do people do here? - read patch by itself - apply and use a diff viewer Can we use a review tool -- e.g., FishEye, Reviewboard? Phil will take point on this Doug: We need a "wall of shame" to incentivise people to review other peoples' code rather than just write new patches of their own Pushing Warnings to Zero: - Running test-patch is slow - "rat" has verbose output, needs an overrides system - nobody really volunteering to fix this - checkstyle has 1200+ warnings =-- this is absurd - Need ECliupse settings file that reflects reality - Patch system disincentivizes "janatorial work" to lines near what you're really working on Typo Fixes, etc: - Can we commit things without a JIRA? - Can we commit things without a review? Most things require both Exceptions: www site, rolling a release How do you continuously refactor? Phil : Can we agree that it's ok to fix comments/typos/etc in the same file you're working in? Doug Yes, but people need to say that this is what they're doing in the JIRA comments Tom: This should be clarified on the HowToCommit wiki page Checkstyle: - Eclipse style file (Phil) NetBeans style file (Paul) - both of these should be in top level and included via svn-extenrals - Tune checkstyle config (Tom) - Wholesale codebase sweep - Need to clear patch queue first? - Per-project Core (Owen) / MR / HDFS -- who will take care of the last two here? - Enable in patch testing (Nigel) Matt: Could we think about an auto-reformatting patch service? (lukewarm reaction here) Testing: - We have 10 new machines which can be used soon - Need "ten minute test" so that individuals can run this - Some MR folks are removing MiniMR from tests where it's not needed - Phil: We need a build sheriff to nag on build/test breaks - Nigel: need continuous build on all active branches on commit - not just patch-isolated tests - need code coverage (and diffs of this) - Functional tests are not first-class citizens - need a harness for running shell scripts, etc. - maybe run on EC2? Alex will do some work here - Phil will work on improving docs related to test utils. - We should start using some mock framework LocalJobRunner needs improvement - TestNG allows for shared MiniMRCluster which will help - This should do things like *not* start Jetty, etc. Build System: - Ivy vs. Mason - Nigel is working with someone to build a POM repo - Should we convert to mvn? - How bad is Ivy itself vs. just our usage of Ivy? - Pig and Hive both do something different for depending on Hadoop - Forrest needs to go. - Todd will investigat migrating soimewhere else - There are other XML-based documentation systems Wish Lists MapReduce - JT API (e.g., REST) - pluggable job submission API InputSplit generation as a task - Simplify JT/TT interaction - MR 2.0: - Separate persistent rsrc manager from the per-job manager - Avro input format - For Pipes, too - Preemption and better priorities JobInProgress refactoring - JSON Job History - Pipeline MR (M*R*) - Streaming in mapred, not contrib MapReduce and HDFS - Separate client/server (interface/implementation) jars, packages HDFS - Append and 4379 (flush/sync) - NN metadata inspection tool (already exists in some form in 20) - Multi-NN for same DNs (federated NN) - Multi-DC Hadoop - HDFS mounts + symlinks - pluggable block placement - DFSClient requery NN if it can't find a block Build And Test - One framework for system testing (loca,l and in the cloud) - Standard build and dep managemtn framework - Failure injection - Based on Aspect/J - Back compatibility and verificatino framework - Need ability to run big sets of real jobs without apache necessarily owning those jobs - A messaging issue to the community is that non-committers can and should independently test and then vote +1/-1 on release candidates - Symlink test suite Avro - language-agnostic RPC (C, C++, Ruby, Python...) - column store format in Avro - or put in pig/contrib? - Standard text format (currently only has a binary fmt) - JSON based - HTTP RPC - RPC proxy for testing Common - config variable names, uniform schema - better errors for bad config - range testing for config values - LDAP-based config - Config should support lists of elements, and not just a big string "value" wiuth commas - remove non-public things from public API Pig - decent error messages - column store format in Avro Subproject discussions MapReduce HDFS - Append/Sync : It is being targeted for 0.21 release. Project status "yellow".... meaning that it could be "at-risk" for this release. - HADOOP-4379 : seems to satisfy the append-tests that the Hbase team is doing. The Hbase team is requesting that this jira be included in some sort of a Hadoop release. - HDFS mounts and symlinks : HADOOP-4044. This is very much essential for federating a flock of hadoop clusters. This is targeted for 0.21 release. NameNode scaling to large number of files: One idea that was discussed was to pagein/pageout metedata from the namenode memory on demand. This introduces aditional complexity in the NN code, especially needs fine-grain locking of data structures. This is not going to be attempted in the short term. Federating the hadoop namespace across multiple hadoop namenode (using symlinks) would be a way to solve the problem of a "large number of files". - DFSClient read latency performance: It was discovered that the performance bottleneck was because of a single thread doing sequential reads from one data block followed by the next. An alternative approach would be to open multiple DFSClient instances, and make each instance read different blocks in parallel. This can improve the latency of a single file-read tremendously. Todd Lipcon will try this out. Pig/Hive Approaches to column storage were discussed. The Hive and Pig implementations are fundamentally different -- Hive uses a PAX-like block-columnar approach, wherein a block is organized in such a way that all values of the same column are stored next to each other and compressed, with a header that indicates where the different columns are stored inside the block. This file format, RCStore, can be used by Pig by virtue of writing a custom slicer/loader. The Pig representatives indicated that they were interested in experimenting with RCStore as an option for their workloads. The Pig model splits columns into separate files, and inxeses into them; there is a storage layer that needs to coordinate the separate files. The layer is (naturally) able to do projection pushdown, looking into also pushing down filters. Pig integration is in progress. No technical impediments to implementing a Hive SerDe for Zebra. Sharing code and ideas would be easier if there was a common place to look for this sort of stuff. For example, RCStore can be pulled out of hive -- but where? Commons? Avro? Neither project has plans for storage-layer vector operations (a-la vertica). Metadata discussion pushed off to next week. Join support is roughly equivalent for both systems (map-side aka FRJoin, and regular hash join). Neither supporting or planning on bloom joins (no real demand -- be it because the users aren't familiar with it, or because the workloads don't need it, is unknown). Either way, "we accept patches ;-)" Blue-sky stuff: Oozie integration -- a way to define a hive node? Some other custom processing node? Having Hive and Pig clients automatically push queries off as Oozie jobs so that they are restartable, etc? Pig is looking to move off JavaCC. Currently leaning in the direction of cup+jflex ; hive uses Antlr, which is now a lot less memory-intensive as they've started using lookaheads. This information might be putting Antlr back on the table for Pig. Avro/Common Configuration Registry class for configuration that allows you to "register" configuration - Allows for easier documentation and more control - Allows for warning for unknown keys to help with typos - Give support for description of units and range testings - Do before 1.0 - Push the value validation into the registered class (e.g. range tests, units, allows for <elem></elem>, umask in octal (HADOOP-3847) - Allows for explicit tagging/filtering of "stable", "experimental" and "deprecated" Hadoop configs should be read from a distributed filesystem (HADOOP-5670) - Read from LDAP, ZK, HTTP Avro - C/C++ Support - Single RPC Mechanism across all Hadoop processes/daemons - Language Agnostic RPC w/ versioning and security - RPC Proxy Object (e.g. fault-injection, light-weight embedding) - To/From Import/Export JSON Build And Test Top Level components Nigel proposed: - Backwards Compatibility Testing - System Testing Framework - Patch Testing Improvements - Test Plan Template (for JIRA) - Mock Objects Backwards Compatibility Testing: - API (syntactic) - Static Analysis - API (semantic) - We need scrutiny during code review when compatibility breaks; ways to determine if compatibility is broken during CR: - Be strict with the JIRA incompatible change flag - jdiff changes - Javadoc changes - Test changes (e.g., a JUnit test changes) - Other ideas: - Community-driven testing - Either allow users to submit a workflow and test their workflow for them, or - Just have users submit to a website if compatibility is good for them - Think of a long checklist, where each item is a volunteer running Hadoop that says if compatibility is good or not - Write API tests around 1 or 2 key APIs - Get well-defined specs for these APIs - Test all assertions - Sub-project test suites can help a lot - Solr, Pig, Hive, Mahout, HBase, Chukwa - Sub-projects can be part of the community idea mentioned above - Protocol - Not 0.21, because of Avro - Config - Syntax-static analysis (diff), also semantic tests apply - Data System Test Framework: - Shell framework (perhaps a derivative of Y!'s) - Run on cluster (possibly AWS) or in local debug mode - Can run system and performance tests Patch Testing: - RAT - need more control - Code coverage diff (really interesting) - Better test detection - +0 if they have a test/ folder - +1 if they have test/**/Test*.java - +1 if they have a new @test or "test*(" in a test/ file - -1 otherwise - Other tools (JDepend, Classycle) - Sonar (would run alongside Hudson) Test Plan Template: - New features have the template in the JIRA - Nigel has done some work on this Mock Objects: - (nothing discussed)
http://wiki.apache.org/hadoop/DeveloperOffsite20090612?action=diff
CC-MAIN-2016-07
refinedweb
1,957
54.97
+ in our new projects. In the remainder of this blog post, I’ll detail how to install OpenCV 3.0 with Python 3.4+ bindings on your Ubuntu 14.04+ system. If you have followed along from the previous tutorial, you’ll notice that many of the steps are the same (or at least very similar), so I have condensed this article a bit. That said, be sure to pay special attention when we start working with CMake later in this tutorial to ensure you are compiling OpenCV 3.0 with Python 3.4+ support! How to Install OpenCV 3.0 and Python 3.4+ on Ubuntu UPDATE: The tutorial you are reading now covers how to install OpenCV 3.0 with Python 3.4+ bindings on Ubuntu 14.04. This tutorial still works perfectly, but if you want to install OpenCV on the newer Ubuntu 16.04 with OpenCV 3.1 and Python 3.5+, please use this freshly updated tutorial: A few weeks ago I covered how to install OpenCV 3.0 and Python 2.7+ on Ubuntu, and while this was a great tutorial (since many of us are still using Python 2.7), I think it’s really missing out on one of the major aspects of OpenCV 3.0 — Python 3.4+ support! That’s right, up until the v3.0 release, OpenCV only provided bindings to the Python 2.7 programming language. And for many of us, that was okay. As scientific developers and researchers, it’s a pretty standard assumption that we’ll be sequestered to Python 2.7. However, that’s starting to change. Important scientific libraries such as NumPy, SciPy, and scikit-learn are now providing Python 3 support. And now OpenCV 3.0 joins the ranks! In general, you’ll find this tutorial very similar to the previous one on installing OpenCV 3.0 and Python2.7 on Ubuntu, so I’m going to condense my explanations of each of the steps as necessary. If you would like to full explanation of each step, please refer to the previous OpenCV 3.0 article. Otherwise, simply follow along with this tutorial and you’ll have OpenCV 3.0 and Python 3.4+ installed on your Ubuntu system in less than 10 minutes. Step 1: Install prerequisites Upgrade any pre-installed packages: Install developer tools used to compile OpenCV 3.0: Install libraries and packages used to read various image formats from disk: Install a few libraries used to read video formats from disk: Install GTK so we can use OpenCV’s GUI features: Install packages that are used to optimize various functions inside OpenCV, such as matrix operations: Step 2: Setup Python (Part 1) Let’s get pip , a Python package manager, installed for Python 3: Note that I have specifically indicated python3 when installing pip . If you do not supply python3 , then Ubuntu will attempt to install pip on your Python 2.7 distribution, which is not our desired intention. Alright, so I’ve said it before on the PyImageSearch blog, and I’ll see it again. You should really be using virtual environments for Python development! We’ll be using virtualenv and virtualenvwrapper in this tutorial. These packages allow us to create entirely separate and independent Python environments, ensuring that we don’t junk up our system Python install (and more importantly, so we can have a separate Python environment for each of our projects). Let’s use our fresh pip3 install to setup virtualenv and virtualenvwrapper : Again, notice how I am specifying pip3 instead of just pip — I’m just making it explicitly obvious that these packages should be installed for Python 3.4. Now we can update our ~/.bashrc file (place at the bottom of the file): Notice how I am pointing VIRTUALENVWRAPPER_PYTHON to where our Python 3 binary lives on our Ubuntu system. To make these changes take affect, you can either open up a new terminal or reload your ~/.bashrc file: Finally, let’s create our cv virtual environment where we’ll be doing our computer vision development using OpenCV 3.0 and Python 3.4: Step 2: Setup Python (Part 2) We’re halfway done setting up Python. But in order to compile OpenCV 3.0 with Python 3.4+ bindings, we’ll need to install the Python 3.4+ headers and development files: OpenCV represents images as NumPy arrays, so we need to install NumPy into our cv virtual environment: If you end up getting a Permission denied error related to pip’s .cache directory, like this: Then simply delete the cache directory and re-run the NumPy install command: And you should now have a nice clean install of NumPy: Figure 2: Deleting the .cache/pip directory and re-running pip install numpy will take care of the problem. Step 3: Build and install OpenCV 3.0 with Python 3.4+ bindings Alright, our system is all setup now! Let’s pull down OpenCV need to grab the opencv_contrib repo as well (for more information as to why we need opencv_contrib , take a look at my previous OpenCV 3.0 Ubuntu install post): Again, make sure that you checkout the same version for opencv_contrib that you did for opencv above, otherwise you could run into compilation errors.. Let’s take a second to look at my CMake output: Figure 3: It’s a good idea to inspect the output of CMake to ensure the proper Python 3 interpreter, libraries, etc. have been picked up. Notice how CMake has been able to pick up our Python 3 interpreter! This indicates that OpenCV 3.0 will be compiled with our Python 3.4+ bindings. Speaking of compiling, let’s go ahead and kickoff the OpenCV compile process: Where the 4 can be replaced with the number of available cores on your processor to speedup the compilation time. Assuming OpenCV 3.0 compiled without error, you can now install it on your system: Step 4: Sym-link OpenCV 3.0 If you’ve reached this step, OpenCV 3.0 should now be installed in /usr/local/lib/python3.4/site-packages/ Figure 4: The Python 3.4+ OpenCV 3.0 bindings are now installed in /usr/local/lib/python3.4/site-packages/ Here, our OpenCV bindings are stored under the name cv2.cpython-34m.so Be sure to take note of this filename, you’ll need it in just a few seconds! However, in order to use OpenCV 3.0 within our cv virtual environment, we first need to sym-link OpenCV into the site-packages directory of the cv environment, like this: Notice how I am changing the name from cv2.cpython-34m.so to cv2.so — this is so Python can import our OpenCV bindings using the name cv2 . So now when you list the contents of the cv virtual environment’s site-packages directory, you’ll see our OpenCV 3.0 bindings (the cv2.so file): Figure 5: In order to access the OpenCV 3.0 bindings from our Python 3.4+ interpreter, we need to sym-link the cv2.so file into our site-packages directory. Again, this is a very important step, so be sure that you have the cv2.so file in your virtual environment, otherwise you will not be able to import OpenCV in your Python scripts! Step 5: Test out the OpenCV 3.0 and Python 3.4+ install Nice work! You have successfully installed OpenCV 3.0 with Python 3.4+ bindings (and virtual environment support) on your Ubuntu system! But before we break out the champagne and beers, let’s confirm the installation has worked. First, ensure you are in the cv virtual environment, then fire up Python 3 and try to import cv2 : Here’s an example of me importing OpenCV 3.0 using Python 3.4+ on my own Ubuntu system: Figure 6: OpenCV 3.0 with Python 3.4+ bindings has been successfully installed on the Ubuntu system! As you can see, OpenCV 3.0 with Python 3.4+ bindings has been successfully installed on my Ubuntu system! Summary In this tutorial I have demonstrated how to install OpenCV 3.0 with Python 3.4+ bindings on your Ubuntu system. This article is very similar to our previous tutorial on installing OpenCV 3.0 and Python 2.7 on Ubuntu, but takes advantage of OpenCV 3.0’s new Python 3+ support, ensuring that we can use the Python 3 interpreter in our work. While having Python 3.4+ support is really awesome and is certainly the future of the Python programming language, I would also advise you to take special care when considering migrating from Python 2.7 to Python 3.4. For many scientific developers, the move from Python 2.7 to 3.4 has been a slow, arduous one. While the big Python packages such as NumPy, SciPy, and scikit-learn have made the switch, there are still other smaller libraries that are dependent on Python 2.7. That said, if you’re a scientific developer working in computer vision, machine learning, or data science, you’ll want to be careful when moving to Python 3.4 as you could easily pigeonhole your research. Over the coming weeks the OpenCV 3.0 install-fest will continue, so if you would like to receive email updates when new install tutorials are released (such as installing OpenCV 3.0 with Homebrew, installing OpenCV 3.0 on the Raspberry Pi, and more), please enter your email address in the form below. Hi Adrien, Thanks a lot! I followed your tutorial, and at last, I’ve been able to install OpenCV3 with Python3.4 on my Ubuntu VM. I have a related question : your instruction is to update the .bashrc with “export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3” so python3.4 will become the default interpreter in every future virtualenv. Is it still possible to create virtual environments with python2.7? Hey Sébastien, you can still create virtual environments using Python 2.7. Using a command like this should work: $ mkvirtualenv foo --python python2.7 Thanks! Last question: is it safe to delete the opencv/build directory after install? Or must we keep it forever. Its size is 2.9Gb… As long as you have ran sudo make install, you can safely delete the builddirectory. ok, thanks. And what about the opencv and opencv_contrib directories themselves? It makes up a total of 5.2Gb for me on version 3.3.0 Yes, once OpenCV is successfully installed you can delete both the opencvand opencv_contribdirectories. I’m gonna keep it anyway. Linux with 1TB hard drive but I only use about 25GB of it xD I as well had to rename to Everything worked great installed version 3.1.0 on Ubuntu 14.04 Adrian you by far have the best tutorials out there. Super appreciate what you do. Thanks for the kind words 🙂 And congrats on getting OpenCV installed! Hi I cannot find the cv2 files in dist-packages as well as site-packages and everything worked without error up to this step. If makeexited without error then there was like an issue with your CMake configuration, specifically with the Python 3 section. I would double-check the “Python 3” output. Also, this tutorial has been updated via this one, so there is a chance that you are following the wrong tutorial. Hi Adrian, Such a good tutorial, would you mind explaining the installation steps if I choose to use standard virtual environment coming with Python3.4 (Python3 -m venv foo) instead of virtualenv? Thanks Hey Ferdi, I’m actually unfamiliar with the virtual environment that comes with Python 3.4 (I’m just migrating to Python 3 myself — previously all I could use was Python 2.7). Do you have a link where I can read more about it? Hi Again, thanks for your reply. More information about Python’s native virtual environment can be found in the following link. It would be nice if can use it since it comes with Python 3.4 by default and there is no need to install any other 3rd party tools Awesome, thanks for passing along the link I’ll be sure to read up on it. Sir i am getting error as shown below. I have installed as per your instructions but still unable to install correctly. pls help me. root@chetan-VirtualBox:~# workon cv workon: command not found root@chetan-VirtualBox:~# python Python 2.7.6 (default, Jun 22 2015, 18:00:18) [GCC 4.8.2] on linux2 Type “help”, “copyright”, “credits” or “license” for more information. >>> import cv2 Traceback (most recent call last): File “”, line 1, in ImportError: No module named cv2 Hey Chetan — if you are getting an error related to the workoncommand, then it’s most likely that your ~/.bash_profilefile was not updated corrected or was not reloaded via the sourcecommand. Go back to the “Step 2: Setup Python (Part 1)” section and ensure you have updated your ~/.bash_profilefile to reflect the changes I suggested. Hello adrian, very nice step-by-step tutorial. Thanks man!! 🙂 I had to spend some time before figuring out I had to comment out export ‘PYTHONPATH=$PYTHONPATH:/usr/lib/local/python2.7/dist-packages’ (if such line exists in your ~/.bashrc file) and replace it with ‘PYTHONPATH=$PYTHONPATH:/usr/lib/local/python3.4/dist-packages’ or simply replace 2.7 with 3.4 This will allow our openCV3.0 compilation to choose the python3.4 interpreter specifically and to include the opencv3.0’s bindings for python3.4 as shown above in step 3’s cmake output. Nice catch! OK…here is what I did wrong….I had to exit in the middle of this process and when I came back into my terminal I was not on the cv environment when I did the cmake command. I tried running it again with no luck. I then removed the entire build directory and started again making sure I entered “workon cv” to be sure I was in the virtual environment. I followed the steps and all seems OK now. Nice, I’m glad it’s working for you Mike! And yes, to all other readers: if you leave your computer, restart it, open up a new terminal, etc. be sure to use the workon command to re-enter your virtual environment. thanks. Sorry for all the comments. I also get the following output when I type cv2.__version__: ‘3.0.0-dev’ . Are other people getting the dev version? I was surprised to get a different output. After pulling down the repository from GitHub did you run git checkout 3.0.0? It seems like you might have forgotten the checkout command. Your suspicions were correct. I could have sworn that I ran the “git checkout 3.0.0″command, but I must have forgot it. After I followed the steps again, being more careful I do get the correct version installed. Fantastic, I’m glad it was just a git checkout issue 🙂 i’ve encountered an error whilst running “sudo pip3 install virtualenv virtualenvwrapper”… The response,”Cannot fetch base index………” That sounds like an an issue with pip or your machine. Thanks a lot…..After running the command “cmake -D CMAKE_BUILD_TYPE=RELEASE \ -D CMAKE_INSTALL_PREFIX=/usr/local \ -D INSTALL_C_EXAMPLES=ON \ -D INSTALL_PYTHON_EXAMPLES=ON \ -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \ -D BUILD_EXAMPLES=ON ..” i have encountered this error,”CMake Error at 3rdparty/ippicv/downloader.cmake:75 (message): ICV: Failed to download ICV package: ippicv_linux_20141027.tgz. Status=22;”HTTP response code said error” How can that be resolved?? That sounds like an error related to your internet connection and not OpenCV. The ICV package needs to be downloaded and for whatever reason, the the download is failing — likely due to a connectivity issue (either on your end or OpenCV’s). If anyone found dist-package instead of site-package, then use these commands and i guess it will work… Thanks for sharing Nitin! Hello, i use your guide today at xubuntu 16.04.1 LTS. My link is now: ln -s /usr/local/lib/python3.5/dist-packages/cv2.cpython-35m-x86_64-linux-gnu.so cv2.so regards Thomas Thanks for sharing Thomas! I’ll actually have an updated tutorial covering Ubuntu 16.04 LTS online within the next couple of weeks. i have do in my ubuntu peppermint7, work. but in raspberry not found same like your case. Oops, looks like I’d forgotten to update the .bashrc file! It works fine now. My other query still stands! Hello, I’ve run through the compile and make processes with no warnings or errors. When I run sudo ldconfig from the build directory, there is no output. And when I try to look for the opencv *.so files, I cannot find them (neither in /usr/local/lib/python3.4/dist-packages or site-packages directories. Do you know what might be causing the problem ? What information can I provide to help you understand the issue ? Thanks in advance – and for the great tutorial ! Hey Jonathan — be sure to check for .sofiles in the build/libdirectory. This has happened to me a few times before and then I just manually moved it to the site-packagesdirectory. did you moved all those files??? Nice, works out of the box. Although, I installed virtual environment with just “virtualenv env –python python3.4”. Awesome, I’m glad the tutorial worked for you Milos! 🙂 Thanks for this great tutorial which worked pretty good. But everytime I start terminal the “workon command: not found” then I have to go thru exporting the virtualenv and virtualenvwrapper. Why is it happening and is it normal? Is there any way to make a virtual terminal which will work for cv directly? Thanks Hey Almazi — just to clarify: only the workoncommand is not found? The mkvirtualenvand rmvirtualenvcommands work? Hi Adrian, thanks a lot for your tutorial. I’m having the same problem as Almazi, and I get also the same message when executing mkvirtualenv and rmvirtualenv, how can a deal with it? Thanks in advance Hey Andrea — you might want to take a look at the “Troubleshooting” section of this post. While the instructions are for the Raspberry Pi, you can apply the same logic to fix the virtualenv commands. I’m working on a SIFT project that need matplotlib as prerequisite but I can’t build(หรือ compile) it. Could you give me any example or instruction? Please see my reply to Sommai above. I’m working on a SIFT project that need matplotlib as prerequisite but I can’t build it to virualane . Could you give me any example or instruction? Please see this tutorial on how to install matplotlib into the cvvirtual environment. The best, most straightforward installation procedure I’ve ever read. THANKS!!! Thanks so much Evan, I’m glad it worked for you! 🙂 Works great in Terminal, but i can’t import cv2 in idle or spyder ? how do configure them to work in your virtual cv ? Thanks ! I’m not familiar with the Sypder IDE, but if it’s anything like PyCharm, you just need to set the proper virtual environment for the project. Please see this post for more information. It works !!! with Eclipse or Pycharm … I don’t understand why not with my spyder ? I Have yet some trouble to solve with matplotlib… (I will try your patch of your blog ) – Thank you very much ! have you heard of Conda/Anaconda? It makes managing your packages and environments easier. I think it would be a better fit for computer vision researchers and practitioners than pip/virtualenv which are designed more for python programmers. Yes, I’ve heard of and used conda before. They actually used to include OpenCV in their pre-compiled packages but it was removed awhile back. I also don’t talk about it publicly, but the Continuum company and myself had a pretty big falling out (at least on my end of things). You won’t see me use their products on the blog. I agree Hello, I’m looking to connect my “cv” virtualenv that was created in this tutorial to PyCharm (as per one of your other tutorials) but I can’t figure out where the “cv” virtualenv is stored. As per this guide, where do I look to find that virtual environment? (also what is it’s final extension?) Thanks a million, Jack Ahh, of course. I try to dig around as long as I can before posting stupid comments. I found the .virtualenvs dir inside my home directory then all I had to do was point the PyCharm interpreter at .virtualenvs/cv/bin/python3.4 (as reference for anyone else in my position). Thanks for the guide! -Jack I’m glad you figured it out Jack. Also, for future reference, I have a tutorial dedicated to setting up PyCharm with virtual environments. Hey, I did as you instructed and I get the similar output on the cmake but after the packages path: site-packages, there’s another line that says python(for build): 2.7 Also, compiling with -j4, I get several warnings all .cpp or .hpp files. Anyway, the install works, with my camera, video files, all fine but now, how do I use opencv on my regular python install? should I redo all the steps without virtualenv? You can ignore the “for build” section, that part of the CMake script is buggy. As for using OpenCV in your regular Python install, just the cv2.sofile into your regular site-packagesdirectory (if it’s not already there). From there, you’ll be able to import OpenCV outside of the virtual environment. I want to keep both install-opencv-3-0-and-python-3-4-on-ubuntu/ and install-opencv-3-0-and-python-2-7-on-ubuntu/ will this tutorial keep older one intact ? Actually I want all following on my system, so that I can use according to need:- 1. opencv 3.0 & python 3.4 2. opencv 3-0 & python 2.7 3. opencv 2.4 & python 2.7 How can I have all the three ? It certainly is possible, but it’s non-trivial. The trick is to compile and install OpenCV 3.0 (for either Python version first). But that is the only time you run make install. Afterwards, you create separate Python virtual environments for the remaining two installs. Then you compile your respective Python versions. But only keep the builddirectory after compiling. You can then sym-link in the cv2.sobindings from the respective build/libdirectories into the correct Python virtual environments. Like I said, it’s not easy to do — but it’s absolutely possible. FYI, for those who are using Anaconda as their python install, here’s a site that goes through installing OpenCV 3 (worked for me): Thanks for sharing Tom! Thanks a ton sir. This is what I was looking. I did everything what is in this blog and didn’t get the library file. I was thinking it had something to do with anaconda. But thanks a lot. Now how to update the package in future? Until and unless the user updates, we can’t do anything? hello Can i install different versions of python on one machine too? e.g python 3.4 with opencv 3 in one virtual environment and python 2.7 with opencv 2.4 in other one? Thanks. You can, but it’s not a trivial process. See my reply to Hacklavya above for more information. sudo python3 get-pip.py i cant execute this . can you tell me how ASAP. thank you I dont understand this line “Note that I have specifically indicated python3 when installing pip . If you do not supply python3 , then Ubuntu will attempt to install pip on your Python 2.7 distribution, which is not our desired intention.” how to do that? thank you please help Why can’t you execute the command? Are you getting an error message? I am using ubuntu 14.04 and python2.7.6 was already in this. I installed opencv and python3.4. opencv can be imported using python2.7 but cant be used by python3.4 Double check your output of CMake and sure Python 3.4 was properly picked up. It sounds like OpenCV support was compiled for Python 2.7. This tutorial is great, but I had one problem with it that wound up taking me a day to debug. The current version of cmake installed by aptitude does not search for python libraries beyond version 3.3, which means it couldn’t find the libraries my 3.4 version of python. The solution is add an option to cmake: “-DPYTHON_LIBRARIES=/usr/lib/x86_64-linux-gnu/libpython3.4m.so” Once I added this configuration option, then cv2.cpython-34m.so magically appeared (albeit in “/dist-packages/”, not “/site-packages/”) and all was resolved. Thanks for sharing your solution David! Thanks, Adrian! Nice work. Thanks Matt! 🙂 Brilliant writeup, and great job with the site. Not sure if you studied with him, but it has Ramit’s fingerprint all over it…in a good way! On AWS Ubuntu, attempting to import cv2gave: The temporary fix for this is to associate null device with raw1394 (firewire, which we don’t need anyways). Thanks for sharing Alan! And which Ramit is that 😉 I assume Sethi? Nice article… I will test it using Gnome Ubuntu 16.04 Alpha 2 🙂 Thanks Hi Adrian, It would be great if you can have a similar tutorial for setting up open cv on Red Hat OS. Thanks hi Adrian, is it possible to install opencv 3 on a raspberry pi 2 which is running ubuntu? Yes, you can absolutely install OpenCV 3 on a Raspberry Pi that is running Ubuntu. The same instructions apply. Hey Adrian, great tutorial – thank you very much! You could save a lot of time and disc space by just downloading and extracting the release archives of opencv and opencv_contrib instead of cloning the whole git repository. For version 3.1.0, they can be found here: Thanks for sharing the tip! Thanks a lot for the tutorial . I managed to install everything in Kubuntu 15.10. Works great. However, I have a question. Why do you have to be inside of this cv environment (workon cv) to be able to import cv. If I just open up a terminal and write python3 , then import cv2, it does not work. Could you explain a bit just what that environment is and why this library does not work outside of it? For a more detailed discuss of Python virtual environments (and why they are important), I would suggest giving this article a read. The short answer is that virtual environments give you independent and sequestered environments for each of your projects — that way, you don’t have to worry about conflicting library versions from one project to the next. Hello could you helpme? just after I writo: impor cv2: I I get: Traceback (most recent call last): ImportError: No mudule named ‘cv2’ If you’re having trouble importing the cv2bindings, please see the “Troubleshooting” section of this post. Hey, Adrian. How can i cancel all changes i did? I want to try all steps from the beginning with a clear system? It depends on what steps you want to restart from. But in general, the best way to simply restart and clear the system is to re-install Ubuntu and start fresh. I followed the tutorial and it worked for me. But everytime when I need to use OpenCV, i should run “mkvirtualenv cv” and then use python. Is there any way to avoid this virtual environment? Using Python virtual environments is a best practice and I highly recommend you use them. That said, if you want to skip using them, just ignore all steps that relate to virtualenv and virtualenvwrapper. Thanks for this Adrian. A nice clean way to get things set up. I experimented inside a xubuntu 14.04 VM also, just to keep things even cleaner.. However, I found for modules other than the ones starting with x, like xfeatures2d, the other contrib modules don’t export to python2/3. I checked my ../build/modules/python3/pyopencv_generated_include.h for the modules, and they are not listed. I checked here to see why not all contrib modules are listed in that file: Seems the headers are not properly tagged with export macros for opencv3… Else, i’m doing something wrong… Adrain, or anyone else, did you get all the contrib modules working? Does anyone have an example of objectnessBing working following this method? Thanks Hey Paul — not all modules in opencv_contribhave Python bindings. This could be because the modules are now and experimental, or there has not been a request to create the Python bindings. Again, it’s normal for not all modules in opencv_contribto have Python bindings. Hi Adrian, Thanks a lot! I followed your tutorial, I installed openCV 3.1.0 with python3.5. on my Raspbarry pi 3 with ubuntu 16.04 mate. I just met one error before . Need to change the name “cv2.cpython-34m.so cv2.so” to “cv2.cpython-35m-arm-linux-gnueabihf.so” in my case. Just said thank you again,OpenCV is really interesting . Thanks for sharing the update Ashing! Thanks a lot for the great tutorial! One thing that confuses me: the package name. So even though we are using opencv 3, we nees to import “cv2”?? Not “cv3”? Thank you. Yep, it’s quite confusing! The name of the package for both OpenCV 2.4 and OpenCV 3 is cv2. I’m not sure why they didn’t change the name to cv3for OpenCV 3, but that’s just how it is! Thank you It worked for python3! Thank you Hey Adrian! I installed 3.1.0 on Ubuntu 15.10 with this guide and it works like a charm! Thanks! I have a question– Now after installing openCV with python, if I want to use it with C/C++ what do I need to do? I don’t formally support C/C++ code on the PyImageSearch blog, but you’ll need to use g++ to compile your code. This works great. But I need to install pycuda with this opencv installation. What procedure should be followed?? Coz I need them both for same program. Hey Adrian!! I am currently using opencv 2.4 with python 2.7. Now i want to upgrade to opencv 3 but is it recommended to upgrade python too. Will python 2.7 be not effective with OpenCV 3 ?? You certainly don’t need to upgrade to Python 3 if you don’t want to. OpenCV 3 is just as compatible and effective with Python 2.7 as it is Python 3. Thank you, only by your guide I’ve successfully installed OpenCV 3.1.0 on a clean Ubuntu 16.04 with Python 3.5, everything went smoothly. At first I tried Ubuntu 14.04 but something went wrong and there was no cv2*.so file in site-packages after installation. Congrats on getting OpenCV installed Alex! Hi Adrian, I am trying to install OpenCV along with the extra modules but is still running into issues with sift being not defined. I have followed your instructions step by step and also calling sift with: sift = cv2.xfeatures2d.SIFT_create() I am still getting: ‘module’ object has no attribute ‘xfeatures2d;. Please advise. Much appreciation in advance. Double-check your output of CMake and ensure that xfeatures2dis listed in the set of OpenCV modules to be compiled. If it’s not, then it’s likely that your path to the opencv_contribdirectory is incorrect. First of all great tutorial. I have done everything as mentioned in the blog. I am able to use cv2.imshow() with images. When I try to use cv2.imshow() in videocapture(I am using logitech webcam) no errors are generated but no window with video opens up. When I try to store one of those frame as images it works. So how do I get video feed window opened? (Note: I am using Linux Mint 17.3 xfce) How are you accessing your system? Natively with keyboard and monitor? Or via SSH or VNC? I also haven’t tried accessing a webcam via Linux Mint, so I’m not sure if that would cause a problem or not. Thanks Adrian for this easy to understand tutorial, even for Ubuntu beginners like me! No problem, I’m happy I could help! 🙂 Thank you for your article, very useful I managed easily to install OpenCV working with Python thanks to your explanations 🙂 Congrats on getting OpenCV installed, that’s awesome! 🙂 Hi Adrian, thanks for the in-depth tutorial, but I ran into a problem and am interested to see if you might know what’s up here. I’m attempting to install opencv 3.1.0 with python 3.4.3 on Ubuntu 14.04. After running sudo make installand sudo ldconfig(which gives no output) I end up with a bunch of libopencv_*.so files in /usr/local/lib/ (corresponding to different opencv modules, I believe) and all the same ones in opencv/build/lib/, however I have no cv2.cpython-34m.so in my /usr/local/lib/python3.4/dist-packages/ folder, nor a copy of it in the opencv/build/lib folder. I attempted to find it using sudo updatedb && locate cv2.cpython-34m.soin case it was created somewhere else, but had no success. I saw that another commenter here had a similar issue, then upgraded to Ubuntu 16.04, and then was successful, but upgrading from 14.04 isn’t quite an option for me right now. Do you have any idea what I can do differently in compilation / configuration, or with my current results, in order to get opencv working? Thanks, Tom Hey Tom — if you don’t have a .sofile for your OpenCV bindings in opencv/build/lib, then it’s likely that the bindings were not built. Go back to your CMake step and check the list of libraries to be compiled. Ensure that “python3” is in that list. Hey man just wanted to say you are doing awesome work providing education for free, thanks a ton. Hello Adrian i wanted to know what if i do not wish to work in some virtual environment but would rather work on an IDE such as pycharm , what should I do for that after i have gone through all steps mentioned here. If you would like to work in PyCharm, you can simply create a new PyCharm project and update the “Project Interpreter”, just like I do in this tutorial. I installed on xubuntu 14.04, all steps are the same. There are few modifications. After following all steps I found no cv2.cpython-34m.so file. Turned out, cmake could not find python libraries. So I had to add option -D PYTHON_LIBRARIES=/usr/lib/python3.4/config-3.4m-i386-linux-gnu/libpython3.4m.so to cmake configure. But surprisingly, it still didn’t work and cmake was failing to find python 3.4 lib headers. Only after I deleted CMakeCache.txt and executed cmake command, it was able to pick up py3.4 libs. Thanks for sharing your experience! I wanted to use OCR example and teserract to build as a part of Open cv. To do this you need to add libtesseract-dev, libtesseract-ocr and libleptonica-dev before configuring cmake. hi, I am also getting cv2.__version__: ‘3.0.0-dev’ , I tried to run git checkout 3.0.0 but problem is that i have downloaded zip file from github so it shows “fatal: Not a git repository (or any of the parent directories): .git” will it cause any problem? This shouldn’t cause a problem, but keep in mind that you’re on the development branch. Since you downloaded the .zip rather than the repo, that is why the git command is not working. for the command mkvirtualenv cv it says command not found but till that step everything went correct Please see the “Troubleshooting” section of this post — it details suggestions on how to resolve this problem. what does “(place at the bottom of the file”) signify ? to which file shall we add this ? Open up your ~/.bashrcfile in your favorite text editor, scroll to the bottom, and append the relevant commands. the above problem got solved but fot the command: sudo apt-get install python3.4-dev it says E: Unable to locate package python3.4-dev E: Couldn’t find any package by glob ‘python3.4-dev’ E: Couldn’t find any package by regex ‘python3.4-dev’ Can you check which Python version you are using I’m willing to bet you are using Python 3.5, in which case the command would be: $ sudo apt-get install python3.5-dev Is there anyway i can use Opencv2 without disturbing the environment for opencv3 ? Yes — please see my reply to “Hacklavya” above. Hi Adrian, I want to run a python script using OpenCV in the virtual environment, but then have it continue running (in the background), after I log out. It seems like the script gets terminated when I log out. I tried the “&” at the end of the execution line… Am I missing something? Thanks for your help. Hi Adrian, Thanks, I guess I was not using the subshell right in ubuntu. This thread helped: Thanks for sharing! It didn’t create python library. Found the reason. It’s required to have CMake with CPack, so from CMake 3.1. Best regards! I had the same problem, took me a while to find it. Updated CMake to 3.6.0 and it fixed the problem. Some changes to Python/OpenCV/Ubuntu might have made it necessary. Maybe the page could be updated to reflect this? Thanks for the tutorial! Thanks for sharing Yohan, I’ll look into this. Good tutorial +10, i’d tried everything and it worked fine, sorry for my English, but is a good tutorial. I’m happy to hear it worked for you Arnoldo! 🙂 Hello Adrian, Thank a lot for this tutorial. Instructions were crisp and clear. I was able to install opencv with python3.4 without any issues. -Rajeev Congrats on getting OpenCV installed on your system, Rajeev! 🙂 Thank you so much 🙂 It worked for me on Ubuntu 16.04 64bit with Python3.5.2 and opencv 3.0.0.0 checkout. Congrats, good job! I just installed OpenCV 3.1.0 into an Ubuntu 16.04 virtual machine. I had a weird issue in that after I ran the make install command, the appropriate cv2.xxx.so file hadn’t appeared in the /usr/local/lib site packages folder. After struggling to work out what the issue was (I checked the relevant make file, and I couldn’t see a problem), I just ran the cmake and make install commands again. After this second time, the file appeared, and I was able to follow the rest of the process fine. No idea what the problem was, nor really what resolved it (I don’t know much about the cmake system), but if anyone else is having this trouble, maybe they should try running the commands a second time. Hopefully this helps! Hello sir! Thanks for your great work! Everything on your site works like charms for me but have a little inquiry. I have recently taken interest in firewire devices while working with opencv-python. From what i’ve found, libdc1394 is the one handling firewire in ubuntu and that i have to include that while building opencv from source. As I’m just started learning programming early this year and not yet familiar with cmake, so could you give me some guidance or hint? And please pardon my bad english. Hey Amirul — thanks for the comment, although to be honest, I’m not sure what your specific question is? Hey Adrian, Thank you, it successfully installed on my pc. But, how coud I enter into virtual environment from the next time onwards to use workon cv. I have tried: source ~/.bashrc mkvirtualenv cv but it is throwing an error that command could not found. Could you please help me to fix this issue Thank you Hey Shravan — if the mkvirtualenvcommand is not being found, then you’ll want to double-check your .bashrcfile. It seems that it might not have been updated properly. I have a smooth pass following your installation steps. I am super excited! Thanks for sharing and I am going to share this with my friends on fb! Great job getting OpenCV installed on your system Jeffrey — and thanks for sharing the post with your friends 🙂 Hi Adrian, what do you suggest to do? EDIT: the answer to my question: libtiff5-dev would work just as fine 🙂 Nice job resolving the issue Mona Jalal! 🙂 Thanks for the thorough tutorial! I wanted the install for both python3 and python2.7. Python3 works fine now, could you please guide me how to get it working for python 2.7? Please see my reply to Hacklavya above where I discuss the general procedure on how to do this. I don’t know if someone has post it yet or not, but I had some problems with pyenv and cmake, instead of virtualenv. Some answers: Thank you for the amazing tutorial! When I follow the instruction then in Step 4 you hv said that the file location will be /usr/local/lib/python3.4/site-packages/. but in my installation it shows python3.5/dist-packages. So if I change all the next steps to Python3.5 & dist-packages it dosent work Any hints to solve the problem plz??? You can manually copy the cv2.sofile from dist-packagesto dist-packagesif you need to. I will be posting an updated tutorial covering Ubuntu 16.04 and Python 3.5 within the next 2 weeks so I would also suggest waiting for that tutorial. Hey Adrian, have you posted the tutorial? Because I’m not able to locate cv2.so and also, not able to import cv2: (terminal output removed due to formatting) Hey Gaurav — can you clarify which OpenCV version you are using? My latest guide is for Ubuntu 18.04. Hi, I reached the cmake step but after that when i ran the make-j4, i get fatal errors like math.h not found or stdlib.h not found, does any one have any ideas to what might be happening? I’m on an ubuntu 14.04 system and ran all this with the git chekout 3.0.0 NOTE: Edited to remove terminal output that was destroying comment formatting. Hey Kelevra — try adding the following switch to your CMake command: -D ENABLE_PRECOMPILED_HEADERS=OFF I’m willing to be that should resolve the issue. Hi Adrian, Thank you so much for your installation description. I am almost through, gone through all the steps, but when I want to open cv2 in python it does not find the module. The question is, I am working with python 3.5, so have replaced that in part of the steps where it said python 3.4, else to install the devices in step 2 (part 2) was not possible *the devices are not available since I have python 3.5). Then there seem to be two problems, no /usr/local/lib/python3.5/site-packages/ file is created for python 3.5, there is only a dist-packages file. So when I eventually want to open the cv2 file, it cannot be found. Although I can open openCV and use the workon cv command in cd ~/.virtualenvs/cv/lib/python3.5/site-packages/ so the change for python 3.5 is made there, but not in the usr file. Any suggestions on how I can solve this problem? Kind regards, Atilla Thanks a lot! Hi Atilla — if you are using Python 3.5 I would suggest using this tutorial instead. Hey i have a query regarding this.. See I have installed this opencv and python in my local user and it is working correctly.. My question is “What if I want to make it work in hduser? How can I set a path in that user?” If you have a separate user you can share a virtual environment. This isn’t recommended, but it can be done. Otherwise, I would suggest creating a separate virtual environment for your other user and sym-linking in the cv2.sobindings. Okay but i tried to installed in another user but the error is saying that you have already installed it. Now how i need to uninstall the opencv from my local user? Python, the associated virtual environments, and OpenCV should already be installed on your machine (if you successfully followed this tutorial). To create a virtual environment for your new user, you would have to: 1. Switch to that user. 2. Ensure their .bashrcfile is updated correctly. 3. Create a new virtual environment. 4. Install NumPy. 5. Sym-link in the OpenCV bindings. Again, another option would be to use the already existing virtual environment for user A for user B as well (this might involve changing permissions though, I’ve never directly tried this). Hello Adrian! Thanks a lot for this Tutorial! But i’ve got a problem. In the shell after workon cv etc.. all works fine like in this tutorial. But if i set the Python SDK on usr/bin/python3.4 I`m not able to do “import cv2”. It works fine in shell but my IDE (IntelliJ -Python Plugin) marks it red with “no module named cv2”. Can u help? Best, Dominik I’m not an IntelliJ user, but it sounds like you need to set the Python interpreter for your project to be the virtual environment. I demonstrate how to do this with PyCharm here. I imagine a similar solution exists for IntelliJ. Hi Adrian, I followed your instructions, and successfully installed open CV 3.2.0 on Yosemite 10.10.5. However, I then learned that I need ffmpegsupport, so I installed ffmpegwith Homebrew, and then tried to rebuild openCV, but it failed because of undefined symbols in freetype. I then tried to build openCV again without ffmpeg, but it failed with the same errors [This gist] () shows the cmakecommand and the error messages. Perhaps I should note that I always ran make cleanbetween build attempts. I ran nm freetype.cpp.oand those symbols are undefined all right, but I was able to install openCV 3.2.0 on Ubuntu 16.04 (with ffmpeg) by following your instructions, and I see that the same symbols are undefined in freetype.cpp.oso the linker must be finding them in another file. Before installing ffmpeg, I ran brew upgradeand I noticed that it upgraded freetype, so this may be the source of the problem, but I have have no clue what the problem is. Unfortunately, I don’t have much knowledge of makefiles this complex, so I don’t know what to look at next. How can I get more verbose output from the make process? I’d like to see what the ldcommand is. I’m guessing that when Homebrew upgraded freetype, it downloaded a prebuilt binary instead of building from source, so that some object file needed for the build is missing. If that’s the case, I still wouldn’t know what to do about it, but at least, I’d know what to google. I really need openCV on the Mac. The Ubuntu installation is just a temporary stopgap. I will appreciate any advice you can give me. P.S. I hope I’ve addressed the formatting problems this time. It would help klutzes like me if the site displayed previews of replies, but I don’t know how to accomplish that. Hey Saul — unfortunately without direct access to your machine I’m not sure what the exact error may be. I personally don’t use FFMPEG so I’m a bit unfamiliar with the intricacies it may introduce to the FFMPEG compile. While less than ideal, I would suggest re-installing Yosemite and then following the instructions from the very beginning. This will ensure you have a fresh machine to work on with no previous libraries causing problems. Hi Adrian! Can I use the same installation of openCV 3.1 for both python2 and python3? Y am using it with no problem, after installing the 3.1 version of opencv using your tutorial for python 2.7. I tried just doing the last steps of symlinking to the other installation, but it seems not to work. Do I have to do the whole installation again to be able to use it with python3? No, Python 2.7 and Python 3 are separate Python versions and therefore have separate interpreters and libraries. You will need to compile OpenCV twice — once for Python 2.7 and then for Python 3, taking care to copy the cv2.sofile to the respective site-packagesdirectory for your Python versions. Hi Adrian! I followed your instruction but I had a problem at line: ~$ sudo python3 get-pip.py because it gives me: sudo: python3: command not found I’m using Ubuntu 12.04 LTS with python 2.7.3. Do I have to install python3.4 before starting to install OpenCv? Thanks! This tutorial is intended for Ubuntu 14.04, so if you’re using a prior version of Ubuntu you will need to install Python 3 on your system. Hi Adrian! I solved the problem found yesterday by installing Ubuntu 14.04. Now I have followed your great tutorial and i reached the end! 🙂 But I have a little problem… when I import cv2, it gives me: Traceback (most recent call last): File “”, line 1, in ImportError: No module named ‘cv2’ Even another guy had a similar error (Chetan on July 31, 2015) but I followed your suggestion to him and I can’t resolve the problem. But I try to look in the directory, as you’ve done, with ~/.virtualenvs/cv/lib/python3.4/site-packages ls and, while for you cv2.so is sky blue, for me it is red! Why? Could it be this the problem? Thanks a lot! If the cv2.sofile is red, then your sym-link is pointing to a file that doesn’t exist. Locate your cv2.sofile on disk, then re-create the sym-link so that it points to the valid file path. Hello! I encountered the same problem, then I found myself to have made the mistake of not running this step earlier: sudo make install in the ~/opencv/build folder I hope this helps! BTW, thank you very much Mr. Rosebrock! Thank you very much for this extensive tutorial. Works perfectly fine with current versions. OpenCV 3.2.0 installed fine with Python 3.5 on Ubuntu 16.04.2, I just modified the version numbers from your tutorial. Also I didn’t set -D INSTALL_C_EXAMPLES=OFF in step three and the compilation worked flawlessly. The bug in OpenCV 3.1.0 seems to have been fixed in 3.2.0. 🙂 Thanks for sharing Joe! And congrats on getting OpenCV installed! I already installed opencv , but now only i realised that it only works in python 2.7 . When i run it in python 3 it shows error that ” no module named cv2″ but i need it in python3 . So should i follow ur steps without uninstalling the already existing opencv ? please help You don’t need to uninstall OpenCV, simply re-run CMake from within a Python 3 virtual environment. Re-compile. And then re-install. From there you’ll be good to go. Hi! Thanks for everything! Little question, I’ve got the same problem as sheikha, but my virtual environment doesn’t work, how can I open the “~/.bashrc” file to change it like indicated in step 1? Cheers You can use your favorite text editor to edit ~/.bashrc. For beginners, I would suggest using nano: $ nano ~/.bashrc I’ve been having much problems on installation and compilation of opencv and python3 opencv on my system. But with this tutorial, I found it much easier. Danke Congrats on getting OpenCV installed Vanshergzie, nice job! Hi! I am running into a little problem when I run the following code, $ make -j4 I get a fatal error: stdlib.h: No such file or directory After this the program terminates. Can you please help me out? I would make sure you have installed the “build-essentials” package. Don’t forget this line at the top of the tutorial: $ sudo apt-get install build-essential cmake git pkg-config Thank you so much . Finally I am able to set up openCV . Finally up and running 🙂 Nice job Milan, congrats! Thanks a lot. Saved a lot of time Congrats on getting OpenCV installed Omer. Hi Adrian, Fantastic tutorial, thank you so much! I am only stuck on the last bit, and I think it is because I have previously installed Anaconda on my computer. When running in python import cv2 I get the following response: ImportError: /home/una/anaconda3/lib/libstdc++.so.6: version `GLIBCXX_3.4.21′ not found (required by /home/una/.virtualenvs/cv/lib/python3.5/site-packages/cv2.so) Can you help me fix this? Thanks again, Rebecca Hi Rebecca — it sounds like OpenCV might have been compiled against a different OpenCV version than the one you are trying to import it into. Double-check your CMake paths or try on a fresh install without Anaconda. Hi Adrian. I followed your tutorial with python3.6.1 that has been installed from source just before. Every step of your tutorial worked, except the one where python3.6-dev should be installed. I get an error saying: E: Unable to locate package python3.6-dev E: Couldn’t find any package by regex ‘python3.6-dev’ do you have any idea what the problem is? with python 3.4 everything workes just fine… thanks for your help Try updating your apt-get packages: $ sudo apt-get update And then continuing. Awesome instructions. Thanks! Just in case someone has similar issues like I had … I had issues with building OpenCV for Python because of my Anaconda install. Anaconda installation appended another Python source to the $PATH environmental variable. Because of that I was not getting the output you’ve shown in Figure 3, as the required packages were not found. After I removed the culprit from the $PATH variable, could follow the rest of the steps successfully. Thank you for sharing Andrei! Sir provide me the source code in opencv python for fingerprint recognition ,,,,, i have lots of problems ,,,, i will be really thankfull to u for this kind act For those, who are bumping into: ImportError: libopencv_core.so.3.2: cannot enable executable stack as shared object requires: Invalid argument under Bash on Ubuntu on Windowswhen it’s to make import cv2, I would recommend doing the following: $ sudo apt-get execstack $ sudo execstack -c /usr/local/lib/libopencv_* After doing this it should get back to normal. At least I was able to perform some basic operations with Open CV. Thanks for such a detailed tutorial, Adrian! One small correction: $ sudo apt-get install execstack, sure. Thank you for sharing, Vladislav! Thanks for sharing! During installation, i came across issues with missing OpenEXR header files. If anyone else faces this issue please take a look in /usr/include/OpenEXR and move the missing headers to /usr/local/include/OpenEXR. Thanks for sharing, Keags! hi! i completed the installation without any errors. but i can’t import opencv using “import cv2”. it shows the import error:no module named cv2. There are many reasons why you may run into import errors. Take a look at this blog post, specifically the “Troubleshooting” section as I detail the main reasons you may get an import error related to the “cv2” bindings. You should add –system-site-packages key to mkvirtualenv, I think. Because it took couple of hours for me to get why I can import numpy in default system but can’t int cv virtual environment that created by mkvirtualenv. Hi Eugene — I actually don’t recommend using the system wide site-packagesdirectory. It sounds like you may have forgotten to install NumPy into your “cv” Python virtual environment: Hi Adrian, Just wanted to say thanks for this guide. I was able to use your process to build a docker image with python 3.6.3, opencv 3.3.1 and opencv_contrib 3.3.1. I’m an OpenCV noob so I can’t tell if everything is 100%, but the SIFT/SURFT example with test_image.jpg work with no errors or warnings. Thanks! If the SIFT/SURF example is working then you have OpenCV installed with all the extras. Congrats on getting OpenCV installed! Sir, I can import cv2 in my terminal, but not in my python idle. How can I import cv2 in my python idle? The GUI version of IDLE does not respect Python virtual environments. If you want an IDLE-like environment I suggest installing Jupyter Notebooks inside your Python virtual environment (Jupyter Notebooks are superior to IDLE). Thank you, I followed your tutorial and got opencv+python3 working on my rPI zero! Congrats on getting OpenCV installed! Thank you sir, i have a question. I installed opencv through this post completely. but i need downgrade of opencv. so i want to re-build or remove cv, and i removed ‘build’ directory…..and attempt to – pip uninstall – sudo apt-get purge libopencv-* python-data python-opencv but opencv don’t appear in pip list and although seems like success delete by apt-get purge, still possible ‘import cv2’ in python 🙁 how can i delete cv2 or re-build??? thank you for reading. please understand i don’t speak english well Hey Hong — OpenCV will not appear in the “pip” list. I’m not sure how you installed or uninstalled OpenCV on your system so if at all possible I would recommend re-installing Ubuntu on your system and installing OpenCV from scratch. This is the best method to ensure you follow the tutorial without error. It has been a good tutorial, I used it! , but now I would like to change the version to opencv 3.3, what do I have to do? Thank you You will need to recompile + reinstall OpenCV using the OpenCV 3.3 source code. If you would like to keep your original environment you will need to create a new Python virtual environment before compiling + installing: $ mkvirtualenv cv33 Has anyone tried installing opencv3 with python3 on Ubuntu 18.04? I am really excited. The OpenCV + Ubuntu install method in this post will work with just a few changes to Ubuntu 18.04. I will be posting a new OpenCV + Ubuntu 18.04 install tutorial in the next 1-2 months. Even though I’m new to Ubuntu, I was able to install Python and Open CV. It was seamless. Thanks Adrian for step by step description. I’m also interested in installing Machine Learning package. (scikit-learn) so that I can try out some projects that will be combination of image processing and machine learning. Could you please guide me with that? Thank you Adrian…I’m able to install machine learning and other packages like numpy, scikit-image using this link: This link also has description for setting up PyCharm environment. Congrats on getting OpenCV + Python installed on your Ubuntu machine, Suhas! Great job 🙂 I have a number of OPenCV + scikit-learn tutorials here on PyImageSearch. This one on simple image classification would be a great start for you. I also like this one on Local Binary Patterns. I would also recommend working through both Practical Python and OpenCV and the PyImageSearch Gurus course where I have many examples on image classification using machine learning and scikit-learn. Hi ,Adrian .I have a question, in my ubuntu,i have use both python2 and python3,how can i install opencv for both python2 and python3 at the same time.Thanks. You will need to perform two compiles: 1. One for Python 2 2. And another for Python 3 How is this process different from a pip install. Particularly… pip3 install opencv-python Using just “opencv-python” will not give you the full install of OpenCV. It leaves out the “contrib” package. If you wanted to use pip then the package name would be “opencv-contrib-python” Hi Adrian, thanks for this tutorial! I have been trying to follow the instructions but the cv2 library was not installed. I mean, I cannot find this file cv2.cpython-34m.so. I don’t know why the result swhow in the figure 3 no match with my result. “Figure 3: It’s a good idea to inspect the output of CMake to ensure the proper Python 3 interpreter, libraries, etc. have been picked up.” Thanks in advance. Hey Oscar — are you in the “cv” Python virtual environment? Make sure you are prior to executing “cmake”. Hey Adrian, thanks a lot for your time. It was my mistake, I was trying to follow the steps for Ubuntu but I needed install in raspberry with Raspbian stretch. So, I installed the Opencv library following this tutorial: And it works for me. Thanks again for your time and support with these tutorials. Have a good day! Congrats on getting OpenCV installed on your Pi, Oscar! im not able to use surf and sift though the procedure seem to have included contrib packages Make sure you follow my most recent OpenCV + Ubuntu install guide — it will help you install OpenCV 4 with the contrib module enabled. Hi Adrain, I have successfully installed openCV 3.4.2 by following the exact same steps given in your tutorial. Now, I would like to update to openCV 4 or the latest stable version. Do I need to uninstall 3.4.2 first? If so, how should I uninstall it. I am a novice in Ubuntu and python environment. Thanks for your contribution. No, you don’t need to uninstall OpenCV 3.4.2. You can compile and install OpenCV 4.
https://www.pyimagesearch.com/2015/07/20/install-opencv-3-0-and-python-3-4-on-ubuntu/
CC-MAIN-2019-43
refinedweb
10,441
76.72
Stream Control Transmission Protocol (SCTP) Associations In TCP, a stream is just a sequence of bytes. In SCTP, it has a different meaning; a stream is a logical channel along which messages are sent, and a single association can have many streams. The original motivation for streams came from the telephony domain, where multiple reliable channels were needed, but the messages on each channel were independent of those on other channels. In last month's article, we pointed out some TCP applications that could benefit from streams, such as FTP, which uses two sockets for data and control messages. In addition, an increasing number of applications are multithreaded, and streams open up the possibility of a thread in one peer being able to communicate with a thread in another peer without worrying about being blocked by messages sent by other threads. The socket I/O calls read/write/send/recv do not know about SCTP streams. By default, the write calls all use stream number zero (this can be changed by a socket option), but the read calls will read messages on all streams, and there is no indication as to which stream is used. So, to use streams effectively, you need to use some of the I/O calls that are designed specifically for SCTP. Each endpoint of an association will support a certain number of streams. A Linux endpoint, by default, will expect to be able to send to ten streams, while it can receive on 65,535 streams. Other SCTP stacks may have different default values. These values can be changed by setting the socket option SCTP_INITMSG, which takes a structure sctp_initmsg: struct sctp_initmsg { uint16_t sinit_num_ostreams; uint16_t sinit_max_ostreams; uint16_t sinit_max_attempts; uint16_t sinit_max_init_timeo; } If this socket option is used to set values, it must be done before an association is made. The parameters will be sent to the peer endpoint during association initialisation. Each endpoint in an association will have an idea of how many input and output streams it will allow on an association, as discussed in the previous paragraph. During the establishment of the association, the endpoints exchange these values. Negotiation of final values is just a matter of taking the minimum values. If one end wants 20 output streams, and the other wants only 10 input streams, the result is the smaller, 10, and similarly for the number of streams in the opposite direction. An endpoint will need to know how many output streams are available for writing in order not to exceed the limits. This value is determined during association setup. After setup, the endpoint can find this by making a query using getsockopt(). However, there is a little wrinkle here: a socket may have many associations (to different endpoints), and each association may have set different values. So, we have to make a query that asks for the parameters for a particular association, not just for the socket. The parameter to ask for is SCTP_STATUS, which takes a structure of type sctp_status:; }; This has fields sstat_instrms and sstat_outstrms, which contain the required information. See Listings 2 and 3 for a client and server negotiating the number of streams in each direction. Listing 2. streamcount_echo_client.c #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <netinet/sctp.h> #define ECHO_PORT 2013 char *usage_msg = "usage: astreamcount_echo_client ip-addr istreams ostreams"; char *msg = "hello"; void usage() { fprintf(stderr, "%s\n", usage_msg); exit(1); } int main(int argc, char *argv[]) { int sockfd; int len; struct sockaddr_in serv_addr; int port = ECHO_PORT; struct sctp_initmsg initmsg; struct sctp_status status; if (argc != 4) usage(); /* create endpoint */ sockfd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP ); if (sockfd < 0) { perror("socket creation"); exit(2); } /* connect to server */ serv_addr.sin_family = AF_INET; serv_addr.sin_addr.s_addr = inet_addr(argv[1]); serv_addr.sin_port = htons(port); memset(&initmsg, 0, sizeof(initmsg)); initmsg.sinit_max_instreams = atoi(argv[2]); initmsg.sinit_num_ostreams = atoi(argv[3]); printf("Asking for: input streams: %d, output streams: %d\n", initmsg.sinit_max_instreams, initmsg.sinit_num_ostreams); if (setsockopt(sockfd, IPPROTO_SCTP, SCTP_INITMSG, &initmsg, sizeof(initmsg))) { perror("set sock opt\n"); } if (connect(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0) { perror("connectx"); exit(3); } len = sizeof(status); memset(&status, 0, len); if (getsockopt(sockfd, IPPROTO_SCTP, SCTP_STATUS, &status, &len) == -1) { perror("get sock opt"); } printf("Got: input streams: %d, output streams: %d\n", status.sstat_instrms, status.sstat_outstrms); /* give the server time to do something */ sleep(2); /* no reads/writes are done */ close(sockfd); exit
http://www.linuxjournal.com/article/9749?page=0,2
CC-MAIN-2015-40
refinedweb
744
54.73
Read A MIDI File into a Score You can read a MIDI file into your program using the Read.midi() function. This function expects an empty score and the name of a MIDI file (saved in the same folder as your program). For example, Read.midi(score, "song.mid") inputs the musical data from the MIDI file “song.mid”, and stores them into score. Once the file has been read in, then you can manipulate or playback the score. For example, from music import * score = Score() # create an empty score Read.midi(score, "song.mid") # read MIDI file into it Play.midi(score) # play it back A Score created from an external MIDI File You can create a MIDI file from your program using the Write.midi() function. This function expects a Score (Part, Phrase, or Note) and a file name. For example, Write.midi(score, "song.mid") writes the musical data in score into the MIDI file called “song.mid”. This file is saved in the same folder as your program. If the MIDI file already exists, it will be overwritten.
https://jythonmusic.me/api/music-library-functions/read-write/
CC-MAIN-2020-05
refinedweb
182
76.32
Advanced pattern: Concurrent operations with async actor¶ Sometimes, we’d like to have IO operations to other actors/tasks/components (e.g., DB) periodically within an actor (long polling). Imagine a process queue actor that needs to fetch data from other actors or DBs. This is problematic because actors are running within a single thread. One of the solutions is to use a background thread within an actor, but you can also achieve this by using Ray’s async actors APIs. Let’s see why it is difficult by looking at an example. Code example¶ class LongPollingActor: def __init__(self, data_store_actor): self.data_store_actor = data_store_actor def run(self): while True: data = ray.get(self.data_store_actor.fetch.remote()) self._process(data) def other_task(self): return True def _process(self, data): # Do process here... pass There are 2 issues here. Since a long polling actor has a run method that runs forever with while True, it cannot run any other actor task (because the thread is occupied by the while loop). That says l = long_polling_actor.remote() # Actor runs a while loop l.run.remote() # This won't be processed forever because the actor thread is occupied by the run method. ray.get(l.other_task.remote()) Since we need to call ray.get within a loop, the loop is blocked until ray.get returns (it is because ray.getis a blocking API). We can make this better if we use Ray’s async APIs. Here is a documentation about ray’s async APIs and async actors. First, let’s create an async actor. class LongPollingActorAsync: def __init__(self, data_store_actor): self.data_store_actor = data_store_actor async def run(self): while True: # Coroutine will switch context when "await" is called. data = await data_store_actor.fetch.remote() self._process(data) def _process(self): pass async def other_task(self): return True Now, it will work if you run the same code we used before. l = LongPollingActorAsync.remote() l.run.remote() ray.get(l.other_task.remote()) Now, let’s learn why this works. When an actor contains async methods, the actor will be converted to async actors. This means all the ray’s tasks will run as a coroutine. That says, when it meets the await keyword, the actor will switch to a different coroutine, which is a coroutine that runs other_task method. You can implement interesting actors using this pattern. Note that it is also possible to switch context easily if you use await asyncio.sleep(0) without any delay.
https://docs.ray.io/en/master/ray-design-patterns/concurrent-operations-async-actor.html
CC-MAIN-2022-05
refinedweb
409
60.11
same scope (the global scope),,’s few good reasons to nest them more than 2 levels deep. However, in later lessons, we will see other related cases where the scope resolution operator needs to be used more than once.. 🙂 ‘access’. 🙂: ‘namespace’ 😉 >.cpp(). main.c 🙂 ‘std: ‘exf::bx()’ I dont understand why, should we actually be defining the full functions (not just the proto) in the header files now? (The rest of this tutorial has been great by the way… Thank you so much for sharing your wealth of knowledge and teaching skills 😉 ) 🙂 . ‘mio::suma(int,int)’ and ‘suma ‘namespace pollution’? You should never ‘use’
http://www.learncpp.com/cpp-tutorial/4-3b-namespaces/
CC-MAIN-2017-30
refinedweb
104
71.85
If anybody's telling you that BizTalk Server 2004 (BTS) is the best Microsoft product to come along in quite a spell, believe it. If they tell you that it's the closest thing to ERP that Microsoft has yet accomplished, believe it. If they tell you it's easy to use and sets the bar higher for distributed application configuration, you can safely laugh in their faces. There's a lot of irony in this, because the third BTS is by far the best, is deeply integrated into Visual Studio .NET (where it ought to be), and of course takes the very graphically friendly Orchestration feature to a new level of versatility. The whole point of these features is ease-of-use. But there's almost no developers' documentation yet available, so you're pretty much on your own in figuring out the best way to do things; and what little documentation exists isn't nearly detailed enough. The error handling also leaves much to be desired: tracking down your mistakes can be "pull-your-hair-out" frustrating. Below, we'll cover several bases: we'll track down the places where your errors are noted; we'll look at some of the easy-to-make mistakes; and consider some good practices to make interface development easier. Where the errors are When you've built an interface in BTS 2004 and start testing, you could find yourself submitting test messages to the server and having your enabled solution swallowing them whole. Nothing seems to go wrong, but your message doesn't end up where it was supposed to. Here's where to look: Health and Activity Tracking This powerful utility (Start | All Programs | Microsoft BizTalk Server 2004 | Health and Activity Tracking) is your first line of inquiry when a message vanishes. Select Operations | Messages to track down your missing message (Figure A). There are several options for refining a query to list messages, but leaving the defaults in place is usually fine. Click on Run Query, and you'll get a list of messages. Look for your missing message, which should have a "suspended" status. You can get more detail by selecting your message in the list, then right-clicking—choose either Orchestration Debugger or Message Flow. Event Viewer If you've sent yourself a test message that is intended for a receiving schema that will define it, or translate it, or reformat it, and the incoming message is a little off in some way, the schema's going to show a parsing error or application-internal error (this is common in message translation such as EDI or HL7). You'll get no outward sign of it, just a vanished message. Go to Start | All Programs | BizTalk Server 2004 | BizTalk Server Administration to bring up the BTS Administration Console. Then open the Event Viewer (below Console Root) and click on Application. You'll get an error log that will tell you what happened in the parsing of the message (Figure B). Click on the error to get the specific parsing error message. Warning: a parsing error will suspend processing of your message upon occurrence. Therefore, your message could contain many formatting mistakes or incorrect fields and you must discover them one at a time via this technique (i.e., find an error, fix the data, run it again, find an error, fix the date ... and so on). Not for your to-do list Here are some things not to do when developing and testing a messaging interface. There are many more, but these are a few that BizTalk won't give you any hints about. Deploying the same schema twice If you're developing more than one interface at once, it doesn't matter that you're using a different solution to store each interface's project components; if you deploy the same schema within different assemblies, you're going to run afoul with the fact that the namespace for both instances is the same. (Example: Suppose you're deploying two HL7 ADT interfaces, each for a different ADT document. They'll both probably use the same message header parsing schema, since they'll have those segments and elements in common.) Try to deploy two header projects as assemblies, containing the same schema, and neither will work! Solution: change the namespace—or, to your surprise, you may find that one deployed assembly will do the job for both solutions. Using the wrong pipeline When setting up Send and Receive ports for message transport, you'll have various pipelines available to you, configurable within the ports (See Figure C). These pipelines are the circulatory system for your messages, and using the wrong one can cause BTS to fail to process the incoming message (which pipeline to use for particular steps in your particular interface or process is too detailed a question to address here). In short, if a message didn't show up where it was supposed to, and you aren't yet experienced in using BTS 2004, experiment with changing the pipeline. If this gives you a good result, you'll rapidly learn to match the correct pipeline to the correct step in your processes. Going from one message format to another in an orchestration This kind of exercise is fraught with peril, but is exactly the kind of capability you really need for true distributed application messaging. There are many messaging transactions that BTS 2004 does painlessly—inbound formatted document to XML document, XML document to an adapter, and then into a SQL Server database, etc.—but jumping from an interim XML document to, say, a set of objects in a .DLL defining a database record, with an associated method for inserting a new record into a table—that's tricky stuff, and BizTalk will be very fussy about accommodating you. You need to create an orchestration to do something that complex, for a start. You'll do a Construct Message for the receiving object(s), which you must make XML-serializable (if you want to map from an XML document). Moreover, if the method you're using to add the elements to a database table is in a different .DLL, that one will have to be XML-serializable, too, even if it contains no objects with properties and even if it compiles cleanly in other contexts. Do your element-to-element mapping in a Transform expression (use xPath to pull data out of your XML document), and use an Expression to execute the Add. Make it easier on yourself There are several tricks you use, in development, to help make this whole process easier. Here are a few: Set up multiple Sends with file capture When you're putting together messaging in BizTalk Explorer (apart from any orchestrations), you can attach multiple Send ports (See Figure D) to a specific Party ("Party" is how a pipeline identifies an external messaging source). One of these Send ports will be the next step in your business process, but you can create one or more for your own use. This not only gives you a running breakpoint of sorts, but allows you to examine the contents of a message at different points along its journey. To create such a Send port, right-click on Send Ports in BizTalk Explorer and select Add Send Port (let it be a Static One-Way Port). Set it up with Transport Type File and a directory/file address. A copy of your message (or acknowledgment, or whatever it is you're processing) will be deposited in that file/directory (you can also use this process within an orchestration, if it's helpful, though the set-up of the File transport will be done through a wizard when you create the port). Isolate the orchestration from pipeline activity If possible, do your receiving, qualifying, and acknowledgment of messages in BizTalk Explorer, and do mapping and database work in orchestrations. Why? First, your orchestrations will be all the more complex if you do everything there. Second, the graphic display of those portions of the process is really unnecessary, since the receiving, qualifying, and acknowledgment steps are part of any messaging transaction with an outside party. Third, debugging is simpler: the techniques above will help you debug the pitch-and-catch with your messaging partner (call this the "network" interface), as well as permitting you to enable multiple partners in a collective process with less confusion (while confining the business logic in the orchestration to exactly that, the business logic). One of the powerful aspects of BTS 2004 is the pipelines; let them do as much work as possible and keep the front-end messaging work distinct from the crunch of mapping and database interface. (Note that this concept should not apply to communication with other points in your internal application system. Include them in your orchestrations or your solution design will be incomprehensible to other developers.) It may not be what BTS 2004's creators intended, but in the long run, it's tidier and easier to debug..
http://www.techrepublic.com/article/debugging-your-biztalk-server-2004-interface-solutions/5497814/
CC-MAIN-2017-39
refinedweb
1,511
56.29
DNAnexus Platform API bindings for Python Project description dxpy: DNAnexus Python API Building From the dx-toolkit root directory: make python Debugging Set the _DX_DEBUG environment variable to a positive integer before running a dxpy-based program (such as dx) to display the input and output of each API call. Supported values are 1, 2, and 3 with increasing numbers producing successively more verbose output. Example: $ _DX_DEBUG=1 dx ls Python coding style - Conform to PEP-8. - Relax the line length requirement to 120 characters per line, where you judge readability not to be compromised. - Relax other PEP-8 requirements at your discretion if it simplifies code or is needed to follow conventions established elsewhere at DNAnexus. - Document your code in a format usable by Sphinx Autodoc. - Run pylint -Eon your code before checking it in. - Do not introduce module import-time side effects. Do not add module-level attributes into the API unless you are absolutely certain they will remain constants. For example, do not declare an attribute dxpy.foo( dxpy._foois OK), or any other non-private variable in the global scope of any module. This is because unless the value is a constant, it may need to be updated by an initialization method, which may need to run lazily to avoid side effects at module load time. Instead, use accessor methods that can perform the updates at call time: _foo = None def get_foo(): initialize() return _foo Other useful resources: Python version compatibility dxpy is supported on Python 2 (2.7+) and Python 3 (3.5+) Code going into the Python codebase should be written in Python 3.5 style, and should be compatible with Python 2.7. Python 2.7 support will end on March 1, 2021. To facilitate Python 2 compatibility, we have the compat module in Also, the following boilerplate should be inserted into all Python source files: from __future__ import absolute_import, division, print_function, unicode_literals dxpy.compathas some simple shims that mirror Python 3 builtins and redirect them to Python 2.7 equivalents when on 2.7. Most critically, from dxpy.compat import strwill import the unicodebuiltin on 2.7 and the strbuiltin on python 3. Use strwherever you would have used unicode. To convert unicode strings to bytes, use .encode('utf-8'). - Use from __future__ import print_functionand use print as a function. Instead of print >>sys.stderr, write print(..., file=sys.stderr). - The next most troublesome gotcha after the bytes/unicode conversions is that many iterables operators return generators in Python 3. For example, map()returns a generator. This breaks places that expect a list, and requires either explicit casting with list(), or the use of list comprehensions (usually preferred). - Instead of raw_input, use from dxpy.compat import input. - Instead of .iteritems(), use .items(). If this is a performance concern on 2.7, introduce a shim in compat.py. - Instead of StringIO.StringIO, use from dxpy.compat import BytesIO(which is StringIO on 2.7). - Instead of <iterator>.next(), use next(<iterator>). - Instead of x.has_key(y), use y in x. - Instead of sort(x, cmp=lambda x, y: ...), use x=sorted(x, key=lambda x: ...). Other useful resources: Convention for Python scripts that are also modules Some scripts, such as format converters, are useful both as standalone executables and as importable modules. We have the following convention for these scripts: Install the script into src/python/dxpy/scriptswith a name like dx_useful_script.py. This will allow importing with import dxpy.scripts.dx_useful_script. Include in the script a top-level function called main(), which should be the entry point processor, and conclude the script with the following stanza: if __name__ == '__main__': main() The dxpy installation process (invoked through setup.pyor with make -C src pythonat the top level) will find the script and install a launcher for it into the executable path automatically. This is done using the entry_pointsfacility of setuptools/distribute. - Note: the install script will replace underscores in the name of your module with dashes in the name of the launcher script. Typically, when called on the command line, main() will first parse the command line arguments (sys.argv). However, when imported as a module, the arguments need to instead be passed as inputs to a function. The following is a suggestion for how to accommodate both styles simultaneously with just one entry point ( main): def main(**kwargs): if len(kwargs) == 0: kwargs = vars(arg_parser.parse_args(sys.argv[1:])) ... if __name__ == '__main__': main() Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/dxpy/
CC-MAIN-2022-21
refinedweb
770
58.58
Set the value of the specified display property of type integer. #include <screen/screen.h> int screen_set_display_property_iv(screen_display_t disp, int pname, const int *param) The handle of the display whose property is to be set. The name of the property whose value is being set. The properties that you can set are of type Screen property types. A pointer to a buffer containing the new value(s). This buffer must be of type int. param may be a single integer or an array of integers depending on the property being set. Function Type: Delayed Execution This function sets the value of a display property from a user-provided buffer. 0 if the command to set the new property value(s) was queued, or -1 if an error occurred (errno is set; refer to /usr/include/errno.h for more details).
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.screen/topic/screen_set_display_property_iv.html
CC-MAIN-2018-47
refinedweb
140
66.64
One .NET 3.5SP1, which ships with Windows 7, you can still use these features thanks to the WPF Shell Integration Library that is now available on MSDN Code Gallery. The WPF Shell Integration Library shares the same features that are found in .NET 4, and the APIs are very compatible. This means it will be easy to upgrade your application to .NET 4 with only minor changes to the shell integration code. It also means that almost all of the documentation for the System.Windows.Shell namespace also applies to the WPF Shell Integration Library. There are a few differences, as noted on Code Gallery: - TaskbarItemInfo is implemented as an attached property on the TaskbarItemInfo class, and is attached to an instance of a Window. In .NET 4, the TaskbarItemInfo is a property directly on Window. - A TaskbarItemInfo instance has a strong affinity to a single window and cannot be shared. - Features are in the Microsoft.Windows.Shell namespace, and in the xmlns. In .NET 4, the taskbar integration APIs are within the System.Windows.Shell namespace and the standard WPF xmlns. To see how similar the APIs are, and what the differences are, I took the sample code for the TaskbarItemInfo class from the MSDN documentation, and converted it to work with the WPF Shell Integration Library on .NET 3.5SP1. What changes were needed? Let’s see… First, I downloaded the library from Code Gallery. Then I added a reference to Microsoft.Windows.Shell.dll to my sample project, and in MainWindow.xaml, I added the required xmlns: xmlns:shell=. From there, I changed <Window.TaskbarItemInfo> to <shell:TaskbarItemInfo.TaskbarItemInfo>, and then added the shell: prefix to the rest of the tags. <shell:TaskbarItemInfo.TaskbarItemInfo> <shell:TaskbarItemInfo x: <shell:TaskbarItemInfo.ThumbButtonInfos> <shell:ThumbButtonInfoCollection> <shell:ThumbButtonInfo <shell:ThumbButtonInfo </shell:ThumbButtonInfoCollection> </shell:TaskbarItemInfo.ThumbButtonInfos> </shell:TaskbarItemInfo> </shell:TaskbarItemInfo.TaskbarItemInfo> In the code behind page, I just changed my using/Imports statement from System.Windows.Shell to Microsoft.Windows.Shell. That’s it. With these few changes, the TaskbarItemInfo sample runs on .NET 3.5 exactly as it does on .NET 4.
https://blogs.msdn.microsoft.com/wpfsdk/2010/02/22/wpf-shell-integration-library-for-net-3-5/
CC-MAIN-2017-39
refinedweb
352
60.82
Some features added in recent years to Linux and other modern Unix operating systems. A supplement to "Advanced Programming in the UNIX Environment" by Stevens. sendfile The sendfile system call was added to FreeBSD 3.0 in 1998 and Linux 2.2 in 1999. It adds a way to copy data from one file descriptor to another without first copying the data into process memory. Here is the signature on Linux: #include <sys/sendfile.h> ssize_t sendfile(int out_fd, int in_fd, off_t * offset, size_t count); The sendfile on FreeBSD has a different signature. It can only be used to copy data from a file descriptor to a socket. However, in the case of Linux, the input file descriptor must be a regular file; i.e. it must be possible to call mmap on the file descriptor. Also, before Linux 2.6.13 the output file descriptor had to be a socket. There is a pip package pysendfile for using this feature from Python. epoll If a process needs to read from multiple file descriptors, it is a bad idea to block on one of them since data could arrive at the other. For example if another process has the two files open for writing and is blocked writing to the other file descriptor, a deadlock results. There are two POSIX system calls for dealing with the situation: select and the newer poll. select imposed a limit, usually 1024, on the number of file descriptors that could be multiplexed at the same time, whereas poll did not. However, both select and poll are rather slow when used on more than 100 file descriptors. There are also pselect and ppoll variants which allow the process to change the signal mask that is in effect while the process is blocked on the file descriptors. To fix the slowness, kqueue was introduced in FreeBSD 4.1 (2000) and epoll in Linux 2.5.44 (2002). The idea is create a separate system call for declaring a list file descriptors that can be blocked on. This allows the kernel to maintain a data structure containing the list so that it has less processing to do each time the process blocks. Here are the Linux epoll signatures: #include <sys/epoll.h> int epoll_create(int); typedef union epoll_data { void *ptr; int fd; uint32_t u32; uint64_t u64; } epoll_data_t; struct epoll_event { uint32_t events; /* Epoll events */ epoll_data_t data; /* User data variable */ }; epoll_create creates an epoll list and returns a file descriptor for it. epoll_ctl adds, modifies, or removes (depending upon the 2nd argument) a file descriptor (the 3rd argument) in the epoll list (the 1st argument). The 4th argument is a pointer to a struct. The events field of the epoll_event struct is a bitmap used by the kernel to indicate whether the file descriptor is available for reading or writing, as well as some error conditions. epoll_wait is used to block on the file descriptors. The process allocates an array of epoll_event structs. The 2nd argument points to the array, the the 3rd argument is the size of the array. A timeout can be specified, or set to -1 to block indefinitely. The system call returns the number of file descriptors available for i/o, or -1 on error. close is used to free an epoll list. kqueue in FreeBSD and Darwin has greater generality than epoll in Linux, but it is also more complicated to use. Rather than describe it, we describe the libuv library, which among other benefits allows one to write multiplexing code in a portable way. libuv Node.js started in 2009. libuv was made available as a separate library in 2012. libuv is available in package managers: $ brew install libuv $ sudo apt install libuv Here is an implementation of tee using libuv: #include <uv.h> uv_pipe_t stdin_pipe; uv_pipe_t stdout_pipe; uv_pipe_t file_pipe; typedef struct { uv_write_t req; uv_buf_t buf; } write_req_t; void free_write_req(uv_write_t *req) { write_req_t *wr = (write_req_t*) req; free(wr->buf.base); free(wr); } void on_stdout_write(uv_write_t *req, int status) { free_write_req(req); } void on_file_write(uv_write_t *req, int status) { free_write_req(req); } void write_data(uv_stream_t *dest, size_t size, uv_buf_t buf, uv_write_cb cb) { write_req_t *req = (write_req_t*) malloc(sizeof(write_req_t)); req->buf = uv_buf_init((char*) malloc(size), size); memcpy(req->buf.base, buf.base, size); uv_write((uv_write_t*) req, (uv_stream_t*)dest, &req->buf, 1, cb); } void alloc_buffer(uv_handle_t *handle, size_t suggested_size, uv_buf_t *buf) { *buf = uv_buf_init((char*) malloc(suggested_size), suggested_size); } void read_stdin(uv_stream_t *stream, ssize_t nread, const uv_buf_t *buf) { if (nread < 0){ if (nread == UV_EOF){ // end of file uv_close((uv_handle_t *)&stdin_pipe, NULL); uv_close((uv_handle_t *)&stdout_pipe, NULL); uv_close((uv_handle_t *)&file_pipe, NULL); } } else if (nread > 0) { write_data((uv_stream_t *)&stdout_pipe, nread, *buf, on_stdout_write); write_data((uv_stream_t *)&file_pipe, nread, *buf, on_file_write); } // OK to free buffer as write_data copies it. if (buf->base) free(buf->base); } int main(int argc, char **argv) { uv_loop_t *loop = uv_default_loop(); uv_pipe_init(loop, &stdin_pipe, 0); uv_pipe_open(&stdin_pipe, 0); uv_pipe_init(loop, &stdout_pipe, 0); uv_pipe_open(&stdout_pipe, 1); uv_fs_t file_req; int fd = uv_fs_open(loop, &file_req, argv[1], O_CREAT | O_RDWR, 0644, NULL); uv_pipe_init(loop, &file_pipe, 0); uv_pipe_open(&file_pipe, fd); uv_read_start((uv_stream_t*)&stdin_pipe, alloc_buffer, read_stdin); uv_run(loop, UV_RUN_DEFAULT); return 0; } Compile and run the program: $ gcc -luv -o uvtee uvtee.c $ cat /etc/hosts | ./uvtee output.txt inotify The inotify suite of system calls were appeared in Linux 2.6.13 (2005). It provides an efficient way for a process to be notified of changes to the file system. #include <sys/inotify.h> int inotify_init(void); int inotify_init1(int flags); int inotify_add_watch(int fd, const char *pathname, uint32_t mask); int inotify_rm_watch(int fd, int wd); inotify_init creates a file descriptor for a list of monitored files. read is used to discover changes to those files, and close is used to release the list. inotify_init1 takes a flag so that a non blocking file descriptor can be created. inotify_add_watch adds a file to the list of monitored files. The 3rd argument is a bit mask which specifies the types of operations which are monitored. For a regular file, possible values are: - IN_ACCESS - IN_ATTRIB - IN_CLOSE_WRITE - IN_CLOSE_NOWRITE - IN_DELETE_SELF - IN_MODIFY - IN_MOVE_SELF - IN_OPEN Additional values for directories: - IN_CREATE - IN_DELETE - IN_MOVED_FROM - IN_MOVED_TO non-blocking inotify kqueue on Darwin chroot chroot has been around since Version 7 Unix. It is useful to review it before discussing namespaces. Despite the age and ubiquity of chroot, it is not a POSIX standard. Here is the signature on Linux: #include <unistd.h> int chroot(const char *path); If a process makes this call with the directory path "/home/bob/stuff", then the process loses the ability to open files outside of that directory, either for reading or writing. The process is said to be in a chroot jail. Any child processes inherit the limitation. The process calling chroot has created a mapping from the files it can see of the form "/**" to files on the host operating system of the form "/home/bob/stuff/**". Namespaces, discussed below, work the same way: every resource in the child namespace is mapped to a resource in the parent namespace. The chroot command is not particularly secure. If the process had open file descriptors to paths outside of the jail, they are maintained. If the working directory of a jailed process is moved outside of the jail, the jailed process can access files outside of the jail using "../../../foo" style paths. chroot makes it possible to run an application with its own set of executables. One could even run the application as root, but if the process had access to a kill command, it could stop processes outside of the jail. clone/setns/unshare Linux namespaces are what make it possible to implement containers. Containers are a limited type of virtualization in which the host and guest are running the same operating system. In contrast to hypervisor virtualization, the processes, files, and other resources in the guest are also processes, files, or other resources in the host. Linux namespaces allow the creation of jails or containers in which other operating system resources, such as user ids, process ids, and network resources, are mapped from container to host. A process which does not have a container mapping is not even visible inside the container and signals cannot be sent to it, even by a privileged process inside the container. The mount namespace was the first, introduced with Linux 2.4.19 (2002). The user namespace was introduced with Linux 3.8 (2013). Namespaces can be assigned to a process when it is created if it is created with the clone system call. A process can also change its namespace with the setns system call. #include <sched.h> int clone(int (*fn)(void *), void *child_stack, int flags, void *arg, ... /* pid_t *ptid, struct user_desc *tls, pid_t *ctid */ ); int setns(int fd, int nstype); int unshare(int flags); When cloning a new process, the third argument is a bitmask. The following bits can be set to create a new namespace: - CLONE_NEWUSER - CLONE_NEWPID - CLONE_NEWNS - CLONE_NEWUTS - CLONE_NEWNET - CLONE_NEWIPC The namespaces for a process are available in the /proc directory. For example, for process 7236: - /proc/7236/ns/cgroup - /proc/7236/ns/ipc - /proc/7236/ns/mnt - /proc/7236/ns/net - /proc/7236/ns/pid - /proc/7236/ns/user - /proc/7236/ns/uts These paths can be passed to the open system call to get a file descriptor which can be used as the 1st argument of setns. The 2nd argument is a bit mask similar to the 3rd argument of clone. If the 2nd argument is zero, the file descriptor can be for any namespace type. The files in /proc/PID/ns are symlinks. Using readlink on them returns an inode number which can be used as an indentifier for the namespace. The unshare system call creates new namespaces for the current process and joins those namespaces. Which namespace types is specified by the argument, which is a bit mask. user Each user namespace—excluding the root user namespace—has a mappings of uids and gids from the namespace to the parent namespace. These mappings are created by writing to the files /proc/PID/uid_map /proc/PID/gid_map The writing process must have CAP_SETUID in the user namespace of PID. To use setns to join another user namespace, a process must have CAP_SYS_ADMIN in that namespace. pid When a new pid namespace is created, the first process in the namespace is assigned PID 1 and has root power inside the namespace. Its parent PID, should it make a call to getppid, is 0. This process also becomes the parent of descendant processes which are orphaned. Signals are treated specially for PID 1; signals for which the process does not have a signal handler are always ignored. If the PID 1 for a pid namespace exits, attempting to fork inside the namespace results in a ENOMEM error. If it is desirable to use the /proc file system for a namespace that was created, it must be explicitly mounted. This command, if executed inside the namespace, hides the parent namespace /proc file system: $ mount -t proc proc /proc unshare and setns can be used to create a new pid namespace. They do not put the calling process in the new PID namespace. Instead, the first child created by the calling process becomes the PID 1 in the new namespace. In this respect, unshare and setns behave differently that for other namespace types, but it means that getpid always returns the same value for a process.
http://clarkgrubb.com/modern-unix
CC-MAIN-2019-18
refinedweb
1,893
61.77
#include <Libs/MRML/Core/vtkURIHandler.h> Definition at line 11 of file vtkURIHandler.h. Definition at line 17 of file vtkURIHandler.h. This function writes the downloaded data in a buffered manner need something that goes the other way too... Determine whether protocol is appropriate for this handler. NOTE: Subclasses should implement this method Definition at line 48 of file vtkURIHandler.h. The Usual vtk class functions. This function gives us some feedback on how our download is going. Use this function to set LocalFile virtual methods to be defined in subclasses. (Maybe these should be defined to handle default file operations) Reimplemented in vtkHTTPHandler. various Read/Write method footprints useful to redefine in specific handlers. Definition at line 98 of file vtkURIHandler.h. local file, it gets passed to C functions in libcurl Definition at line 95 of file vtkURIHandler.h. Definition at line 97 of file vtkURIHandler.h. Definition at line 96 of file vtkURIHandler.h.
https://apidocs.slicer.org/master/classvtkURIHandler.html
CC-MAIN-2021-21
refinedweb
158
52.46
ITestContext TestNG Interface: This post, we are going to discuss one crucial ITestContext TestNG Interface in detail. Some times in the test script, we need to share the objects with different test cases during the execution, to handle such scenarios, we can take the help of ITestContext TestNG Interface. So ITestContext is used to store and share data across the tests in selenium by using the TestNG framework. TestNG offers a means of storing and retrieving objects between tests through the ITestContext interface. This interface allows you to store using the inherited setAttribute() method and retrieve using getAttribute() objects. ITestContext TestNG Interface Since the ITestContext is created once and remains active for the duration of your test run, this is the perfect way to implement object sharing in your test suite. Making the ITestContext available in your test methods is easy: pass it as a parameter to your test method. Let’s Go through one scenario and try to understand how you can use with TestNG: Suppose in a Class we have 10 Test Cases or @Test method, which are covering the end to end scenario. Now all 10 Test cases are sharing some data, for example, Customer_id, which is unique, and the same value should be used for the end to end scenario. We Can handle this in 2 ways: - If all the test cases are present in a class, then we can create a Class level variable and share it. But it requires high maintenance. - Use ITestContext Let us understand how we can handle such scenario using ITestContext: As we mentioned above, we can use the ITestContext by passing as a parameter to any test method like below: @Test public void test1a(ITestContext context) { } You can set the value for ITestContext TestNG Interface by using the setAttribute() method like below: @Test public void test1a(ITestContext context) { String Customer_id = "C11012034"; context.setAttribute("CustID", Customer_id); } Now we need to retrieve the value from the ITestContext by using the getAttribute() method and use as per our requirement like below: String Customer_id1 = (String) context.getAttribute(“CustID”); Sample TestNG Class: package iTestContextLearn; import org.testng.ITestContext; import org.testng.annotations.BeforeTest; import org.testng.annotations.Test; public class Test1 { @BeforeTest public void SetData(ITestContext context) { String Customer_id = "C11012034"; context.setAttribute("CustID", Customer_id); System.out.println("Value is stored in ITestContext"); System.out.println("+++++++++++++++++++++++++++++++++++++++++++++++++"); } @Test public void Test1a(ITestContext context) { String Customer_id1 = (String) context.getAttribute("CustID"); System.out.println("In Test1, Value stored in context is: "+Customer_id1); System.out.println("+++++++++++++++++++++++++++++++++++++++++++++++++"); } @Test public void Test2a(ITestContext context) { String Customer_id1 = (String) context.getAttribute("CustID"); System.out.println("In Test2, Value stored in context is: "+Customer_id1); System.out.println("+++++++++++++++++++++++++++++++++++++++++++++++++"); } } Note: All the @Test methods that we are going to run must be in one class only.
https://www.softwaretestingo.com/itestcontext-testng/
CC-MAIN-2020-10
refinedweb
459
53.71
How to Use Pointers and Strings in C Programming The C programming language lacks a string variable, but it does have the char array, which is effectively the same thing. As an array, a string in C can be completely twisted, torqued, and abused by using pointers. It’s a much more interesting topic than messing with numeric arrays. How to use pointers to display a string You’re most likely familiar with displaying a string in C, probably by using either the puts() or printf() function. Strings, too, can be displayed one character a time by plodding through an array. HELLO, STRING #include <stdio.h> int main() { char sample[] = "From whence cometh my help?n"; int index = 0; while(sample[index] != ' ') { putchar(sample[index]); index++; } return(0); } The code shown in Hello, String is completely legitimate C code, valid to create a program that displays a string. But it doesn’t use pointers, does it? Exercise 1: Modify the source code from Hello, String, replacing array notation with pointer notation. Eliminate the index variable. You need to create and initialize a pointer variable. The while loop’s evaluation in Hello, String is redundant. The null character evaluates as false. So the evaluation could be rewritten as while(sample[index]) As long as the array element referenced by sample[index] isn’t a null character, the loop spins. Exercise 2: Edit the while loop’s evaluation in your solution for Exercise 1, eliminating the redundant comparison. Exercise 3: Continue working on your code, and this time eliminate the statements in the while loop. Place all the action in the while statement’s evaluation. For the sake of reference, the putchar() function returns the character that’s displayed. How to declare a string by using a pointer Here’s a scary trick you can pull using pointers, one that comes with a boatload of caution. Consider A Pointer Announces a String. A POINTER ANNOUNCES A STRING #include <stdio.h> int main() { char *sample = "From whence cometh my help?n"; while(putchar(*sample++)) ; return(0); } In A Pointer Announces a String, the string that’s displayed is created by initializing a pointer. It’s a construct that looks odd, but it’s something you witness often in C code, particularly with strings. (You cannot use this convention to initialize a numeric array.) Exercise 4: Copy the source code from A Pointer Announces a String in your editor. Build and run. The boatload of caution in A Pointer Announces a String, and anytime you use a pointer to directly declare a string, is that the pointer variable can’t be manipulated or else the string is lost. For example, in A Pointer Announces a String, the sample variable is used in Line 7 to step through the string as part of the putchar() function. Oops. If you wantto use sample later in the code, it would no longer reference the start of the string. When declaring a string by using a pointer, don’t mess with the pointer variable elsewhere in the code. The solution is to save the pointer’s initial address or simply use a second pointer to work on the string. Exercise 5: Fix the code in A Pointer Announces a String so that the sample variable’s value is saved before the while loop runs and is then restored afterward. Add a puts(sample) statement to the code after the while loop is done executing, to prove that the variable’s original address is restored. How to sort strings Taking what you know about sorting in the C language, you can probably craft a decent string-sorting program. Or, at minimum, you can explain how it’s done. That’s great! But it’s a lot of work. What’s better when it comes to sorting strings is not to sort the strings at all. No, instead, you sort an array of pointers referencing the strings. Sorting Strings, Initial Attempt shows an example. SORTING STRINGS, INITIAL ATTEMPT #include <stdio.h> int main() { char *fruit[] = { "apricot", "banana", "pineapple", "apple", "persimmon", "pear", "blueberry" }; char *temp; int a,b,x; for(a=0;a<6;a++) for(b=a+1;b<7;b++) if(*(fruit+a) > *(fruit+b)) { temp = *(fruit+a); *(fruit+a) = *(fruit+b); *(fruit+b) = temp; } for(x=0;x<7;x++) puts(fruit[x]); return(0); } Exercise 6: Type the source code from Sorting Strings, Initial Attempt into your editor. Build and run to ensure that the strings are properly sorted. Well, it probably didn’t work. It may have, but if the list is sorted or changed in any way, it’s an unintended consequence and definitely not repeatable. The problem is in Line 19. You can’t compare strings by using the > operator. You can compare individual characters and you could then sort the list based on those characters, but most humans prefer words sorted across their entire length, not just the first character. Exercise 7: Modify your source code, and use the strcmp() function to compare strings to determine whether they need to be swapped.
https://www.dummies.com/programming/c/how-to-use-pointers-and-strings-in-c-programming/
CC-MAIN-2019-39
refinedweb
849
64.41
Archive Functional programming and Declarative programming Imperative programming paradigm contains the description of each step need to be performed to solve a problem. This approach is also referred as an algorithmic approach to solve a given problem. Most well-known languages like c, c++ or java are imperative type programming languages. Functional programming is implementation of a certain task in the form of composition of different functions. In a very simple terms, we can understand the difference between the above two as Imperative languages describe “how to” accomplish a goal and functional programming describes “what to” aspect of it. That means imperative programming languages contain loops, conditional statements, methods whereas, functional paradigm contains functions (recursions). The main difficulty to transition to the functional programming is many are quite familiar with the declarative method of thinking style. Functional programming requires a different set of mind set to describe the problems in the form of functions. Many imperative languages (such as C++11) are providing ways and means to add the support for the functional programming paradigm to their language tool kit. This allows the developers to easily make use the best of the both worlds. The following are very few language constructs provided by C++ language (C++ 11) to facilitate the functional programming paradigm. auto key word: ex: auto a = 10; #include <iostream> #include <stdio.h> using namespace std; int main(void) { auto n = 10; cout << n << endl; getchar(); } Output: 10 #include <iostream> #include <stdio.h> using namespace std; int main(void) { auto str = "Hi!"; cout << str << endl; getchar(); } Output: Hi! The above code makes declaring a variable much cleaner and much easier. Lambda functions: (Source from) Characteristics: - Unnamed function object - Captures scope in its closures Syntax: [ capture-list ] ( params ) -> ret { body } [ capture-list ] ( params ) { body } [ capture-list ] { body } 1) Full declaration. 2) Declaration of a const lambda: the objects captured by copy cannot be modified. 3) Omitted trailing-return-type: the return type of the closure’s operator() is determined according to the following rules: 4) Omitted parameter list: function takes no arguments, as if the parameter list was (). mutable – allows body to modify the parameters captured by copy, and to call their non-const member functions exception – provides the exception specification or the noexcept clause for operator() of the closure type attribute – provides the attribute specification for operator() of the closure type capture-list – [] captures nothing params – The list of parameters, as in named functions, except that default arguments are not allowed(until C++14). if auto is used as a type of a parameter, the lambda is a generic lambda (since C++14) ret – Return type. If not present it’s implied by the function return statements ( or void if it doesn’t return any value) body – Function body #include <iostream> #include <stdio.h> using namespace std; int main(void) { auto min = [](int a, int b) { return a < b ? a : b; }; cout<< min(12, 34) << endl; getchar(); } Output: 12 In the above example: the closure is empty that means it is not capturing anything but only two parameters are passed, this function takes a and b and returns the minimum. Example: Caputure by value #include #include using namespace std; #include <iostream> #include <stdio.h> using namespace std; int main(void) { int divisor = 10; auto divideAllSum = [divisor](int a, int b) { return a / divisor + b / divisor; }; cout<< divideAllSum(30, 40) << endl; getchar(); } In the above example divisor was captured by value hence this value cannot be changed inside the lambda function. Output: 7 Example: Caputure by reference #include <iostream> #include <stdio.h> using namespace std; int main(void) { int divisor = 10; auto divideAllSum = [&divisor](int a, int b) { divisor = 5; return a / divisor + b / divisor; }; cout<< divideAllSum(30, 40) << endl; cout << "divisor : " << divisor << endl; getchar(); } Output: 14 divisor : 5 In the above example no matter what the value being sent to the function the function is changing it to 5 and returning the value. Capture all the scope as reference: #include <iostream> #include <stdio.h> using namespace std; int main(void) { int divisor = 10; auto divideAllSum = [&](int a, int b) { divisor = 5; return a / divisor + b / divisor; }; cout<< divideAllSum(30, 40) << endl; cout << "divisor : " << divisor << endl; getchar(); } In the above case entire the calling function scope has been captured as a reference hence it is possible to modify the divisor variable to 5. Capture whole scope as a value: #include <iostream> #include <stdio.h> using namespace std; int main(void) { int divisor = 10; auto divideAllSum = [=](int a, int b) { return a / divisor + b / divisor; }; cout<< divideAllSum(30, 40) << endl; getchar(); } In the above example “=” is part of closure signifies that the whole scope has been passed as a value hence can use all the variables part of the calling scope. Using lambda functions as an argument to functions: #include <iostream> #include <stdio.h> #include <vector> #include <algorithm> #include <functional> using namespace std; int main(void) { vector<int> l; for (auto i = 11; i >= 0; i--) l.push_back(i); sort(l.begin(), l.end(), [](int a, int b) { return a >= 5? (a < b) : a > b; }); auto print = [](vector<int> myl) {for (auto i = myl.begin(); i != myl.end(); i++) cout << *i << " "; }; print(l); getchar(); } Output: 5 6 7 8 9 10 11 4 3 2 1 0 In the above example: To sort function the lambda function is passed as our own comparator. Here documents in bash In my day to day work oftentimes I work with C, Bash scripting and Cross compiling for ARM and other interesting stuff. I feel programming is an art so as bash scripting. There are many things we come across after seeing the scripts written by highly experienced bash programmers. One of such concept was a here document. In simple terms, here documents can be treated as multiple line input provided to a command in a shell script. Let us discuss here documents with simple examples: Example 1: wc -l << COUNT_LINES Vasanth Raja Chittampally !! COUNT_LINES In the above example the start and end of the here document are specified with “COUNT_LINES“. Input to “wc -l” command is the three lines in between the COUNT_LINES. As expected the output to this is “3“. Example 2: Oftentimes I see the here documents used in conjunction with cat. This is because when we want to print set of lines to the terminal, this comes in handy. For example cat << USAGE_HELP The invoked script name: $0 This invoked script direcotry: `dirname $0` This is a second line USAGE_HELP Few important points are 1. In here documents we can substitute command outputs. In the above example $0 and `dirname $0` are replaced with the command outputs. 2. All the text in the here document will be printed exactly to the standard output. That means leading tabs, spaces are preserved exactly. Suppose if you wish to remove the leading tabs from the here document, the same can be achieved in the following manner cat << – USAGE_HELP The invoked script name: $0 This invoked script direcotry: `dirname $0` This is a second line USAGE_HELP Just use a ‘-‘ symbol just after the symbol “<<“. In a nutshell, here documents comes very hand when you want to give multiple line input to a command. Moreover, here documents are simple to use and easy to understand. vim backspace issue quick fix vim is one of the extraordinary editors that I’ve ever used. One of the best thing I enjoy is the total customization it offers from fonts to key bindings, backgrounds and a lot more. Recently I’ve came across backspace not working on vim editor even after the backspace is set in ~/.vimrc. When I first started using vim I’ve encountered this issue and the quick thing which fixed is the following line. set backspace=2 you can find more description in the vim tips here. Even after adding this I’ve found my colleagues facing the same issue. Then the quick fix was to add the following line in your .vimrc. The above line has proved proper solution for many. But I’ve seen cases where even after having those lines backspace prints “^?”. This was rather annoying to many. one other quick fix for this is as follows stty erase ^? Run the above command on the shell and try appending the above two lines in your ~/.vimrc as follows set backspace=2 set backspace=indent,eol,start Enjoy editing in vim. If you find any other better fix for this issue please leave a comment below. Thanks for reading this post. Different types of data types There actually different types of datatypes as the programming languages specify them. Here I’m going to specify four types of them, Almost all predominant languages should have datatypes belonging to the following four kinds of datatypes 1) Statically typed language 2) Dynamically typed language 3) Strongly typed language 4) Weakly typed language Statically typed language: In this kind of languages variables should be assigned types at the compile time itself( and fixed at compile time). All the variables must be used before using them. ex: Java, C, C++ ect., Dynamically typed language It is exactly opposite to that of statically typed language. The type of the variable is decided at the runtime rather than at compile time. ex:VBscript, Python Strongly typed language These kind of typed language don’t allow you to change the type of a variable once the type is decided(at runtime or at compile time). Once a variable is declared as String in Java during runtime you cannot convert the type of the variable without explicit type conversion. ex: Python, Java Weakly typed language This type of languages are exactly opposite to that of Strongly typed languages, you can change the type of the variable at anytime during the execution of the program. ex: VBScript) Sorting is easy!!! One of the most common questions in many interviews is the sorting techniques. There are many sorting techniques, I am going to explain some of them in the coming posts. We will start off with, the basic and simplest of all the sorting techniques, Bubble sort Bubble Sort Method: Bubble sort compares two adjacent elements and swaps if the first element is greater than the second one. After every pass the largest element in that pass bubbles up to the last position. Example: suppose the array elements are 5 7 8 3 2 pass1: 5 7 8 3 2 (compare 5 & 7) (No Exchange) 5 7 8 3 2 (compare 7 & 8 ) (No Exchange) 5 7 8 3 2 (compare 8 & 3) (Exchange) 5 7 3 8 2 (compare 8 & 2) (Exchange) 5 7 3 2 8 (End of the pass1, the largest element is in its corresponding position the array) pass2: 5 7 3 2 8 (compare 5 & 7) (No Exchange) 5 7 3 2 8 (compare 7 & 3) (Exchange) 5 3 7 2 8 (compare 7 & 2) (Exchange) 5 3 2 7 8 (End of the pass2, the largest element in the pass2 is bubbled up to its corresponding position) pass 3: 5 3 2 7 8 (compare 5 & 3) (Exchange) 3 5 2 7 8 (compare 5 & 2) (Exchange) 3 2 5 7 8 (End of the pass3, the largest element in this pass is bubbled up to its corresponding position) pass 4: 3 2 5 7 8 (compare 3 &2 ) (Exchange) 2 3 5 7 8 (End of the pass4, all elements sorted) Algorithm #include<stdio.h>#include<conio.h>int main(void){int a[]={1,12,2,11,3},i,j,n=5,temp;for(i=0;i<n-1;i++){for(j=0;j<n-i-1;j++){if(a[j]>a[j+1]){temp=a[j];a[j]=a[j+1];a[j+1]=temp;}}}for(i=0;i<n;i++)printf(“%d “,a[i]);getch();return 0;} compiled on: dev c++, gcc, Turbo C Analysis: Worst Case: O(N2) Average Case: O(N2)
https://vasanthexperiments.wordpress.com/tag/programming-2/
CC-MAIN-2017-51
refinedweb
1,989
58.21
To increse the score of the model we need the dataset that has high variance, so it will be good if we can select the features in the dataset which has variance more than a fix threshold. This data science python source code does the following: 1. Uses Variance for selecting the best features. 2. Visualizes the final result So this is the recipe on how we can do variance thresholding in Python for feature selection. Get Closer To Your Dream of Becoming a Data Scientist with 70+ Solved End-to-End ML Projects from sklearn import datasets from sklearn.feature_selection import VarianceThreshold We have only imported datasets to import the inbult dataset and VarienceThreshold. We have imported inbuilt iris dataset and stored data in X and target in y. We have also used print statement to print first 8 rows of the dataset. iris = datasets.load_iris() X = iris.data print(X[0:7]) y = iris.target print(y[0:7]) Explore More Data Science and Machine Learning Projects for Practice. Fast-Track Your Career Transition with ProjectPro We have created an object for VarianceThreshold with parameter threshold in which we have to put the minimum value of variance we want in out dataset. Then we have used fit_transform to fit and transform the dataset. Finally we have printed the final dataset. thresholder = VarianceThreshold(threshold=.5) X_high_variance = thresholder.fit_transform(X) print(X_high_variance[0:7]) So in the output we can see that in final dataset we have 3 columns and in the initial dataset we have 4 columns which means the function have removed a column which has less variance. []] [0 0 0 0 0 0 0] [[5.1 1.4 0.2] [4.9 1.4 0.2] [4.7 1.3 0.2] [4.6 1.5 0.2] [5. 1.4 0.2] [5.4 1.7 0.4] [4.6 1.4 0.3]]
https://www.projectpro.io/recipes/do-variance-thresholding-in-python-for-feature-selection
CC-MAIN-2021-39
refinedweb
318
71.75
Regex ure library is broken? These 3 lines blow up my Wipy due perhaps to a bug in the ure module? The WiPy does an enormous stack trace / memory dump. I managed to hit reset quickly enough to see the output, and after a long time mucking around with my code, I'm pretty sure the problem is the built-in "re" or "ure" module. [This link is one line of an RSS feed for Car Talk, by the way] import re line = b' <enclosure url="" length="26203572" type="audio/mpeg"/>\n' re.search(r'"http(s)?://.*.mp3"', line) +++++ /Users/danicampora/Code/Pycom/esp-idf/components/freertos/./heap_regions.c:368 (vPortFreeTagged)- assert failed! abort() was called at PC 0x400841ef Guru Meditation Error of type LoadProhibited occurred on core 0. Exception was unhandled. Register dump: ..... @paul12345 : I do not know why the implementation is recursive. Maybe it always was, or it's the most elegant solution. About the double quote: No, it does not work on WiPy if you drop the double quote at the end of the search string. But you will have a match on machines with more memory, simply because the sample string (line) you are searching at does not contain the pattern mp3". B.T.W.: Shortening the string in line works on WiPy. Regards, Robert Haha, that's unfortunate. Why is there a recursive implementation for a memory-limited platform? I suppose the answer to that is too involved for this forum. Are you suggesting I omit the final double-quote and it will work? Because it seems like even that might fail for some strings or runtime conditions if there is less free memory available. Is that a correct understanding? Probably I'll just omit the re module entirely and do a work around with more basic string operations. Thanks, Paul Hello @paul12345. You example obviously fails, and the reason is most likely a stack overflow. The string is too long for that kind of search expression. If you look around about this topic, you'll find these things reported several times. Trying that example on linux micropython returns a match, at least if you change your search expression to: re.search(r'"http(s)?://.*.mp3', line) The stack on these machines is limited, and the ure lib turns out to be recursive. I agree that it should run in a proper error state instead of drifting away. Look also here:
https://forum.pycom.io/topic/775/regex-ure-library-is-broken
CC-MAIN-2018-26
refinedweb
406
75.4
Create a Bluetooth HC-06 Module With Arduino Create a Bluetooth HC-06 Module With Arduino Learn how to wire and program a module to connect to Bluetooth so you can send and receive data. Join the DZone community and get the full member experience.Join For Free A little bit ago I grabbed a cheap HC-06 Bluetooth transceiver for $6 on Amazon for my electronics project. It was fairly simple to set up but I did run into a few hitches and a (lack of information to fix them), so I’m going to detail some of my experience for you so you can hopefully avoid the same pitfalls. Here’s a cheap one for $6.50 on Amazon. Note that there are many sellers on Amazon and Ebay selling HC-06s as HC-05s so if you can’t get AT commands to work or only basic ones work then you probably don’t have a HC-05 module. Step #1: Wiring it Up The first step, of course, is to wire up the Bluetooth leads to your Arduino pins. RX goes to your Arduino TX and TX goes to your Arduino’s RX — remember, they’re opposite because the Bluetooth chip is sending on TX, so the Arduino receives that on a RX pin. VCC/3.3V goes to 3.3V — not 5V: Using 5V is likely to damage your Bluetooth chip, but it could probably stand it for a brief moment if you do accidentally connect it to 5V. Once it’s connected and you turn on your Arduino, an LED on the Bluetooth board should start blinking. Step #2: Setup Code and AT Command Configuration Next, we need to write some code so we can use this thing. Bluetooth devices can be configured with these various AT commands and SoftwareSerial, a standard Arduino library takes care of communication for us quite neatly. Note the cheaper Bluetooth boards like the one I’m using are usually HC-06, not HC-05. HC-05 has more AT commands — and also some different ones — that will not work for yours. HC-06 will also not work if you include new line characters. There’s a bunch of example code out there that includes newline characters and also unsupported AT commands that will not work at all for your HC-06 firmware. So below is some simple setup code that initializes the Bluetooth device and tells it to change its broadcast name to My-Sweet project. Note the delays in sending the AT config commands. In a HC-05 user guide I ran into, the delays are stated as necessary, and it appeared to be flakier without them. #include <SPI.h> #include <EEPROM.h> #include <TouchScreen.h> #define BLUETOOTH_RX 10 #define BLUETOOTH_TX 11 SoftwareSerial BT(BLUETOOTH_RX, BLUETOOTH_TX); void setup(void) { // int passed here should match your Bluetooth's set baud rate. // Default is almost always 9600 BT.begin(9600); delay(500); BT.print("AT"); delay(500); BT.print("AT+VERSION"); delay(500); // renames your BT device name to My-Sweet-Project String nameCommand = "AT+NAME" + "My-Sweet-Project"; BT.print(nameCommand); delay(500); if (BT.available() > 0) Serial.println(BT.readString()); } Here’s a table of HC-06 AT commands (which is quite limited compared to HC-05) Remember, HC-06 AT commands have no line endings at the end and no spaces between the command name and the input. It's AT+NAMEYOUR_NAME, not AT+NAME YOUR_NAME. If you see a command like AT+ORGL or AT+UART, the code you have is for HC-05. Also note that the higher baud rates are unlikely to work well enough for these cheap devices and are not necessary for many types of projects. Furthermore, the cheap ones have a tendency of getting stuck on the baud rate if you try to change it and won’t let you change the baud rate back to the original. I got mine stuck this way when testing out AT+BAUD, and I also initially forgot to change the software serial baud rate to the new one. Remember to change it if you do use AT+BAUD. Step #3: Sending Data Back and Forth Next, we should test out sending and receiving data. I suggest grabbing BlueTerm from the Android Play Store. It’s a good ad-free app for sending and receiving raw Bluetooth data and is good for debugging. After you connect your device to a phone or something else, the LED should stop blinking and just remain steady. Note: It will only do this if connected in an app, not if it’s just simply paired. So to send data, you just call print on your SoftwareSerial object — perhaps in your main loop: void loop(void) { BT.print("Hello Bluetooth!"); BT.flush(); delay(2000); } If you have your device paired with BlueTerm correctly, you should start seeing, “Hello Bluetooth!” repeated every 2 seconds. To receive, use code like that below and type in a message in BlueTerm or other Bluetooth app: void loop(void) { BT.print("Hello Bluetooth!"); BT.flush(); delay(2000); if (BT.available() > 0) { Serial.println("Message recieved!"); Serial.println(BT.readString()); } } Other Notes For a more complete implementation, you can take a look at my Bluetooth project on GitHub here. The project includes methods for sending data in JSON format with the Bluetooth transceiver in the BluetoothUIController.cpp file and also includes a companion Android app that receives those JSON messages and can send commands back down. Thank you for reading! Published at DZone with permission of Maddie Abboud . See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/bluetooth-hc-06-module-with-arduino
CC-MAIN-2018-30
refinedweb
958
72.46
Difference between revisions of "DDC/ReleaseNotes-Alpha1.1" From HaskellWiki Revision as of 08:32, 3 July 2008 New features vs Alpha1 - Support for linux-x86_64 and darwin-x86_64 build targets, thanks to Jared Putnam. - The parser has been completely rewritten using parser combinators. - The -make flag now does a full dependency driven build/rebuild. This also descends into the base libraries, which makes them much easier to work on. - Preliminary support for constructor classes, which is used to implement Monad, Functor and Foldable. many1 :: Parser tok a -> Parser tok [a] many1 parser = do x <- parser rest <- many parser pReturn (x : rest) - Unboxed boolean type Bool#and constants true#and false#. - Field punning makes adding projections to a data type a breeze: data Set a = .. project Set with { size; toList; ... } size = ... toList = ... main = ... someSet.size ... - The offside rule now applies to import lists, ie: import Data.List Data.Maybe Data.Set - More example code, including a 2D collision detection system and a simple raytracer Known Issues Being an alpha release there is enough implemented to write some programs, but there are also a few missing features and some other things to look out for: -)
https://wiki.haskell.org/index.php?title=DDC/ReleaseNotes-Alpha1.1&diff=21567&oldid=prev
CC-MAIN-2020-34
refinedweb
194
54.73
Key:name The primary tag used for naming an element. Note that OSM follows the On the Ground Rule. Names recorded in name=* tag are ones that are locally used, especially ones typically signposted.[1]] Values See below for the main values. This table is a wiki template with a default description in English. Editable here. Key variants can be suffixed with date namespace suffix (such as "old_name:en:1921-1932"). See also - name:etymology=* - Multilingual names - noname=* - strapline=* - unnamed=* - Key:description to describe a feature. Footnotes - ↑ - OSMF official statement summarizing the situation of mapping disputed boundaries, borders, names and descriptions - ↑ For example gift shops in Bethlehem may have names in English, but name tag for Bethlehem town will certainly not be in English
https://wiki.openstreetmap.org/wiki/Key:name:kk?uselang=en-US
CC-MAIN-2020-10
refinedweb
124
56.66
Stores geometry of ray-tracing scene. More... #include <OpenGl_SceneGeometry.hxx> Stores geometry of ray-tracing scene. Creates uninitialized ray-tracing geometry. Releases resources of ray-tracing geometry. Makes the OpenGL texture handles resident (must be called before using). Adds new OpenGL texture to the scene and returns its index. Returns maximum depth of bottom-level scene BVHs from last build. Clears ray-tracing geometry. Reimplemented from BVH_ObjectSet< Standard_ShortReal, N >. Clears only ray-tracing materials. Returns offset of triangulation elements for given leaf node. If the node index is not valid the function returns -1. Checks if scene contains textured objects. Releases OpenGL resources. Makes the OpenGL texture handles non-resident (must be called after using). Returns array of texture handles. Returns triangulation data for given leaf node. If the node index is not valid the function returns NULL. Updates unique 64-bit texture handles to use in shaders. Returns offset of triangulation vertices for given leaf node. If the node index is not valid the function returns -1. Global ambient from all light sources. Value of invalid offset to return in case of errors. Array of 'front' material properties. Maximum number of textures used in ray-tracing shaders. This is not restriction of the solution implemented, but rather the reasonable limit of the number of textures in various applications (can be increased if needed). Maximum depth of bottom-level scene BVHs from last build. Array of unique 64-bit texture handles obtained from OpenGL. Array of texture maps shared between rendered objects. Sampler object providing fixed sampling params for texures. Array of properties of light sources.
https://www.opencascade.com/doc/occt-7.1.0/refman/html/class_open_gl___raytrace_geometry.html
CC-MAIN-2020-16
refinedweb
266
54.69
User talk:Doru001/Pacman - An Introduction Rolling release terms When I started with arch linux I was disappointed by the lack of a friendly introduction to pacman. As a result, I began to complete the wiki on pacman with these simple statements which can help a beginner to find his way. They have been readily removed from there, because they were for beginners. We came to a compromise, and I wrote this page, which I wish I could find easily myself when I started using pacman. There is no explanation on the fundamental "rolling archive" concept in the main Pacman article or in the pacman manual. I don't know if this is on purpose or not, but it renders the arch package management system incomprehensible to a newcomer. The words "rolling" and "archive" do not even appear in the pacman manual. The first "hint" on the wiki page is found in paragraph 2.3 (close to the end), which mysteriously mentions "the nature of Arch's rolling release approach". You have been warned. The wiki page begins with a long introduction about pacman configuration, which I still did not need until today, after years of usage. The discussion which led to this compromise of course has been deleted. My introductory article of course is not mentioned in the list of related articles on Pacman page. And now of course you want to delete it all, because it doubles the main Pacman page. In my opinion, when one presents geometry, it begins with assumptions like "two different points determine a straight line". It is for beginners, and that is the precise reason why it should stand at the beginning of the presentation. And when something is more complicated, it is emphasized, not overlooked. The consequences of using -Syyuu are still unclear to me today. Doru001 (talk) 11:41, 28 July 2015 (UTC) - After a discussion was closed, it is deleted three days after for maintenance purposes, not censorship. See Help:Discussion. Nothing prevents you from opening a new one (or even restore the old one from history). - As to "rolling release", this term is explained on Wikipedia: Wikipedia:Rolling release. There is an iniative to merge the defining points of Arch (including "Rolling release") in one article; feel free to voice your opinion in Talk:The Arch Way. - If the pacman article structure is unclear, then improve the article, not a separately maintained duplicate. -- Alad (talk) 15:23, 28 July 2015 (UTC) - I can't continue a discussion forever. Go to history, and if you have new arguments, recover the history and answer. - The rolling release should be the first paragraph in pacman. If you want to use an external reference, that is great, but it should be very visible, because any newcomer comes to pacman wiki to understand the manual. - The structure of pacman is as it is as a result of the fact that experienced users won the argument against beginners like me. They were not interested in clarity or logic, but in some details which they knew less. There was a danger of an editing war, and I just backed down. Doru001 (talk) 12:00, 1 August 2015 (UTC) - > The rolling release should be the first paragraph in pacman. - No, it shouldn't. See my answer below, this belongs at most to Official repositories. -- Lahwaacz (talk) 13:48, 1 August 2015 (UTC) - It should, because the way pacman functions can not be understood or even explained without knowing the way the repository is maintained. The statements about pacman do not make sense unless you know how the repository is maintained. The pacman page has no meaning unless you know how the repository is maintained. So you should mention at the beginning how the repository is or could be maintained. And you should discuss the strange possible consequences of the way the repository is maintained. People should know that before they use pacman. A system can become inconsistent if you just install one package without updating the whole system. And the structure of the page should be logical, it should explain how pacman works: install, remove, list packages. The config should come last. And the link to my page should be removed after we discuss it. Doru001 (talk) 22:54, 2 August 2015 (UTC) - I'm sure that you can notice that there is some intimate relation between pacman and the arch repository. The pacman page is not there to describe pacman alone, but to help people to learn fast and easy how to use pacman on arch. This explanation has sense only by describing the interaction between pacman and the arch repository. pacman alone has no meaning - it is just meaningless code. When you explain how something works you need to explain how does it interact with other things around it. And you should emphasize relevant information, like the danger of using pacman -Sy firefox instead of pacman -Syu firefox. The article hides this after screens of configuration and before screens of troubleshooting. And it does not explain it. Of course the way pacman actually works is explained somewhere on the Internet, but I believe that it should be explained here. For example, basic things like how to install, remove, or list packages. Doru001 (talk) 23:03, 2 August 2015 (UTC) - Explaining geometry or anything to beginners/students is usually done by presenting from a textbook, which is a monolithic compilation of (sub)topics related to the main topic, with very few links to other resources. This is not what ArchWiki tries to be, there are few pages of the "monolithic compilation" style, and more importantly, the pages (corresponding to e.g. chapters of the textbook) are not ordered. Instead, there are many links to other pages and external resources when it is necessary to reference other topic. - The pacman page is about pacman, nothing else. The role of Arch's official repositories is described in a separate article, as well as the meaning of "rolling release" concept and other core principles -- these are described in The Arch Way and Arch Linux, both linked from the Main page. I think you agree that we can't link these two from the "Related articles" box of every other page on the ArchWiki. - Further, the "rolling release" model is not relevant for the description of a package manager, it can be implemented using any package manager (see this list), and reversely, pacman can be used in non-rolling distributions (e.g. LinHES). Moreover, it seems that the "rolling archive" term was made up by yourself (otherwise please share a reference). - -- Lahwaacz (talk) 18:06, 28 July 2015 (UTC) - The fact that the information is no longer monolithic is very good. The fact that the logic is no longer maintained, and you can't see why the concept of "rolling archive" should be referenced from the pacman wiki page, is very bad. Doru001 (talk) 12:00, 1 August 2015 (UTC) - No, it is not explained on The Arch Way or Arch Linux. It is explained here. There there are some vague comments which have nothing to do with the "rolling nature" of arch repositories. Not to mention its strange consequences, which should be discussed. Also, no beginner in trouble when he wants to use pacman will go to those pages, or even to the main page. He will go to Pacman. Doru001 (talk) 12:35, 1 August 2015 (UTC) - I never ever could possibly invent terminology like "rolling archive". In fact, "rolling" is used on the pacman page, and arch comes from archive. I simply do not speak English well enough, and I just did not know what hit me when I moved to arch, to imagine such phrases. And I was not alone. People using arch for years were in trouble, with dysfunctional systems, because of this. I believe that I found it somewhere in bbs. In fact, I remember that I posted a lot on forums to understand what I wrote here, but I can't find those discussions now. Anyway, probably back then the terminology was not established, and I was prevented from defining it on the pacman page (but nobody contested it up to you), so today maybe some changes appeared, and it is a good idea to blame the lack of clarity on me ... :) Doru001 (talk) 12:20, 1 August 2015 (UTC) - I believe that you can find it here: rolling nature of the arch archive -> rolling archive. - The "rolling nature of the arch archive" occurs only in your post. The term repository is not synonymous to "archive", although certain repositories such as CPAN act as such. Arch repositories are certainly not an archive, there is only the latest version of a package in a repository. On the other hand, there is also the Arch Linux Archive, but it is unrelated to "rolling [release]" (and it is an unofficial project). - If you want this discussion to get somewhere, let's start from the beginning. So far you've only indicated that there is something missing, because you (or others who "used Arch for years") had some problems some time ago, but don't remember when or where you (or the others) posted about it. This post from the only thread you provided us indicates that everything is in order with respect to the two-year-old "problem". - -- Lahwaacz (talk) 13:48, 1 August 2015 (UTC) - I'm sure that you have a better name for "rolling archive" but you keep it secret. - - You should read back from there because the discussion was long. - Doru001 (talk) 22:37, 2 August 2015 (UTC) Moderation note The former Pacman - An Introduction page has been moved to User:Doru001/Pacman - An Introduction on 11 October 2015 and reverted to the last personal revision pinpointed by User:Doru001 in this discussion. All contributions until 1 October 2015 have been merged into the pages in the Main namespace. Please revise the proposal with respect to the arguments of the Maintenance Team (see #Rolling release terms and Talk:Pacman#Don.27t_rush_updates) before pushing the changes in the future. -- Lahwaacz (talk) 21:17, 11 October 2015 (UTC)
https://wiki.archlinux.org/index.php?title=User_talk:Doru001/Pacman_-_An_Introduction&oldid=545029
CC-MAIN-2019-35
refinedweb
1,694
61.97
Microsoft .NET Glossary Note that terms that the terms "Microsoft" and ".NET" were generally not included when alphabetizing terms. | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | Other | A Abstract IL (ILX)A toolkit for accessing the contents of .NET Common IL binaries. Among its features, it lets you transform the binaries into structured abstract syntax trees that can be manipulated. Acceleration Server 2000See Internet Security and Acceleration Server 2000. Access modifiersLanguage keywords used to specify the visibility of the methods and member variables declared within a class. The five access modifiers in the C# language are public, private, protected, internal, and protected internal. Acrylic Codename for an innovative illustration, painting and graphics tool that provides creative capabilities for designers working in print, web, video, and interactive media. Active Server Pages (ASP)A Microsoft technology for creating server-side, Web-based application services. ASP applications are typically written using a scripting language, such as JScipt, VBScript, or PerlScript. ASP first appeared as part of Internet Information Server 2.0 and was code-named Denali. ADO (ActiveX Data Objects)A set of COM components used to access data objects through an OLEDB provider. ADO is commonly used to manipulate data in databases, such as Microsoft SQL Server 2000, Oracle, and Microsoft Access. ADO.NET (ActiveX Data Objects for .NET)The set of .NET classes and data providers used to manipulate databases, such as Microsoft SQL Server 2000. ADO.NET was formerly known as ADO+. ADO.NET can be used by any .NET language. AeroThe code name for the user experience provided by Microsoft's Longhorn Operating System. API (Application Program Interface)A set of programs, code libraries, or interfaces used by developers to interact with a hardware device, network, operating system, software library, or application. Calls to the methods of an API are typically synchronous, but may also be asynchronous through the use of callback methods. Application assembly cacheSee Local assembly cache. Application baseThe directory where a .NET application's assembly files are stored. Also called the application folder or application directory. Application Center 2000A deployment and management package for Web sites, Web services, and COM components. Application Center is a key B2B and B2C component of the .NET Enterprise Server product family. Application domainThe logical and physical boundary created around every .NET application by the. Application ManifestThe part of an application that provides information to describe the components that the application uses. ArrayA collection of objects of the same type, all of which are referenced by a single identifier and an indexer. In the .NET Framework, all arrays inherits from the Array class that is located in the System namespace. AsmLAn Abstract State Machine Language. ASP.NET (Active Server Pages for .NET)A set of .NET classes used to create Web-based, client-side (Web Form) and server-side (Web Service) applications. ASP.NET was derived from the Microsoft Active Server Pages (ASP) Web technology and adapted for use in the .NET Framework. Also called managed ASP and formerly known as ASP+. AssemblyAll of the files that comprise a .NET application, including the resource, security management, versioning, sharing, deployment information, and the actual MSIL code executed by the CLR. An assembly may appear as a single DLL or EXE file, or as multiple files, and is roughly the equivalent of a COM module. See assembly manifest, private assembly, shared assembly. Assembly Binding Log ViewerA .NET programming tool (Fuslogvw.exe) used to view and manipulate the log of binding information that is updated at run-time when an assembly is loaded by the CLR. This log viewer is primarily used to discover why an assembly (or satellite assembly) can't be located at runtime, and to verify that the correct assemblies are being loaded by a .NET application. Assembly cacheA reserved area of memory used to store the assemblies of a .NET applications running on a specific machine. See Global Assembly Cache, Local assembly cache, Download Cache. Assembly Cache ViewerA .NET programming tool (Shfusion.dll) used to view, add, remove and configure information in the Global Assembly Cache using Windows Explorer. This viewer is used by clicking on the %WINDIR\Assembly folder in Windows Explorer. See Global Assembly Cache Utility. Assembly Dependency ListA .NET programming tool (ADepends.exe) used to display all of the assemblies that a specific assembly is dependent upon. Assembly informational versionA custom attribute that attaches version information to an assembly in addition to the assembly's version number. The informational version is a string that typically contains marketing information, such as the product's name and release number (e.g., "Windows 2000 Server" or "FantastiWidget 3.0"). Assembly Linking UtilityA .NET programming tool (al.exe) used to create an assembly manifest from the specified MSIL modules or resource files. Also call the Assembly Linker and Assembly Generation Utility. Assembly manifestA detailed description of the contents of an assembly. A manifest contains metadata describing the name, version, types, and resources in the assembly, and the dependencies upon other assemblies. The manifest allows an assembly to be self-describing, easily deployed, and not bound to a particular system by storing information in the Windows registry. Assembly metadataThe metadata stored in assembly files. Assembly Registration ToolA .NET programming tool (RegAsm.exe) used to register an assembly in the Windows registry. Registration is required if COM clients need to call managed methods residing in a .NET assembly. This tool can also be used to generate a registry (.reg) file containing the necessary registration information. Registration typically only occurs once when the assembly is installed. Assembly version numberPart of an assembly's identity, and used to indicate the version, revision, and build of an assembly. The version is expressed in dot notation using four, 32-bit integers in the format "<major version>.<minor version>.<build number>.<revision>". The version number is stored in the assembly manifest and only refers to the contents of a single assembly Two assemblies that have version numbers which differ in any way are considered by the CLR to be completely different assemblies. See Assembly informational version. "Atlanta"The code-name of an antivirus product being developed by Microsoft. (Named after the home town of one of the product's developers). "Asta"A project investigating algorithms for detecting cloned code. Attribute-based programmingA programming model that allows flexibility in the behavior of a program not possible in traditional API call-based programming. Custom attributes add metadata to give classes extra information that extend the definition a types' behavior. The attribute's values are determined by programmers at design time, and can be reconfigured at runtime by users and other programs without the need for code changes or recompilation. See Reflection. AttributesLanguage constructs that are used by programmers to add additional information (i.e., metadata) to code elements (e.g., assemblies, modules, members, types, return values, and parameters) to extend their functionality. See Custom Attributes. AvalonThe code name for Windows Presentation Foundation (WPF), which is the graphical subsystem (User Interface framework) of Longhorn. It is worth noting that this will be a vector-based system. B B2BBusiness-to-Business. The exchange of information between business entities. B2CBusiness-to-Consumer. The exchange of information between business and consumer (i.e., customer) entities. BackOffice Server 2000A suite of Microsoft servers applications used for B2B and B2C services. Included in this suite are Windows 2000 Server, Exchange Server 2000, SQL Server 2000, Internet Security and Acceleration Server 2000, Host Integration Server 2000, and Systems Management Server 2.0. These server applications are now referred to as the .NET Enterprise Server product family. Base classThe parent class of a derived class. Classes may be used to create other classes. A class that is used to create (or derive) another class is called the base class or super class. See Derived Class, Inheritance. Behave!A project for building tools to checking things such as deadlock freedom, invariant checking, and message-understood properties in behavior properties of asynchronous, message-passing programs. BizTalk Server 2000A set of Microsoft Server applications that allow the integration, automation, and management of different applications and data within and between business organizations. BizTalk Server is a key B2B component of the .NET Enterprise Server product family. BoxingConversion of a value type to a reference type object (i.e. System.Object). Value types are stored in stack memory and must be converted (i.e., boxed) to a new object in heap memory before they can be manipulated as objects. The methods, functions, and events of the new object are invoked to perform operations on the value (e.g., converting an integer to a string). Boxing is implicitly performed by the CLR at runtime. See Unboxing. Built-in TypesSee Pre-defined types. BurtonThe codename for Microsoft Visual Studio 2005 Team System. C Cω (C-Omega)An experimental programming language actually an extension to C# that focuses on distributed asynchronous concurrency and XML manipulation. This is a combination of research projects that were formally known as polymorphic C# and Xen (and X#). C# (C-Sharp)An object-oriented and type-safe programming language supported by Microsoft for use with the .NET Framework. C# (pronounced "see-sharp") was created specifically for building enterprise-scale applications using the .NET Framework. It is similar in syntax to both C++ and Java and is considered by Microsoft as the natural evolution of the C and C++ languages. C# was created by Anders Hejlsberg (author of Turbo Pascal and architect of Delphi), Scot Wiltamuth, and Peter Golde. C# is defined by the standard ECMA-334. Callback MethodA method used to return the results of an asynchronous processing call. Typically, methods are called in a synchronous fashion, where the call does not return until the results (i.e., the output or return value) of the call are available. An asynchronous method call returns prior to the results, and then sometime later a callback method is called to return the actual results. The callback method itself contains program statements that are executed in response to the reception of the results. Also referred to as a callback function under the Win32 API. See Event. Casting). CatchingTo trap a program exception. See try/catch block. ClassIn .NET languages, classes are templates used for defining new types. Classes describe both the properties and behaviors of objects. Properties contain the data that are exposed by the class. Behaviors are the functionality of the object, and are defined by the public methods (also called member functions) and events of the class. Collectively, the public properties and methods of a class are known as the object interface. Classes themselves are not objects, but instead they are used to instantiate (i.e., create) objects in memory. See structure. Class membersThe elements of a class which define it behaviors and properties. Class members include events, member variables, methods, constructors, and properties. Also called type members. ClickOnceA deployment technology introduced with the release of Whidbey that allows client program to be used and installed as seamless as Web applications. This includes the ability to download files to be installed, versioning, side-by-side installation, and more. ClientAny application that requests information or services from a server. See Client/Server architecture. Client-sideAn operation or event that occurs on a client system. Examples include client-side scripting, client-side validation, and client-side events. See Server-side. Client/Server architectureAn application architecture in which the server dispenses (or serves) information that is requested by one or more client applications. In the 2-tier client/server model, the client contain the user interface and business logic, and the server contains the database engine and information storage. In the 3-tier model, the business logic is located on a middle-tier server to reduce the processing load on the database server and to make system maintenance easier. The number of users that can be supported by a client/server system is based on the bandwidth and load of the network and processing power of the server. See Distributed architecture. CLR DebuggerA .NET programming tool (DbgClr.exe) used as a Windows-based, source-level debugging utility for MSIL applications. See Runtime Debugger. CLR Minidump ToolA .NET programming tool (Mscordmp.exe) used to produce a mini-dump image file (i.e., a core dump) of the CLR at runtime. This tool is used to examine runtime problems by taking a snapshot of the CLR as the problems occurs. Windows automatically invokes the CLR Minidump Tool prior to running the Dr. Watson utility (Drwatson.exe). Code Access Security (CAS)The common language runtime's security model for applications. This is the core security model for new features of the Longhorn Operating System. CollectionA class used to logically organize a group of identical types using a single identifier. Examples of collection types in the .NET Framework include array, arraylist, queue, and stack. COM (Component Object Model)A software architecture developed by Microsoft to build component-based applications. COM objects are discrete components, each with a unique identity, which expose interfaces that allow applications and other components to access their features. COM objects are more versatile that Win32 DLLs because they are completely language independent, have built-in interprocess communications capability, and easily fit into an Object-Oriented program design. COM was first released in 1993 with OLE2, largely to replace the interprocess communication mechanism Dynamic Data Exchanged (DDE) used by the initial release of OLE. See COM+. COM+The "next generation" of the COM and DCOM software architectures. COM+ (pronounced "COM plus") makes it easier to design and construct distributed, transactional, and component-based applications using a multi-tiered architecture. COM+ also supports the use of many new services, such as Just-in-Time Activation, object pooling, and Microsoft Transaction Server (MTS) 2.0. The use of COM, DCOM, and COM+ in application design will eventually be entirely replaced by the Microsoft .NET Framework. COM+ 2.0This was one of the pre-release names for the original Microsoft .NET Framework. See also Web Services Platform. COM Callable Wrapper (CCW)A metadata wrapper that allows COM components to access managed .NET objects. The CCW is generated at runtime when a COM client loads a .NET object. The .NET assembly must first be registered using the Assembly Registration Tool. See Runtime Callable Wrapper (RCW). Commerce Server 2000Microsoft's e-commerce server application package for developing and maintaining business Web sites. Commerce Server is a key component to creating B2C solutions using the .NET Enterprise Server product family. Common Intermediate Language (CIL)The system-independent code generated by a .NET language compiler. CIL defines a file format for storing managed code as both program instructions and metadata in a single file. Either the ILASM assembler or JIT compiler is then used to convert CIL to native machine code. CIL is also referred to as Microsoft Intermediate Language (MSIL). Common Language Infrastructure (CLI)The .NET infrastructure that allows applications written in multiple programming languages to operate many different environments without the need to modify the program code. The CLI consists of a file format (PE), a common type system (CTS), an extensible metadata system, an intermediate language (CIL), a factored base class library (FCL), and access to the underlying operating system (Win32). The CLI is defined by the standard ECMA-335. Common Language Runtime (CLR)A runtime environment that manages the execution of .NET program code, and provides services such as memory and exception management, debugging and profiling, and security. The CLR is a major component of the .NET Framework, and provides much of its functionality by following the rules defined in the Common Type System. Also known as the Virtual Execution System (VES). Common Language Specification (CLS)A set of common conventions used to promote interoperability between programming languages and the .NET Framework. The CLS specifies a subset of the Common Type System and set of conventions that are adhered to by both programming language designers and framework class library authors. Common Object File Format (COFF)See Portable Executable file. Common Type System (CTS)The .NET Framework specification which defines the rules of how the Common Language Runtime defines, declares, and manages types, regardless of the programming language. All .NET components must comply to the CTS specification. Content Management Server 2001Microsoft's server package for building, deploying, and maintaining dynamic content for both private or commercial Web sites. ConstructorA method that is automatically called when an object is created. The constructor is used to initialize the object and place it in a valid state (e.g., setting the values of member variables). The constructor method always has the same identifier as the class in which it is defined. See Destructor. CoolThe pre-release code name used for C#. CryptycA tool in the Microsoft research division for type-checking security protocols. In fact, the name stands for "Cryptographic Protocol Type Checker." CSCThe .NET C# command line compiler (csc.exe). Custom AttributesAttributes defined by a programmer to store the instance of any type in metadata. See Attribute-based programming, Reflection. D Data providerA set of classes in the .NET Framework that allow access to the information a data source. The data may be located in a file, in the Windows registry, or any any type of database server or network resource. A .NET data provider also allows information in a data source to be accessed as an ADO.NET DataSet. Programmers may also author their own data providers for use with the .NET Framework. See Managed providers. DCOM (Distributed Component Object Model)An extension of the Microsoft Component Object Model (COM) that allows COM components to communicate across network boundaries. Traditional COM components can only perform interprocess communication across process boundaries on the same machine. DCOM uses the Remote Procedure Call (RPC) mechanism to transparently send and receive information between COM components (i.e., clients and servers) on the same network. DCOM was first made available in 1995 with the initial release of Windows NT 4. DelegateA mechanism used to implement event handling in .NET Framework code. A class that needs to raise events must define one delegate per event. Types that use the class must implement one event handler method per event that must be processed. Delegates are often described as a managed version of a C++ function pointer. However, delegates can reference both instance and static (also called shared) methods, while function pointers can only reference static methods. DeploymentThe process of installing an application, service, or content on to one or more computer systems. In .NET, deployment is performed using XCOPY or the Windows Installer. More complex deployment applications, such as System Management Server, can also be used. See Installer Tool. Deployment ManifestThe part of an application that tells the system how to install and maintain an application. Derived classA class that was created based on a previously existing class (i.e., base class). A derived class inherits all of the member variables and methods of the base class it is derived from. Also called a derived type. DestructorIn traditional Object Oriented Programming, a destructor is a class method that is called when an object goes out of scope. In .NET languages, the destructor method is instead called when the object is garbage collected by the CLRwhich happens at some indeterminate time after an object goes out of scope. In C#, the destructor is actually a syntactic mapping to a Finalize method. See Constructor, Dispose. DOM (Document Object Model)A programming interface that allows HTML pages and XML documents to be created and modified as if they were program objects. DOM makes the elements of these documents available to a program as data structures, and supplies methods that may be invoked to perform common operations upon the document's structure and data. DOM is both platform- and language-neutral and is a standard of the World Wide Web Consortium (W3C). DISCOAn Microsoft-created XML protocol used for discovering Web Services. Much of DISCO is now a subset in the newer, more universal protocol UDDI. It is expected that DISCO will become obsolete in favor of UDDI. DisposeA class-only method used to implement an explicit way to release the resources allocated by an object. The dispose method is actually in implementation of the IDisposable interface, and is typically called by the destructor or Finialize method of a class. Distributed architectureAn application architecture in which the components of an application may be distributed across many computers. Although the client/server architecture is fundamentally distributed in its design, the distributed model is not limited to only two or three tiers in its design. A distributed, n-tier architecture may use many components running on dozens, hundreds or thousands of computers on a network to service a single application. This concept is reflected in Sun Microsystems' visionary phrase, "The network is the computerTM." Download CachePart of the assembly cache used to store information downloaded from a private network or the public Internet. Objects in the download cache are effectively isolated from all other assemblies loaded into other assembly caches. See Assembly Cache. DSIWas Microsoft's Distributed System Initiative. Is also Microsoft's Dynamic System Initiative. DTD (Document Type Definition)A document defining the format of the contents present between the tags in an HTML, XML, or SGML document, and how the content should be interpreted by the application reading the document. Applications will use a document's DTD to properly read and display a document's contents. Changes in the format of the document can be easily made by modifying the DTD. Dynamic Systems Initiative (DSI)A 10-year plan to simplify management of software and hardware. It includes assemssment, configuration, monitoriing, management, and development tools. These tools will communicate their status in order improve how they operate. E ECMA (European Computer Manufactures Association)The ECMA (known since 1994 as ECMA International) is an industry association founded in 1961 and dedicated to the standardization of information and communication systems. The C# and CLI specification were ratified by the ECMA on December 31, 2001 as international standards, and assigned to them the ECMA standards designations of ECMA-334 (C#) and ECMA-335 (CLI), and Technical Report TR-84. These standards are available at. Enterprise Instrumentation Framework (EIF)A feature that expands the program execution tracing capabilities found in the initial release of the .NET Framework. EIF allows the use of configurable event filtering and tracing by integrating .NET applications with the event log and tracing services built into the Windows operating system. Warnings, errors, business events, and diagnostic information can be monitored and reported for immediate, runtime analysis by developers, or collected and stored for later use by technical support personnel. Support for EIF will be included in the next release of Visual Studio.NET. EventA notification by a program or operating system that "something has happened." An event may be fired (or raised) in response to the occurrence of a pre-defined action (e.g., a window getting focus, a user clicking a button, a timer indicating a specific interval of time has passed, or a program starting up or shutting down). In response to an event, an event handler is called. Event HandlerA function or method containing program statements that are executed in response to an event. See Callback method. EverettThe pre-release code name of Visual Studio .NET 2003. Everett offers increased performance over Visual Studio .NET 1.0, integration with Windows Server 2003 and SQL Server 2003 (Yukon), extended support for XML Web services, MS Office programmability (the Visual Studio Tools for Office Development), improved migration tools for VB6 code, new managed data providers for Oracle and ODBC, and the addition of the Enterprise Instrumentation Framework (EIF) and mobile device support in the form of the .NET Compact Framework. ExceptionA signal that is generated when an unplanned or unexpected event occurs. Exceptions are typically caught by an exception handler and dealt with in an appropriate way. A fatal exception (also called a critical or catastrophic error) is an event that cannot be properly handled to allow the applicationor the operating systemto continue running. Exception HandlingThe process of trapping an exception and performing some sort of corrective procedure in response. See try/catch block. Exchange Server 2000A set of Microsoft server applications use to ingrate messaging and data storage technologies. Exchange Server's features include instant messaging, email, calendaring, real-time conferencing, and contact management. Exchange Server can also store documents, Web content, and applications that are accessible via Internet protocols, such as NNTP and HTTP. Executable fileA file containing program instructions that are executed by an operating system or runtime environment. See Portable Executable file. Extensible Markup Language (XML)See XML. F FieldsSame as member variables. FinalizeA. See Dispose. Finally blockA block of program statements that will be executed regardless if an exception is thrown or not. A finally block is typically associated with a try/catch block (although a catch block need not be present to use a finally block). This is useful for operations that must be performed regardless if an exception was thrown or not (e.g., closing a file, writing to a database, deallocating unmanaged memory, etc). Framework Class Library (FCL)The collective name for the thousands of classes that compose the .NET Framework. The services provided by the FCL include runtime core functionality (basic types and collections, file and network I/O, accessing system services, etc.), interaction with databases, consuming and producing XML, and support for building Web-based (Web Form) and desktop-based (Windows Form) client applications, and SOAP-based XML Web services. Fugue (FCL) A. G Garbage Collection (GC)The process of implicitly reclaiming unused memory by the CLR. Stack values are collected when the stack frame they are declared within ends (e.g., when a method returns). Heap objects are collected sometime after the final reference to them is destroyed. GDI (Graphics Device Interface)A Win32 API that provides Windows applications the ability to access graphical device drivers for displaying 2D graphics and formatted text on both the video and printer output devices. GDI (pronounced "gee dee eye") is found on all version of Windows. See GDI+. GDI+ (Graphics Device Interface Plus)The next generation graphics subsystem for Windows. GDI+ (pronounced "gee dee eye plus") provides a set of APIs for rendering 2D graphics, images, and text, and adds new features and an improved programming model not found in its predecessor GDI. GDI+ is found natively in Windows XP and the Windows Server 2003 family, and as a separate installation for Windows 2000, NT, 98, and ME. GDI+ is the currently the only drawing API used by the .NET Framework. Global Assembly Cache (G. Assemblies are added and removed from the GAC using the Global Assembly Cache Tool. Global Assembly Cache ToolA .NET programming tool (GACUtil.exe) used to install, uninstall, and list the contents of the Global Assembly Cache. This tool is similar in function to the Assembly Cache Viewer that run on Windows Explorer, but as a separate program it can be called from batch files, makefiles, and scripts. GlobalizationThe. Also called internationalization. See localization, satellite assembly. H Hash CodeA unique number generated to identify each module in an assembly. The hash is used to insure that only the proper version of a module is loaded at runtime. The hash number is based on the actual code in the module itself. "Hatteras"Codename for Team Foundation Version Control tool. This is the new version control in Visual Studio 2005. HeapAn area of memory reserved for use by the CLR for a running programming. In .NET languages, reference types are allocated on the heap. See Stack. Host Integration Server 2000A set of Microsoft server applications use to ingrate the .NET platform and applications with non-Microsoft operating systems and hardware (e.g., Unix and AS/400), security systems (e.g., ACF/2 and RACF), data stores (e.g., DB2), and transaction environments (e.g., CICS and IMS). HTML (HyperText Markup Language)A document-layout and hyperlink-specification language. HTML is used to describe how the contents of a document (e.g., text, images, and graphics) should be displayed on a video monitor or a printed page. HTML also enables a document to become interactive with other documents and resources by using hypertext links embedded into its content. HTML is the standard content display language of the World Wide Web (WWW), and is typically conveyed between network hosts using the HTTP protocol. See XHTML. HTTP (Hyper Text Transfer Protocol)An Internet protocol used to transport content and control information across the World Wide Web (WWW). Web content typically originates from Web servers (also called HTTP servers) that run services which support the HTTP protocol. Web clients (i.e., Web browsers) access the content on the server using the rules of the HTTP protocol. The actual Web content is encoded using the HTML or XHTML languages. I IdentifiersThe names that programmers choose for namespaces, types, type members, and variables. In C# and VB.NET, identifiers must begin with a letter or underscore and cannot be the same name as a reserved keyword. Microsoft no longer recommends the use of Hungarian Notation (e.g., strMyString, nMyInteger) or delimiting underscores (e.g., Temp_Count) when naming identifiers. See Qualified identifiers. ILASMSee MSIL Assembler. ILDASMSee MSIL Disassembler. Indigo The code name for for Windows Communication Foundation (WCF), which is the communications portion of Longhorn that is built around Web services. This communications technology focuses on providing spanning transports, security, messaging patterns, encoding, networking and hosting, and more. "Indy"The code-name for a capacity Planning tool being developed by Microsoft. This was originally a part of Longhorn, but is speculated to ship earlier. Interface Definition Language (IDL. IndexerA CLR language feature that allows array-like access to the properties of an object using getter and setter methods and an index value. This construct is identical to operator[] in C++. See Property. Installer ToolA .NET programming tool (InstallUtil.exe) used to install or uninstall one or more assemblies by executing the installer components contained within an assembly. During installation, all necessary files are saved to the application base folder and the required resources are created, including the uninstallation information. InterfaceThe set of properties, methods, indexers, and events exposed by an object that allow other objects to access its data and functionality. An object guarantees that it will support all of the elements of its interface by way of an interface contract. Interface contractThe guarantee by an object that it will support all of the elements of its interface. In C#, this contract is created by the use of the Interface keyword, which declares a reference type that encapsulates the contract. Intermediate Language (IL)See MSIL. InheritanceThe ability of a class to be created from another class. The new class, called a derived class or subclass, is an exact copy of the base class or superclass and may extend the functionality of the base class by both adding additional types and methods and overriding existing ones. Instant fieldsThe member variables in an object instance. Internet Security and Acceleration Server 2000A set of applications used to provide firewall security and Web caching services to a single Web site or to an enterprise-scale Web farm. Intrinsic TypesSee Built-in Types. Isolated storageA. Isolated storage toolA .NET programming tool (Storeadm.exe) used to list and remove all existing stores for the current user. See Isolated storage. IstanbulThe code name for the newest member of the Microsoft Office System predicted to be out in 2005 that will provide integrated communications capabilities including instant messaging, extensible presence, PC-based voice and video, and telephony integration. J J# (J-Sharp). A Microsoft-supported language for .NET. J# (pronounced "jay sharp") is Microsoft's implementation of the Java programming language. It specifically designed to allow Java-language developers to easily transition to the .NET Framework and to create .NET applications. Tools are also available that allow existing Java and Microsoft J++ code to be migrated to J#. Because J# compiles to MSIL and not Java bytecodes, J# applications are not compatible with the Java Virtual Machine (JVM) or the Java 2 platform. However, J# applications can be written using Visual Studio .NET and then compiled using third-party Java tools. See Java Language Conversion Assistant. J2EE (Java 2 Enterprise Edition)A Java-based, runtime platform created by Sun Microsystems used for developing, deploying, and managing multi-tier server-centric applications on an enterprise-wide scale. J2EE builds on the features of J2SE and adds distributed communication, threading control, scalable architecture, and transaction management. J2EE is a competitor to the Microsoft .NET Framework. J2ME (Java 2 Micro Edition)A Java-based, runtime platform created by Sun Microsystems that allows Java applications to run on embedded devices, such as cellular telephones and Personal Digital Assistants (PDA). J2ME is a competitor to the Microsoft .NET Compact Framework. J2SE (Java 2 Standard Edition)A Java-based, runtime platform that provides many features for developing Web-based Java applications, including database access (JDBC API), CORBA interface technology, and security for both local network and Internet use. J2SE is the core Java technology platform and is a competitor to the Microsoft .NET Framework. JavaA computing platform and programming language released by Sun Microsystems in 1995. A Java application has the ability to run on many different types of computers, devices, operating systems (e.g., Windows, Macintosh, Linux and UNIX), and application environments (e.g., Web browsers) without requiring any changes to its code (this technology is referred to by Sun as "Write Once, Run AnywhereTM" portability). The Java 2 platform and language is a competitor to the Microsoft .NET Framework and the J# language. See the java.sun.comWebsite. Java Language Conversion Assistant (JLCA)A tool used to convert Java-language source code into C# or J# code. JLCA aides in the migration of Java 2 applications to the Microsoft .NET Framework, and is one of the .NET Framework Migration Tools created by ArtinSoft for Microsoft. Java Virtual Machine (JVM)A component of the Java runtime environment that JITcompiles Java bytecodes, manages memory, schedules threads, and interacts with the host operating environment (e.g., a Web browser running the Java program). The JVM is the Java equivalent of the .NET Framework's CLR. JScript .NETA Microsoft-supported language for .NET. JScript .NET (pronounced "jay script dot net") is Microsoft's "next generation" implementation of the JavaScript programming language. JScript .NET includes all of the features found in the JScript language, but also provides support for true object-oriented scripting using classes and types, and adds features, such as true compiled code, packages, cross-language support, and access to the .NET Framework. Just In Time (JIT)The concept of only compiling units of code just as they are needed at runtime. The JIT compiler in the CLR compiles MSIL instructions to native machine code as a .NET application is executed. The compilation occurs as each method is called; the JIT-compiled code is cached in memory and is never recompiled more than once during the program's execution. K KeywordsNames that have been reserved for special use in a programming language. The C# language defines about 80 keywords, such as bool, namespace, class, static, and while. The 160 or so keywords reserved in VB.NET include Boolean, Event, Function, Public, and WithEvents. Keywords may not be used as identifiers in program code. L "Ladybug"Code-name for product officially known as the Microsoft Developer Network Product Feedback Center where testers can submit online bug reports and provide product suggestions via the Web. License CompilerA .NET programming tool (lc.exe) used to produce .licenses files that can be embedded in a CLR executable. LifetimeThe duration from an objects existence. From the time an object is instantiated to the time it is destroyed by the garbage collector. Local assembly cacheThe assembly cache that stores the compiled classes and methods specific to an application. Each application directory contains a \bin subdirectory which stores the files of the local assembly cache. Also call the application assembly cache. See Global Assembly Cache. Local VariableSame as a member variable. LocaleA collection of rules and data specific to a spoken and/or written language and/or a geographic area. Locale information includes human languages, date and time formats, numeric and monetary conventions, sorting rules, cultural and regional contexts (semantics), and character classification. See Localization. LocalizationThe practice of designing and developing software that will properly used all of the conventions defined for a specific locale. See Globalization. LonestarThe codename for Windows XP Tablet PC Edition 2005. LonghornThe "next generation" release of Windows Server after Windows Server 2003 and named Microsoft Windows Vista. Longhorn APIThe application programming interface for the Longhorn operating system. M "Magneto"The code-name for Windows Mobile 5.0. This version is to unify the Windows CE, PocketPC, and SmartPhone platforms. This platform includes a new user interface, improved video support, better keyboard support, and more. Make UtilityA .NET programming tool (nmake.exe) used to interpret script files (i.e., makefiles) that contain instructions that detail how to build applications, resolve file dependency information, and access a source code control system. Microsoft's nmake program has no relation to the nmake program originally created by AT&T Bell Labs and now maintained by Lucent. Although identical in name and purpose these two tools are not compatible. See Lucent nmake Web site. Managed ASPSame as ASP.NET. Managed C++Same as Visual C++ .NET. Managed codeCode that is executed by the CLR. Managed code provides information (i.e., metadata) to allow the CLR to locate methods encoded in assembly modules, store and retrieve security information, handle exceptions, and walk the program stack. Managed code can access both managed data and unmanaged data. Managed dataMemory that is allocated and released by the CLR using Garbage Collection. Managed data can only be accessed by managed code. Managed executionThe. Managed Extensions for C++Language extensions added to the C++ language that enable developers to write code that makes use of the .NET Framework's CLR. See as Visual C++ .NET. Managed pointer typesAn object reference that is managed by the CLR. Used to point to unmanaged data, such as COM objects and some parameters of Win32 API functions. Managed pointersA pointer that directly references the memory of a managed object. Managed pointers may point to the field of an object or value type, an element of an array, or the address where the next element just past the end of an array would be stored. Managed providers.NET objects that provide managed access to services using a simplified data access architecture. The functionality of a provider is accessed via one or more object interfaces. The most common examples of managed providers are the data providers, such as SQL Server Managed Provider ( System.Data.SqlClient), OLE DB .NET Data Provider ( System.Data.Odbc), and ADO Managed ( System.Data.ADO). .NET managed providers operate completely within the bounds of the CLR and require no interaction with COM interfaces, the Win32 API, or other unmanaged code. Managed resourcesA resource that is part of an assembly. ManifestSee Assembly manifest. MarshalingThe process of preparing an object to be moved across a context, process, or application domain boundary. See Remoting. MembersSee Class members. Member variablesTyped memory locations used to store values. Also called fields. MetadataAll information used by the CLR to describe and reference types and assemblies. Metadata is independent of any programming language, and is an interchange medium for program information between tools (e.g., compilers and debuggers) and execution environments. See MSIL. MethodA function defined within a class. Methods (along with events) defined the behavior of an object. "Metro"Code-name for a set of print document specifications along with the set of printer drivers. This is being built as a part of Longhorn. It appears that it could become a competitor to PDF and Adobe's PostScript. MIDL (Microsoft Interface Definition Language) CompilerThe program used to compile Interface Definition Language (IDL) files into type libraries. Mobile Information Server 2002A set of applications used for extending Microsoft .NET applications, enterprise data, and intranet content to mobile client devices such as cell phones and Personal Digital Assistants (PDA). Features include network gateway, notification routing, security (SSL, IPSec, VPN), mobile device support (WAP, SMS) and integration with Windows 2000. ModuleA subunit of an assembly. Assemblies contain one or more modules, which are DLLs that must be combined into assemblies to be used. The assembly manifest (sometimes called a module manifest) describes all of the modules associated with an assembly. MSDE 2000 (Microsoft Data Engine)A light weight release of the SQL Server 7.0 data engine. The MSDE is used as a relational data store on many Microsoft products, including BizTalk Server 2000, Host Integration Server 2000, SQL Server 2000, Visual Studio.NET, and the .NET Framework. The MSDE a modern replacement for the older Microsoft Jet database technology. MSIL (Microsoft Intermediate Language)The machine-independent language into which .NET applications are compiled using a high-level .NET language compiler (e.g., C# and VB.NET). The MSIL output is then used as the input of the Just-In-Time (JIT) compiler, which compiles the MSIL instructions to machine language just prior to its execution. MSIL can also be converted to native machine object code using the Native Image Generator utility. MSBuildThe build tool (MSBuild.exe) for Longhorn applications. MSIL AssemblerA .NET programming tool (ILAsm.exe) used to create MSIL portable executable (PE) files directly from MSIL code. MSIL DisassemblerA .NET programming tool (ILDAsm.exe) used to translate a portable executable (PE) file containing MSIL code to an an MSIL file that can be used as input to MSIL Assembler. Multi-module AssemblyA .NET program which is contained in many modules and resource files. The use of an assembly manifest to identify all of the files in a multi-module assembly is required. N NamespaceA logical grouping of the names (i.e., identifiers) used within a program. A programmer defines multiple namespaces as a way to logically group identifiers based on their use. For example, System.Drawing and System.Windows are two namespaces containing each containing types used for for different purposes. The name used for any identifier may only appear once in any namespace. A namespace only contains the name of a type and not the type itself. Also called name scope. Native codeMachine-readable instructions that are created for a specific CPU architecture. Native code for a specific family of CPUs is not usable by a computer using different CPU architectures (c.f., Intel x86 and Sun UltraSPARC). Also called object code and machine code. Native Image GeneratorA .NET programming tool (Ngen.exe) used to compile a managed assembly to native machine code and install it in the local assembly cache. During execution the native image will be used each time the assembly is accessed rather than the MSIL assembly itself. If the native image is removed, the CLR reverts to using the original MSIL assembly by default. Native images are faster to load and execute than MSIL assemblies, which must be Just-In-Time (JIT) compiled by the CLR. Using Ngen to create a native image file is often referred to as pre-JITting, because it makes JIT-compiling the assembly unnecessary. .NET Compact FrameworkA port of the .NET Framework to Windows CE, allowing embedded and mobile devices to run .NET applications. See Smart Device Extensions. .NET Data ProviderSee Data provider. .NET Enterprise Server product familyThese products include Application Center, BizTalk Server, Commerce Server, Content Management Server, Exchange Server, Host Integration Server, Internet Security and Acceleration Server, SQL Server 2000 , and Windows 2000 Server. Formerly known as BackOffice Server 2000. .NET FrameworkA programming infrastructure created by Microsoft for building, deploying, and running applications and services that use .NET technologies, such as desktop applications and Web services. The .NET Framework contains three major parts: the Common Language Runtime (CLR), the Framework Class Library, and ASP.NET. See .NET Compact Framework. .NET Framework Class Library (FCL)The foundation of classes, interfaces, value types, services and providers that are used to construct .NET Framework desktop and Web-based (i.e., ASP.NET) applications. The fundamental elements of the FCL are defined as classes located in the System namespace. All of the most primitive aspects of .NET are stored in System, including built-in value types, the Object type, and support for exception handling and garbage collection. Thousands of more classes are located in second- and third-level namespaces that include support for network and file I/O, graphics, security, configuration management, and Web services. All CLS-compliant compilers can use the FCL. .NET Framework Configuration ToolA .NET programming tool (Mscorcfg.msc) used to adjust code access security policy at the machine, user, and enterprise security policy levels. This tool can also be used to configure remoting services, and add, configure, and delete assemblies in the Global Assembly Cache. See Global Assembly Cache Utility. .NET Services Installation ToolA .NET programming tool (Regsvcs.exe) used to add managed classes to Windows 2000 Component Services. This tool loads and registers an assembly, generates, registers, and installs a type library into a specified COM+ 1.0 application. NGSCBNext-Generation Secure Computing BaseA virtual vault residing within each computer that lets users store encrypted information and only authorize certain entities to see it. It also provides protection for critical data against virus attacks, Trojan horses and spyware and could double as a Digital Rights Management tool to authenticate who is allowed to see a file or use a program. NGWSNext Generation Web ServiceThis was one of the pre-release names for .NET before its release. O ObjectThe instance of a class that is unique and self-describing. A class defines an object, and an object is the functional, realization of the class. Analogously, if a class is a cookie cutter then the cookies are the objects the cutter was used to create. Object typeThe most fundamental base type ( System.Object) that all other .NET Framework types are derived from. OLE (Object Linking and Embedding)A Microsoft technology that allows an application to link or embed into itself documents created by another type of application. Common examples include using Microsoft Word to embed an Excel spreadsheet file into a Word document file, or emailing a Microsoft Power Point file as an attachment (link) in Microsoft Outlook. OLE is often confused with the Component Object Model (COM), because COM was released as part of OLE2. However, COM and OLE are two separate technologies. OrcasThe code name for the version of Visual Studio .NET to be released near the time Microsoft Longhorn is released. This follows the release of Visual Studio .NET Whidbey. OverloadingUsing a single identifier to refer to multiple methods that differ by their parameters and/or return type. OverridingTo supercede an instance field or virtual method in a base class with a new definition of that field or method in the derived class. P PalladiumFormer code name for Microsoft's Next-Generation Secure Computing Base (NGSCB) project. "Pheonix"A software optimization and analysis framework that is to be the basis for all future Microsoft compiler technologies. "Photon"A feature-rich upgrade to Windows Mobile that includes features such as battery life. This version will follow Windows Mobiles 2005 (code-named "Magneto"). PinnedA. Pre-JIT compilerAnother name for the Native Image Generator tool used to convert MSIL and metadata assemblies to native machine code executables. Private assemblyAn assembly that is used only by a single application. A private assembly will run only with the application with which it was built and deployed. References to the private assembly will only be resolved locally to the application directory it is installed in. See Shared assembly. PointerA variable that contains the address of a location in memory. The location is the starting point of an allocated object, such as an object or value type, or the element of an array. Pointer typesSee Managed pointer types, Unmanaged pointer types. Portable Executable (PE) fileThe. Portable Executable VerifierA .NET programming tool (PEVerify.exe) used to verify that a .NET compiler has created type-safe metadata and MSIL code. Because Microsoft .NET compilers always generate type-safe code, this tool is used primarily with third-party ILASM-based compilers to debug possible code generation problems. Pre-defined typesTypes defined by the CLR in the System namespace. The pre-defined values types are integer, floating point, decimal, character, and boolean values. Pre-defined reference types are object and string references. See User-defined types. Primary Interop Assemblies (PIAs)Assemblies that come with Microsoft Office 2003 that allow managed code (VB .NET, C#, etc.) to call Office code. Project Green code name for Microsoft's next-generation ERP product code base. PropertyA CLR language feature that allows the value of a single member variable to be modified using getter and setter methods defined in a class or structure. See Indexer. Q Qualified identifiersTwo or more identifiers that are connected by a dot character (.). Only namespace declarations use qualified identifiers (e.g., System.Windows.Forms). R R2The codename for the Windows Server 2003 Update due in 2005. Register Assembly ToolSame as Assembly Registration Tool. Register Services UtilitySame as .NET Services Installation Tool. Reference typesA variable that stores a reference to data located elsewhere in memory rather than to the actual data itself. Reference types include array, class, delegate, and interface. See Value types, Pointer types. ReflectionA). See Attribute-based programming. RemotingA .NET technology that allows objects residing in different application domains to communicate. Objects in different application domains are said to be separated by a remoting boundary. Objects using remoting may be on the same computer, or on different computers connected by a network. Remoting is the .NET replacement for Distributed COM (DCOM). See Marshaling. ResourceAn addressable unit of data that is available for use by an application. Resources include text strings, files, documents, vector drawings, bitmapped images, binary data, data streams, message queues, and query result sets. In some contexts, application services themselves, such as Web services, are referred to as resources. Resource File Generator ToolA .NET programming tool (Resgen.exe) used to convert the resource information stored in text files or XML .resx files to .resource files that can be embedded in a runtime, binary executable, or compiled into satellite assemblies using the Assembly Linking Utility. Runtime Callable Wrapper (RCW)A metadata wrapper that allows COM components to be called from .NET applications. For OLE automation interfaces, an RCW is a managed .NET assembly that is generated from the COM component's type library using the Type Library Importer tool. For non-OLE automation interfaces, a custom RCW must be written that manually maps the types exposed by the COM interface to .NET Framework-compatible types. See COM Callable Wrapper (CCW). Runtime DebuggerA .NET programming tool (CorDbg.exe) used as a command-line, source-level debugging utility for MSIL programs. See CLR Debugger. Runtime hostA runtime environment used to manage the execution of program code. Examples include the .NET Common Language Runtime and the Java Virtual Machine (JVM). S Satellite assemblyAn assembly that contains only resources and no executable code. Satellite assemblies are typically used by .NET application to store localized data. Satellite assembles can be added, modified, and loaded into a .NET application at runtime without the need to recompile the code. Satellite assemblies are created by compiling .resource files using the Assembly Linking Utility. Saturnthe code name for the original ASP.NET Web Matrix product. Seamless ComputingA term indicating that a user should be able to find and use information effortlessly. The hardware and software within a system should work in an intuitive manner to make it seamless for the user. Seamless computing is being realized with the improvements in hardware (voice, ink, multimedia) and software. Secure Execution Environment (SEE)A secure, managed-code, runtime environment within the Microsoft Longhorn Operating System that helps to protected against deviant applications. This is a part of Microsoft's "Trustworthy Computing" initiative. SerializationThe conversion of an object instance to a data stream of byte values.. ServerA computer program or system that provides information or services requested by a client. See Client/Server architecture. Server-sideAn operation or event that occurs on a server system. Examples include server-side scripting, server-side objects, and server-side processing. See Client-side. ServiceAn application that provides information and/or functionality to other applications. Services are typically non-human-interactive applications that run on servers and interact with applications via an interface. A service may expose a synchronous, programmatic interface (i.e., an API), allowing it to be tightly-coupled with a consumer, or use asynchronous, message-based communications (e.g., HTTP, XML, and SOAP) to remain very loosely-coupled with consumers. Services are an essential part of distributed architecture program design. SGML (Standard Generalized Markup Language)The standard markup language used by the publishing industry to specify the format and layout of both paper and electronic documents. SGML is very flexible and feature-rich, and it is very difficult to write a full-featured SGML parser. As a result, newer markup languages requiring fewer features (e.g., HTML and XML) are subsets of SGML. SGML is defined by the international standard ISO 8879. Shared assemblyAn assembly that can be referenced by more than one application. Shared assemblies must be built with a strong name and are loaded into the Global Assembly Cache. See Private assembly. Shared nameSame as a strong name. Also called published name. Shared name utilityA .NET programming tool (Sn.exe) used to verify assemblies and their key information and to generate key files. This utility is also used to create strong names for assemblies. Side-by-Side ExecutionRunning multiple versions of the same assembly simultaneously on the same computer, or even in the same process. Assemblies must be specifically (and carefully) coded to make use of side-by-side execution. Single-module assemblyA .NET program in which all components are combined into a single DLL or EXE file. Such an assembly does not require an assembly manifest. SLAMA project for investigating the relationships between software Specifications, Languages, Analysis, and Model checking. Smart Device Extensions (SDE)An installable SDK that allows Visual Studio .NET 1.0 to be used for developing .NET application for the Pocket PC and other handheld devices that support the Microsoft Windows CE .NET operating system and the Microsoft SOAP . SoapSuds ToolA .NET programming tool (SoapSuds.exe) used to create XML schemas for services in a .NET assembly, and to create an assembly from an XML schema. This tool is used primarily to compile client applications that communicate with XML Web services using remoting. SQL Server 2000Microsoft's enterprise-scale relational database and member of the .NET Enterprise Server product family. StackAn area of program memory used to store local program variables, method parameters, and return values. In .NET languages, value types are allocated on the stack. See Heap. Static fieldsTypes that declare member variables which are associated with a type rather than an instance of the type. Static fields may be access without first instantiating their associated type. StarliteA code name for the original Microsoft .NET Compact Framework Static methodsTypes that declare methods which are associated with a type rather than an instance of the type. Static methods may be called without first instantiating their associated type. Strong nameAn assembly name that is globally unique among all .NET assemblies. A public key encryption scheme is used to create a digital signature to insure that the strong name is truly different than all other names created at anytime and anywhere in the known universe. The digital signature also makes it easy to encrypt the assembly, authenticate who created the assembly, and to validate that the assembly hasn't been corrupted or tampered with. Strong names are created using the Shared name utility. Strongly-typedA programming language is said to be strongly-typed when it pre-defines specific primitive data types, requires that all constants and variables be declared of a specific type, and enforces their proper use by imposing rigorous rules upon the programmer for the sake of creating robust code that is consistent in its execution. StructureIn .NET languages, structures are light-weight classes that are simpler, have less overhead, and are less demanding on the CLR. Structures are typically used for creating user-defined types that contain only public fields and no properties (identical to structures in the C language). But .NET structures, like classes, also support properties, access modifiers, constructors, methods, operators, nested types, and indexers. Unlike classes, however, structures do not support inheritance, custom constructors, a destructor (or Finalize) method, and no compile-time initialization of instance fields. It is important to note that a structure is a value type, while classes are a reference type. Performance will suffer when using structures in a situation where references are expected (e.g., in collections) and the structure must be boxed and unboxed for it to be used. StylesheetsData files used to express how the structured content of a document should be presented on a particular physical medium (e.g., printed pages, Web browser, hand-held device, etc.). Details of the presentation include font style, lay out, and pagination. Also called templates. System Definition Model (SDM)An XML document that follows a system throughout its life and is kept updated as a system moves from the initial design and development stages through its lifecycle and into maintenance. The SDM defines a system, which includes hardware and software resources. It. (From An Overview of Microsoft's Whitehorse.) System CenterA brand name for Microsoft's systems management products (it is no longer a 'bundled' product). System Center Capacity ManagerA. T "Talisker"The pre-release code name for Windows CE .NET (a.k.a., Windows CE 4.x). ThrowingWhen an abnormal or unexpected condition occurs in a running application, the CLR generates an exception as an alert that the condition occurred. The exception is said to be thrown. Programmers can also programmatically force an exception to be thrown by the use of the throw statement. See Exception Handling. TrustbridgeA directory-enabled middleware that supports the federating of identities across corporate boundaries. Try/Catch block.). See Finally block. Type-safeCode. The PEVerify tool can also be used to verify if code is type-safe. TypesA set of data and function members that are combined to form the modular units used to build a .NET applications. Pre-defined types exist within the CLR and user-defined types are created by programmers. Types include enumerations, structures, classes, standard modules, interfaces, and delegates. See Type members. Type libraryA compiled file ( .tlb) containing metadata that describes interfaces and data types. Type libraries can be used to describe vtable interfaces, regular functions, COM components, and DLL modules. Type libraries are compiled from Interface Definition Language (IDL) files using the MIDL compiler. Type Library ExporterA .NET programming tool (TlbExp.exe) used to create a COM type library file based on the public types defined within a specified .NET assembly. Type Library ImporterA .NET programming tool (TlbImp.exe) used to create a managed .NET assembly from a COM type library by mapping the metadata-encoded definitions to the appropriate .NET types. Type membersSame as class members. U UDDI (Universal Description, Discovery, and Integration)An XML- and SOAP-based lookup service for Web service consumers to locate Web Services and programmable resources available on a network. Also used by Web service providers to advertise the existence of their Web services to consumers. UnboxingConversion of a reference type object (i.e. System.Object) to its value type instance. Unboxing must be explicitly performed in code, usually in the form of a cast operation. See Boxing. UnmanagedAn adjective generally applied to any code or data that is outside of the control of a runtime host environment. In .NET, any objects or resources not allocated and controlled by the CLR are considered unmanaged (e.g., Windows handles and calls to the Win32 API). Unmanaged codeAny code that executes outside of the control of the .NET Common Language Runtime.. Also called unsafe code. Unmanaged dataData (i.e. memory) that is allocated outside of the control of the CLR. Unmanaged data can be access by both managed and unmanaged code. Unmanaged pointer typesAny pointer type that is not managed by the CLR. That is, a pointer that store a reference to an unmanaged object or area of memory. Unmanaged resourcesObjects created and manipulated outside of the control of the CLR. Examples includes file handles opened using the Win32 API, and database connections obtained using ODBC. UnsafeSame as unmanaged. User-defined typesReference (object) types defined in code by a programmer. See Pre-defined types. V Value typesA variable that stores actual data rather than a reference to data, which is stored elsewhere in memory. Simple value types include the integer, floating point number, decimal, character, and boolean types. Value types have the minimal memory overhead and are the fastest to access. See Reference types, Pointer types. VariableA typed storage location in memory. The type of the variable determines what kind of data it can store. Examples of variables include local variables, parameters, array elements, static fields and instance fields. See Types. Version numberSee Assembly version number. ViennaCode name for the Microsoft Office Live Communications Server 2005 (LCS 2005) beta. Visual Basic .NET (VB.NET)A Microsoft-supported language for the .NET Framework. VB.NET is the "next generation" release of the very popular Visual Basic programming language (a.k.a., VB7). Visual C++ .NETA Microsoft-supported language for .NET Framework. Visual C++ .NET allows developers to use the C++ language to write managed applications, and to easily migrate legacy C++ code to the .NET Framework. Code written in Visual C++ .NET is also referred to as managed C++; code written in the legacy Visual C++ language is sometimes referred to as unmanaged C++. Visual Studio .NET (VS .NET)A full-featured, Interactive Development Environment (IDE) created by Microsoft for the development of .NET applications. VS .NET makes a better alternative to Visual Notepad for creating .NET applications. Officially called Microsoft Visual Studio .NET 2002. Visual Studio .NET 2003 (VS .NET)The second version of Visual Studio .NET that was also known by the code name Everett. Visual Studio 2005(VS .NET)The third version of Visual Studio .NET that was also known by the code name Whidbey. This version is due to release in 2005. Visual Studio Team System 2005 (VS .NET)A high-end skew for Visual Studio 2005. This version includes enterprise-level tools and more. Codename for this product was known as "Burton". W Web FormA .NET Framework object that allows development of Web-based applications and Web sites. See Windows form. The Web Matrix ProjectA free WSIWIG development product (IDE)for doing ASP.NET development that was released as a community project. The most recent versionThe Web Matrix Project (Revisited)can be found here. Web serviceAn application hosted on a Web server that provides information and services to other network applications using the HTTP and XML protocols. A Web service is conceptually an URL-addressable library of functionality that is completely independent of the consumer and stateless in its operation. Web service consumerAn application that uses Internet protocols to access the information and functionality made available by a Web service provider. See Web service. Web Service ProtocolsOpen communication standards that are key technologies in the .NET Web Services architecture. These protocols include WSDL, HTTP, XML, SOAP, and UDDI. Web Service PlatformThis was one of the pre-release names for the original Microsoft .NET Framework. See also COM+ 2.0. Web service providerA network application that uses Internet protocols to advertise and provide services to Web service consumers. See Web service. Web Services Description Language (WSDL)An XML-based contract language for describing the network services offered by a Web service provider via UDDI. WSDL describes a Web service to Web service consumers by its public methods, data types of all parameters, return value, and bindings. WSDL will eventually replace Microsoft's earlier Web Services discovery protocol, DISCO. See the document Web Services Description Language (WSDL) 1.1. Web Services Description Language ToolA .NET programming tool (Wsdl.exe) used to create service descriptions and generate proxies for ASP.NET Web service methods. Web Services Discovery ToolA .NET programming tool (Disco.exe) used to locate the URLs of XML Web services located on a Web server, and save the information related to the resources of each XML Web service to a set of files. These files can be used as input to the Web Services Description Language Tool to create XML Web service clients. See DISCO. WhidbeyThe pre-release code name for the "next generation" release of Visual Studio after Everett and prior to Longhorn. WhistlerThe pre-release code name used for Windows XP. WhitehorseThe code name for the set of modeling tools included in Micrsoft Visual Studio 2005 ("Whidbey"). See An Overview of Microsoft's Whitehorse. Windows 2000 ServerThe central server operating system of the Microsoft BackOffice Server 2000 product family. Windows 2000 Sever (also known as Windows NT 5) is the successor of Windows NT Server 4.0 and will be replaced by Windows Server 2003. Windows Communication Foundation (WCF)Previously code named "Indigo", WCF is the communications portion of Windows Vista that is built around Web services. This communications technology focuses on providing spanning transports, security, messaging patterns, encoding, networking and hosting, and more. Ultimately, this WCF will deliver a consistent experience bringing together technologies ranging from Web Services to .NET Remoting to Windows Services for building connected systems. Windows Forms Class ViewerA .NET programming tool (WinCV.exe) used to search for and display the namespace and class information within an assembly. Windows Forms Resource EditorA .NET programming tool (Winres.exe) used to help a programmer modify localization information in a Windows Form. Windows FormA .NET Framework object that allows the development of "traditional" Windows desktop applications. Also called WinForms. See Web Form. Windows InstallerThe software installation and configuration service for Windows 2000 and Windows XP. Most .NET applications can be successfully deployed using XCOPY. However, if a deployment requires complex tasks, such as changes in system configuration, creation of user, groups, or folders, etc., Windows Installer must be used. Windows Installer 2.0 is required for use by the .NET Framework; it is also available for Windows 9x and Windows NT. Windows .NET Server 2003The original name of Windows Server 2003. The ".NET" was dropped as part of an attempt to remarket the concept of .NET not as a product, but instead as a business strategy. Windows Presentation Foundation (WPF)Previously code named "Avalon", WPF is often referred to as the graphical subsystem of Windows Vista. More correctly stated, it is the way in which Windows Vista will create, display, and manipulate documents, media, and user interfaces. This system is expected to use vector graphics, allow for better transparency, and more. Windows Server 2003The next generation of Windows 2000 Server that offers tighter integration with the .NET Framework, and greater support for Web services using Internet Information Server 6.0 and XML and UDDI services. This product was formerly known as Windows .NET Server 2003. Windows Update A site maintained by Microsoft for patch updates. Due out by end of 2005. Windows VistaThe "next generation" client release of the Windows Operating System after Windows Server 2003, which was code named Longhorn. WinFS("Windows Future System") The code name for the new type-aware, transactional,. WinFXThe new Windows API that will be released with the Microsoft Longhorn Operating System. This will include features for Avalon, Indigo, and WinFS as well as a number of fundamental routines. WPOWhole Program OptimizationThis is an optimization that can be done by the C++ compiler. All object modules are viewed at once before generating code, which allows for additional optimizations to be performed. X XAML(Extensible Application Markup Language) The declarative markup language for Longhorn that allows an interface to be defined. Longhorn applications can be created by using XAML for the interface definition and managed procedure code for other logic. XCOPYAn MS-DOS file copy program used to deploy .NET applications. Because .NET assemblies are self-describing and not bound to the Windows registry as COM-based application are, most .NET applications can be installed by simply being copied from one location (e.g., directory, machine, CD-ROM, etc.) to another. Applications requiring more complex tasks to be performed during installation require the use of the Microsoft Windows Installer. XDR (XML Data-Reduced)A reduced version of XML Schema used prior to the release of XML Schema 1.0. XDAA consolidated development environment that allows programs to be created for Windows, XBoxes, and more. XHTML (eXtensible HyperText Markup Language)The next generation of HTML. HTML was originally designed to display data; XML was specifically designed to describe data. XHTML is a combination of all the elements in HTML 4.01 with the the syntax of XML. Although nearly identical to HTML, XHTML has much stricter rules and is cleaner in its syntax, thus resulting in well-formed Web pages that are more portable across a wide range of Web browsers. Xlink (XML Linking Language)A language that allows links to other resources to be embedded in XML documents, similar to the hyperlinks found in HTML Web pages. See the document XML Linking Language (XLink) Version 1.0. XML (eXtensible Markup Language)A meta-markup language that provides a format for describing almost any type of structured data. XML is a subset of SGML and has become the standard language for storing, viewing, and manipulation Web-based data. XML allows the creation of custom tags to describe both proprietary data and business logic. See the document Extensible Markup Language (XML) 1.0 (Second Edition). XML SchemaA description of the structure of an XML document. Schemas are written in XSD and support namespaces and data types. XML Schema Definition Tool A .NET programming tool (Xsd.exe) used to generate XML schemas (XSD files) from XDR and XML files, or from class information in an assembly. This tool can also generate runtime classes, or DataSet classes, from an XSD schema file. XML Web servicesWeb-based .NET applications that provide services (i.e., data and functionality) to other Web-based applications (i.e. Web service consumers). XML Web services are accessed via standard Web protocols and data formats such as HTTP, XML, and SOAP. XPath (XML Path Language)A language that uses path expressions to specify the locations of structures and data within an XML document. XPath information is processed using XSLT or XPointer. See the document XML Path Language (XPath) Version 1.0. XPointer (XML Pointer Language)A language that supports addressing into the internal structures of XML documents. XPointer allows the traversals of an XML document tree and selection of its internal parts based on element types, attribute values, character content, and relative position. XPointer is based on the XML Path Language (XPath). See the document XML Pointer Language (XPointer). XSD (XML Schema Definition)A language used to describe the structure of an XML document. XSD is used to defined classes that are in turn used to create instances of XML documents which conform to the schema. See the document XML Schema Part 0: Primer. XSL (eXtensible Stylesheet Language)A language used for creating stylesheets for XML documents. XSL consists of languages for transforming XML documents (XPath and XSLT) and an XML vocabulary for specifying formatting semantics. See the document Extensible Stylesheet Language (XSL) Version 1.0. XSLT (eXtensible Stylesheet Language Transformation)A language for transforming XML documents into other XML documents based of a set of well-defined rules. XSLT is designed for use as part of XSL. See the document XSL Transformations (XSLT) Version 1.0. XQL (XML Query Language)A query language used to extract data from XML documents. XQL uses XML as a data model and is very similar to the pattern matching semantics of XSL. See the document XML Query Language (XQL). Y YukonThe code name for the release of Microsoft SQL Server 2003 (a.k.a., SQL Server 9). Yukon offers a tighter integration with both the .NET Framework and the Visual Studio .NET IDE. Yukon will include full support for ADO.NET and the CLR, allowing .NET languages to be used for writing stored procedures. Z No entries. Other No entries. Glossary compiled with the help of James D. Murray and others. For suggestions, corrections, or other changes, send an e-mail to editor@Developer.com. # # #
https://www.developer.com/net/asp/article.php/1756291/Microsoft-NET-Glossary.htm
CC-MAIN-2018-09
refinedweb
12,105
50.43
I have spent a lot of time with preparing a book about Visual Studio Extensibility, focusing on Visual Studio Package Development. I have made proposals for several book publishers, but I did not manage to get a contract, most of them found such a book is not profitable. I decided to share the four chapters of the book that I’ve already written. They are the followings: I hope, you will find these chapters useful. The majority of Visual Studio’s functions you use in your everyday work (such as programming languages, editors, designers and debuggers) are provided by Visual Studio Integration Packages, or shortly by packages. Some call them VSIP packages but the VSIP acronym is overloaded: while the first two letters means “Visual Studio” the last two may mean either “Integration Package” or “Industry Partner” and unfortunately both terms are frequently used. To avoid ambiguity hereinafter you’ll meet the term package or VSPackage. Developing packages means you can extend Visual Studio on the same way as its developer team at Microsoft. Adding new functions through packages is actually programming a new part of Visual Studio just like if you were the member of the team. You can use the full power and integrate any functionality you miss from the IDE! In this chapter you will create a very simple package called FirstLook to get a feeling that first steps are easy. Then you’ll learn the basic concepts behind packages and dive into the FirstLook project’s structure and source code to have a closer look at the implementation of those concepts. At the end of this chapter you’ll be familiar with the following: This chapter will not teach you how to build a specific functionality into a package and does not cover the API used to develop packages. The focus is on understanding the concepts, architectural considerations behind VSPackages to let you take a look behind the scenes and get acquainted with package mechanisms. These concepts will be very useful when you are about creating your own packages. There are a few important concepts to understand if you want to develop a package. In order treat them in the right context you build a very simple functional package to touch the surface of those concepts and then jump into the details. A Visual Studio package is a class library containing the types responsible for the package infrastructure and functionality. In order Visual Studio recognize the compiled class library as package, encapsulated types should have specific metadata information and some additional steps are required after compilation. So, even if you could start building a package from an empty class library, it is much easier to use the VSPackage Wizard installed with Visual Studio SDK. Start a new project with the File|New|Project command. The IDE displays the New File dialog to select the desired project type. You can find the Visual Studio Integration Package project type under the Other Project Types category in the Extensibility folder as Figure 1 illustrates. Figure 1: The New Project dialog with the Extensibility project types Should you not find this project type or many other project types in the Extensibility folder means that Visual Studio SDK is not — or not properly — installed on your machine. Install it according to the setup notes in order to go on with building the package. Give the FirstLook name to the package in order to be able to follow code details later in this chapter. Clicking the OK button starts the Visual Studio Integration Package Wizard (henceforward VSPackage Wizard is used, it is shorter) which welcomes you with the dialog in Figure 2. Figure 2: The Welcome page of the VSPackage Wizard Click the Next button to go on specifying the package parameters and you get to the Select a Programming Language page of the wizard as Figure 3 shows. Figure 3: VSPackage Wizard lets you select the programming language As it was mentioned earlier you create the code in C#. Packages are strongly named assemblies, so you need to sign the class library assembly with a key. For this project the wizard creates the signing key. Click Next and you get to the Basic VSPackage Information page as Figure 4 shows. Figure 4: The wizard asks for the basic package information The information you provide here will be used in the source code generated for the package and will be displayed in the Visual Studio About dialog. The Company name will be used in the namespace of generated types as well as VSPackage name which also names the class representing the package in code. VSPackage version is additional information to give a way for distinguishing separate package releases. Text typed in the Detailed information field will be displayed in the About dialog and can supply the user with more information about what the package does. When you click the Next button the wizard moves you to the VSPackage Options page — as can be seen in Figure 5 — to set a few more code generation options. Figure 5: You can select a few code generation options In this sample you are going to create only a menu command that pops up a message on the screen, so set the Menu Command option. Should you select the other two options the VSPackage Wizard would create some more code for a simple tool window or a rich text editor. Please, let those options remain unchecked. With the Next button the wizard goes to the page where we can specify a few details about the menu command to create. This page is shown in Figure 6. Figure 6: Command options are specified here The command will be added to the Tools menu of Visual Studio, in Command name you can specify the text to be displayed for the menu item. According to the internal command handling architecture each command has an identifier. The Command ID field supplies a name for this identifier and VSPackage Wizard will generate an ID value behind this name. With clicking Next the wizard moves to the Test Project Options page as shown in Figure 7. Figure 7: VSPackage Wizard asks for test project options The wizard can create unit test for the package which check if its functional units work properly. The wizard also can create an integration test project for you, in which packages are tested within the context of a Visual Studio instance. For the sake of simplicity here you do not create any tests, so clear the options — by default both are checked. Now you have set all parameters the wizard uses to generate the package project, click on the Finish button. In a few seconds the wizard generates the package project ready to build and run. Taste the pudding you have just cooked! With the Build ð Rebuild Solution function you can compile the package and carry out all other steps required to run the package with Visual Studio. So rebuild the project and start with Ctrl + F5 (Debug|Start Without Debugging). You might be surprised as a new instance of Visual Studio is started with “Experimental Instance” in its window caption. Note: If this is the first time you have started the Experimental Instance, the Choose Default Environment Settings dialog appears just like when you first time launch Visual Studio after installation. This is an instance of Visual Studio that hosts the FirstLook package — later you’re going to learn the concept behind. The menu command implemented by this freshly generated package can be seen in the Tools menu as Figure 8 shows it. Figure 8: The menu command item appears in the Tools menu When you click the Simple Message command, it pops up a message box telling you it was displayed from the FirstLook package. The package also registered some branding information that can be seen in the Help|About dialog as Figure 9 shows. Figure 9: Branding information of the FirstLook package Nothing call tell more about your VSPackage than its source code. But before deep-diving into it, let’s treat important concepts about packages. A VSPackage is the principal architectural unit of Visual Studio. As you already know, Visual Studio IDE itself is the Shell hosting a set of VSPackages working together with each other and with the Shell. The basic responsibility of a package is to provide a common container for extensibility objects. So, a VSPackage is a software module that is a unit not only from architectural point of view but also from deployment, security and licensing aspects. Developers — including the developers of Visual Studio — create VSPackages to provide some extension to the VS IDE and group them into modules according to their functionality. These extensions can be: It is natural in .NET that you can divide your functional units into separate assemblies where consumer assemblies will reference other service assemblies. The same principle works for VSPackages: an assembly containing a package can reference other assemblies that may contain not just helper types but even extensibility object types. An assembly — as the smallest physical deployment unit in .NET — may contain more than one VSPackage. Although generally only one package is encapsulated in an assembly, you may have many reasons to group more packages in one assembly — including deployment considerations. The previous versions of Visual Studio checked the package before loading them into the process space of the Shell. Any VSPackage should have been “sealed” with a so-called Package Load Key (PLK) and this key was verified during package load time. PLK was not a digital signature or a full hash, because it was calculated from a few information fields in the package. PLK could have been requested from Microsoft through a webpage: the developer specified a few well-defined attributes of the package and some logic calculated the PLK value. This value was embedded as a resource into the assembly representing the package. Every time the package was loaded, the Shell checked the PLK against the package attributes it had been created from. Should have been this check fail, the Shell would have refused loading the package. This PLK mechanism did not mean that a developer had to request a new PLK for each package modification. While no basic information the PLK had been generated from was changed, the package continued to load. Although this concept had been seemed useful, in the real life in had no real advantage. To be honest, in most of the cases it was the root cause of deployment issues, in many scenarios it raised more problems than it solved. In the new Visual Studio 2010 the Shell does not use the Package Load Key to check packages before loading them into the memory. You can imagine that complex packages like the C#, VB, F# or C++ languages with all of their “accessories” could consume many system resources by terms of memory and CPU. If you do not use them they do not press the CPU, but they might use the memory if they sit within the Visual Studio process space. If you create a project using F# you actually do not need services belonging to other languages, so why to load them into the memory at all? The architects of Visual Studio implemented the package load mechanism so that packages are loaded into the memory at the first time when an event requiring the presence of the package is raised. These events can be one of the followings: So, if you do not need a package during the whole IDE session, it does not consume the memory at all. Should you click on a menu item activating a command sitting in a package which has not been loaded yet the IDE will immediately load and initialize it. Should you ask for a tool window in a package not in the memory yet, the IDE will start loading it. Binding package loading with a context change is generally required where your package want to subscribe for events raised in Visual Studio. You cannot bind the event either to command activation or object or service requests, because in order your package could work you have to subscribe to events. The code to create subscriptions is generally put into the initialization code. But you cannot run any code belonging to a package while it is not loaded into the memory! In this case you declare that the best context (the latest possible) to load the package. If your package logic requires, you can specify the NoSolutionExists context. Visual Studio enters into this context immediately when the Shell is loaded and ready to function, so packages bound to this context load at Visual Studio startup time. Note: Be frugal with system resources if you have to load a package at Visual Studio startup time. Allocate only resources that are indispensable to carry out the required initialization at startup time. When you develop a package it is an independent piece of code. When it is loaded into Visual Studio it becomes an organic part of the IDE: The process of making a package physically integrated into the Shell is called siting. While the package is not sited, its functions cannot be used from outside. From the same reason the package can be only partially initialized, because it cannot touch any objects or services through the Shell. As soon as the package gets sited, it is ready to finish its initialization and be fully functional. Siting happens when Visual Studio loads the package. The object type representing your package must implement an interface called IVsPackage (you are going to learn details later in this chapter) and must have a default constructor. Although siting physically integrates your package functionally with Visual Studio, functional integration may require some additional — and sometimes complex — steps depending on what your package is intended to do. Visual Studio must keep track of packages in order to load and use them. Of course, the most flexible would be some kind of discovery where the Shell can look around only in the file system to guess out which entities represent packages. The .NET framework supports and intensively uses metadata (attributes) that can represent the information you can use for this purpose. You can even load an assembly so that only the metadata part is read from the file system and put into the memory. Although you can imagine this mechanism could work, it is not used by the Shell in this way. The reason is that the roots of Visual Studio go back to the COM era. Packages are COM objects and can be created not only in managed code but also in native code using the Win32 API with any language including C++, Delphi and others. So, not surprisingly Visual Studio uses the registry to keep information about packages. Although the registry is used to store configuration information about Visual Studio settings, developers and package users are not bothered with registration issues. The new deployment mechanism (through VSIX files) implemented in Visual Studio 2010 takes away this task. When Visual Studio is started, its discovery mechanism collects the information to be entered into the registry and does all the registration process on the fly. Developers perceive they do installation only by copying files and this resembles to the great mechanism we are using with the .NET Framework. Packages are loaded into the memory on-demand. To effectively use this approach, Visual Studio stores the registration information about packages in a way so that this information could be accessed from the direction of objects to be used. For example, one kind of object that could trigger package loading is a tool window. A tool window has its own identity. When it is about to be created in order to be displayed, Visual Studio turns into the registry and addresses the corresponding key with the tool window identity. Under that key there is information stored about the identity of the package owning and implementing the tool window. Using this identity Visual Studio finds the registry key containing the package information and loads the package into the memory accordingly. The identity of packages and objects is represented by a GUID. Figure 10 shows a few registry keys that store package-related information for Visual Studio. Figure 10: Package and object information in the registry Assume that Visual Studio is about to create a service instance. It uses the Services registry key and according to the service identity it finds the package. Figure 11 illustrates the service key of the Class Designer Service. At the left side you can see the subkey named by the GUID identifying the service. At the right side you see the corresponding package ID in the default value under the service key. Figure 11: Service information in the registry Visual Studio uses the same approach to find owner packages for any other objects. There is one important thing to treat about package information in the registry. Packages can add their own user interface items to the Visual Studio IDE in form of menus and toolbars. If this UI information were created during the package initialization time, Visual Studio would need to load the package at startup time in order to present the package-dependent UI to the users. This approach would demolish the whole concept of on-demand loading. Instead, the package uses a declarative way to define the user interface that should be displayed in the IDE at startup time. This information is encapsulated into the package assembly as an embedded resource. During the registration process Visual Studio extracts this resource information and merges it with its standard menus. Each user interface element — the commands they represent — is associated with the identity (GUID) of the owner package, so the Shell knows which package to load when a command is activated. When you started the FirstLook package a new instance of Visual Studio was launched with the “Experimental Instance” text in its caption. What is that Visual Studio instance and how did it get to the computer? Visual Studio Experimental Instance is a test bed to run and debug Visual Studio packages during the development and test phase. It is not a new installation of Visual Studio. It is the same devenv.exe file you use normally. But why you need it? As it’s been mentioned earlier the package is registered — information is written into the system registry — in order it could integrate with Visual Studio. Every time you build and run a package some information goes into the registry. When you modify the package it might affect also the information in the registry. You can imagine what an unexpected confusion it can lead that you always modify the registry under the Visual Studio instance you actually use to develop and debug a package? What if your package does not work or even worse it prevents Visual Studio start correctly or even causes a crash? How would you fix the pollution of registry? That is the point when Visual Studio Experimental Instance comes into the picture. The Experimental Instance is simply another instance of Visual Studio that picks up its settings from a different place — including configuration files and registry keys. The devenv.exe keeps its configuration settings in the system registry in the CURRENT_USER hive under the Software\Microsoft\VisualStudio\10.0_Config key. When Visual Studio runs it reads out the configuration information from these keys by default. With the /rootsuffix command line parameters the root key used by devenv.exe can be changed. The VSPackage Wizard sets up the package project so that the devenv.exe uses the /rootsuffix Exp command line parameter when running or debugging the package. By doing it so devenv.exe will use the Software\Microsoft\VisualStudio\10.0Exp_Config registry key under the CURRENT_USER registry hive. So running the package with the Start Debugging (F5) or Start Without Debugging (Ctrl+F5) functions will launch the Visual Studio Experimental Instance using this registry key. The build process of a package copies the package binaries and the so-called VSIX manifest information to a well-known location under the current user’s application data folder. When the Experimental Hive starts, it discovers the package in the folder and uses the information found there to enter the package information into the registry key consumed by the Experimental Instance. Using the Experimental Instance prevents you polluting the registry of the Visual Studio instance used for the normal development process. However, making mistakes, recompilations and faulty package registrations does not prevent putting junk or leaving orphaned information in the Experimental Instance registry. Making an appropriate cleanup could be very complex and dangerous if you would do it by delving in the registry. Cleaning up packages from the Experimental Instance’s registry is quite easy! The Visual Studio SDK installs the CreateExpInstance.exe utility and adds it to the Visual Studio 2010 SDK menu items under the Tools folder with the name “Reset the Microsoft Visual Studio 2010 Experimental Instance”. Running this utility will reset the registry key belonging to the Experimental Hive to the state after the installation of VS SDK. Note: If you develop Visual Studio packages you can get into situations several times where your package which worked before seems suddenly stop working. In majority of these cases the cause is the pollution of Visual Studio registry. There can be many orphaned objects in the registry as a result of continuous modification and rebuild of packages, and that can cause problems. If you run in such a situation, reset the Experimental Instance and then build your package with the Rebuild All function of Visual Studio. This procedure often helps. You have already created a FirstLook project with the VSPackage wizard and by now you have a good overview about the basic concepts behind packages. Now we go into details and look how those concepts and ideas are reflected in the source code. When the VSPackage Wizard generated the code according to the parameters we specified on the wizard pages, it made a lot of work at the background. The wizard carried out the following activities: Table 1 summarizes the source files in the FirstLook project. Table 1: FirstLook source files generated by the VSPackage wizard The wizard added several assemblies to the class library project. Table 2 summarizes their roles. Their names start with the Microsoft.VisualStudio, this prefix is omitted in the table for the sake of clarity. Table 2: Interoperability assembly references in the project All assemblies having Interop in their names contain only proxy type definitions to access the core Visual Studio COM service interface and object types. Now, let’s see the source code of the package! The wizard added many useful comments to the generated source files. In the code extracts listed here those commands are cut out to make the listings shorter and improve the readability of the code. The indentation has also been changed a bit for the same purpose. Listing 1 shows the source code of the most important file in our project named FirstLookPackage.cs. This file implements the type representing our package: Listing 1: FirstLookPackage.cs The FirstLookPackage class becomes a working package by inheriting the behavior defined in the Package class of the Microsoft.VisualStudio.Shell namespace and by using the attributes decorating the class definition. The Package base class implements the IVsPackage interface that is required by Visual Studio in order to take an object into account as a package. This interface provides a few methods managing the lifecycle of a package and also offers methods to access package related objects like tool windows, options pages, and automation objects. One of the most important of them is the SetSite method having the following signature: Visual Studio calls this method immediately after the package has been instantiated by its default constructor. The psp parameter is an instance of System.IServiceProvider and this object is the key in keeping contact between the package and the IDE: any time the package requests a service object from its context—from the IDE—the psp instance is used at the back, however, the implementation of Package hides it from our eyes. The overridden Initialize method is called after the package has been successfully sited. This method has to do all the initialization steps that require access to services provided by the Shell or other packages. Should you move this code to the package constructor you would get a NullReferenceException because at that point all attempts to access the Shell would fail as the package is not sited yet and actually has no contact with any shell objects. The package constructor should do only inexpensive initialization that you would put normally to a constructor. Any other kind of initialization activities should be put to the overridden Initialize method. If you have some other expensive initialization activity that can be postponed, you should do them right when there’s no more time to delay them. In this case the Initialize method binds the single menu command provided by the FirstLookPackage with its event handler method called MenuItemCallback: First it calls the Initialize method of the base class — Package in this case. Omitting the call to the base implementation would prevent the package running correctly. Look at the call of GetService in Line 7! If you could select a method that is especially very important when creating Visual Studio Extensions, probably the GetService method is that one! This method is implemented by the Package class — many other Managed Package Framework objects also implement this method — in order to request service objects from the environment. GetService has one type parameter — it’s called service address — that retrieves a service object implementing the service interface specified by the address type. So, Line 7 obtains an OleMenuCommandService instance that you can use to bind event handlers to so-called command objects. In Line 11 and 12 a CommandID instance is created to address the command to be put to the Tools menu. In Line 12 a MenuCommand instance is created to assigns the MenuItemCallback method as a response for the command specified with the CommandID instance. Line 13 entitles the menu command service to handle events related to the command. The result of this short initialization code is that your package handles the event when the user clicks the Simple Message menu item in the Tools menu by executing the MenuItemCallback method. In the next chapter you will find all nitty-gritty details about the command handling concepts used in Visual Studio and there you will learn much more about the initialization approach used here. The MenuItemCallback method uses the IVsUIShell service to pop up a message box from within the IDE. By now you know that packages are registered with Visual Studio in order to support the on-demand loading mechanism and allow merging menus and toolbars into the user interface of the IDE. The information to be registered is created during the build process from attributes assigned to the package class: Packages are COM objects and so they must have a GUID uniquely identifying them. The Guid attribute is used by the .NET framework to assign this GUID value to a type. All attributes above except Guid are derived from the RegistrationAttribute class which is the abstract root class for all attributes taking a role in package registration. Table 3 describes the attributes decorating the FirstLookPackage: Table 3: Registration attributes in FirstLookPackage There are many other registration attribute beside the ones in the table above, later in the book we are going to meet a few of them. You are not constrained to apply only the registration attributes defined by the Managed Package Framework, and you can define your own attributes. Any of them including yours are handled on the same way by the build process as the predefined ones. The wizard generated a file named FirstLook.vsct. It is an XML file and the file extension refers to the acronym coming from the “Visual Studio Command Table” expression. The schema of the XML file defines the command table owned by the package. The command table is transformed into a binary format during the build process and is embedded into the package assembly as a resource. During the registration phase the ID of this resource is put into the registry. When Visual Studio starts, loads this binary resource information and merges it with the menus of the IDE including toolbars and context menus. In order to avoid menu merges every time Visual Studio is launched, the IDE uses a cache mechanism and carries out the merge process only once for each package. The next chapter will treat this mechanism and the structure of the command table in details. Listing 2 shows you the command table described in the FirstLook.vsct file. Listing 2: FirstLook.vsct In this listing all comments placed into the generated file are omitted for saving space. However, it is worth to read those comments to have a better understanding of the command table structure. The .vsct file tells a lot about how Visual Studio is architected, how it solves the coupling of functions (commands) and user interface elements. The root element of a .vsct file is the CommandTable element. As you can see all related elements are defined by the namespace. No doubt, the most important element is Commands, because this node defines commands, their initial layout and behavior. Any command in the VS IDE must belong to the IDE itself or to a package. To assign a command to the appropriate (owning) VSPackage, the package attribute of the Commands element must name the GUID of the corresponding package. Commands node can have a few child elements; each has a very specific role. Group elements define so-called command groups; each of them is a logical set of related commands that visually stand together. In the FirstLook.vsct file we have a Group element that holds only a Button. A button represents a piece of user interface element the user can interact with, in this case a menu item that can be clicked. The Parent element defines the relationship between elements, for example the Button element defined above is parented in the Group. Toolbars and menus would be poor without icons helping the user to associate a small image with the function. The Bitmap nodes allow defining the visual elements (icons) used in menus. The Symbols section is a central place in the command table file where you can define the identifiers to be used in the other parts of the .vsct file. You can use the GuidSymbol element to define the “logical container GUID” and the nested IDSymbol elements to provide (optional) identifiers within the logical container. The name and the value attribute of these elements do exactly what you expect: associate the symbol name with its value. The VSPackage Wizard put the generated GUID values into the FirstLook.vsct file but they also can be found in the Guids.cs file. The PkgCmdID.cs file defines constant values for the IDSymbol values used by package commands. These three files must be kept consistent, so if you change a GUID or a command identifier, the changes should be tracked in the other files as well; otherwise your package will not work as expected. The FirstLook project has two resource files: Resources.resx and VSPackage.resx. They both utilize the resource handling mechanism in the .NET Framework, but have different roles. Resources.resx is to store functional resources that are consumed by the objects and services of your package. For instance, you can store error messages, prompt strings, UI elements, logos, and so on in this resource file and access them programmatically through the static members of the Resources class generated by the ResXFileCodeGenerator custom tool attached to the .resx file. VSPackage.resx can store resources just like Resources.resx, but its primary role is to embed package infrastructure resources. This resource file does not use the ResXFileCodeGenerator custom tool and so does not generate any helper class to access resources. As you remember the package is decorated with the InstalledProductRegistration attribute which refers to resource identifiers 110, 112 and 400: These IDs refer to string and icon resources in the VSPackage.resx file as shown in Figure 12. Figure 12: String resources in VSPackage.resx Package resources will be extracted from the content of the VSPackage.resx file, so if you put them in the Resources.resx file, the package will not find the resource. Although you can put functional resources into VSPackage.resx file, their recommended place is the Resources.resx file. Understanding the package build process can help a lot when you are about to debug or deploy your application. In this part you’ll learn the steps of this process in details. Building a package is not simply compiling the package source code into a .NET assembly. There are other important steps to complete in order to use the package either in the Experimental Instance or in its productive environment. When the wizard generates the package it adds new build targets to the .csproj file of the corresponding class library. You can discover these entries by first unloading the project and then editing the project file. If you want to try it, first right-click on the project file in Solution Explorer and use the Unload Project function, then activate the Edit FirstLook.csproj command also with right-click. When you scroll down to the bottom of the file you can discover the following entries: The first Import entry can be found in any C# language projects to invoke the C# compiler and all the other tools (for example the resource compiler) to create the assemblies from the source project. The second Import entry is the one added by the VSPackage Wizard. The .targets file specified here contains Visual Studio SDK related build targets. If you would like to have a look at this file, you can find it in the MSBuild\Microsoft\VisualStudio\v10.0\VSSDK folder under Program Files. This book is not about MSBUILD, so you won’t find more explanations about what the build targets describe and how they internally work, instead, here are the steps of the package build process: As a result of the build process the package is available in the Experimental Instance. The next time you start the Experimental Instance, it scans the Extensions folder, uses the .pkgdef file of your package to create the appropriate registry settings, the package’s menus gets merged into the IDE, so your package is ready to run. Anyone who develops software creates programming mistakes. Majority of them can be caught during a simple or more complex code review. Many of them are obvious and you can find them in the code after observing the faulty behavior. There are a few of them which cannot be easily caught without using a debugger. Developing VSPackages is the same story. Sooner or later you find yourself debugging a package and searching for a bug. This book does not want to go into details about debugging techniques; this is definitely not its topic. However, you will learn how easy is to debug your package and what is going behind the scenes. To debug or run a package you should set it as the startup project. If your package is the only project is the solution, it is already marked so. If you have more projects in the solution, you should mark any VSPackage project as the startup project. Independently if you run a package with or without debugging, the Visual Studio Experimental Instance is started. You can check it on the project property pages on the Debug tab as Figure 13 shows it for the FirstLook project. Figure 13: Debug properties You can see that devenv.exe is selected as the startup project and it is launched with the /rootsuffix Exp command line parameters. As you have learnt before this command line starts the Experimental Instance. When you start the project with the Start Debugging (F5) function Visual Studio attaches the debugger to the Experimental Instance and so you can set breakpoints in Visual Studio. As your package running in the Experimental Instance reaches a breakpoint, you are taken back to the debug view as Figure 14 illustrates it. In this case a breakpoint was set within the Initialize method of the FirstLookPackage class. Figure 14: The debugger in action You can use the same techniques for debugging a VSPackage as for any other applications. All debugging features of Visual Studio are accessible: you can watch variables, evaluate expressions, set up conditional breakpoints, and so on. There are cases when you would like to trace your application without a debugger using trace messages. You can follow this practice with Visual Studio. The simplest one is writing to the Debug pane of the Output window. You can learn about this topic in Chapter 3. It is very comfortable to use the Experimental Instance while developing a package. The build process takes care about setting up your package to work, so you can use either the Start Debugging or Start Without Debugging commands to try what you’ve created. However, when your package is ready for distribution you should care about deployment questions. With Visual Studio versions preceding 2010 developers had to do some extra activities to prepare packages for deployment and it has a few potential pitfalls. The two main issues were that you needed to obtain a so-called Package Load Key (PLK) through a web page and take care of entering the required entries into the registry to allow Visual Studio recognize and integrate your package. ¾ Any change in package information like name, GUID, company or version required obtaining a new PLK. The Experimental Hive (this is what is now called Experimental Instance) did not check the PLK by default, so it often happened that developers faced with a wrong (missing or not renewed) PLK only after the installation kit was built and tested in the production Visual Studio environment. ¾ While the build process automatically registered the package under the Experimental Hive, developers had to create their own registration mechanism in the installation kit. It was not difficult, but because it was not automatic, forgotten registration updates could have led to annoying issues. The new deployment mechanism built into Visual Studio 2010 removes this pain and provides an easy and straightforward way for package deployment. Generally the easiest form of deploying an artifact is if you have an installation kit. The package build process as treated earlier creates this installation kit as a .vsix file containing the package binaries and some additional information. You can distribute your package by simply distributing the .vsix file to your customers. When they receive it, the Visual Studio Extension Installer utility can be started with double clicking on the .vsix file as Figure 15 illustrates. Figure 15: Installing a VSIX file When you click install the content of the VSIX file will be installed to the specified Visual Studio instance. If you create a package for a broad set of customers or for the community, you can upload the VSIX file to the Visual Studio Gallery. The new Shell of Visual Studio contains a great tool called Extension Manager that is able to search this gallery for extensions, install or remove them, and keep track of installed extensions as well as managing their updates. Figure 16 shows a screenshot of the Extension Manager browsing the extensions available on Visual Studio Gallery. Figure 16: Browsing Visual Studio Gallery with the Extension Manager You can select any of the components while browsing, and on the right pane of the window you find some more details about the highlighted item. With the More Information link you will be directed to component’s home page on the Visual Studio Gallery. If you like this component, you can get it with the Download button just as others can obtain your uploaded components. The Extension Manager is the recommended way to obtain extensions. Because it runs within a Visual Studio instance, you can use it to install a separate set of components for your development environment and for the Experimental Instance. When using the Visual Studio Extension Installer utility your components will be installed under the normal development environment by default. It was mentioned earlier that the build process packages the binaries and some other files into the VSIX file. In order the installation process could understand your .vsix installation file you need to create a so-called VSIX manifest file that is the soul of the installation kit. This file describes the metadata that is used as the set of instructions about what, where and how should be put during the setup. The VSPackage Wizard automatically creates this manifest for you and names the file as source.extension.vsixmanifest. You are probably not surprised that the manifest is an XML file with its own schema. When the FirstLook package was generated the wizard created the manifest shown in Listing 3: Listing 3: source.extension.vsixmanifest The root element of the manifest structure is the VSIX element that uses the namespace. The manifest contains three sections: When running the Visual Studio Extension Installer or using the Extension Manager, the VSIX manifest is used to determine how the VSIX package should be set up. The VSPackage content type tells the installer that the related FirstLook.pkgdef file will contain the information to be put into the registry in order to register the COM object representing a Visual Studio package. The FirstLook.pkgdef file that has been created during the build process contains the information in Listing 4: Listing 4: FirstLook.pkgdef The content of this file resembles to the content of a .reg file that can be exported from or imported to the Windows registry. However, the .pkgdef file contains a few tokens closed between dollar signs. The values of these tokens are passed by the context of the .pkgdef file to the entity processing the file content. For example, if you use the Visual Studio Extension Installer utility to process the .vsix file, the utility extracts the payload into the Microsoft\VisualStudio\10.0\Extensions subfolder under the LocalAppData folder of your user profile. The files are put not directly into the Extension folder but into the subfolder calculated from the Author, Name and Version elements of the manifest’s Identity section. In this case the payload includes the FirstLook.pkgdef and FirstLook.dll files beside a few others. When Visual Studio starts, it recognizes that a new .pkgdef file is under the Extensions folder and processes it. It substitutes the $RootKey$ token with the corresponding registry root of Visual Studio 2010, $Windir$ with the current Windows installation folder, $PackageFolder$ with the encapsulating folder of the .pkgdef file. After Visual Studio startup finishes, all information required to find and load the package is entered into the registry. When the first action demanding the package is executed, Visual Studio can pick up and initialize it. A VSPackage is the principal architectural unit of Visual Studio, a container for extensibility objects. It is also a unit from deployment, security and licensing aspects. Packages are not loaded immediately as Visual Studio starts, they are read into the memory on-demand at the first time when any of their objects or services is about to be used. The process of integrating a package physically into the Shell is called siting. While the package is not sited, its functions cannot be used from outside. As soon as the package gets sited, it is ready to finish its initialization and be fully functional. Siting happens when Visual Studio loads the package. Visual Studio keeps track of packages installed through registration, and package information is stored in the system registry under a specific Visual Studio key. With command line parameters this registration key can be suffixed in order to use another configuration set — even with separate package registration parameters. The Visual Studio SDK sets up the Visual Studio Experimental Instance which is a test bed to run and debug Visual Studio packages during the development and test phases. The Experimental Instance is not a separate Visual Studio installation, it uses the same devenv.exe file but with different configuration settings. VSPackages use a build process that contains some additional steps in order to prepare the packages for debugging or deployment. The easiest way to create a package is running the VSPackage Wizard which sets up the build process appropriately. During this process package infrastructure resources — like to so-called command table — are embedded into the package assembly, the package installation kit is created and installed under the Experimental Instance. During the development phase packages run inside the process space of the Experimental Instance and the same debug techniques can be used for tracing and troubleshooting as for any other .NET applications. VSPackage deployment in Visual Studio 2010 became really simple related to the preceding versions. The package installation kit is represented by a VSIX file that can be distributed directly to the users of your package or — and this way opens up brand new opportunities — uploaded to the Visual Studio Gallery. The Extension Manager built into the IDE can be used to browse, install, and remove VSPackages (and many other kinds of extensions) as well as to keep track of them. Have you thought about self-publishing your book on lulu.com? I would buy a copy especially if it's color printed! Great article, but where are the other three articles about it? I couldn't find any evidence of chapter 2 here Hey Sam, the subsequent chapters are coming soon... Right now I'm really busy with the VS 2010 launch, so you can expect the next chapters after April 15. Man, I would definitely buy thisbook. All other books on the subject don't even scratch the surface. Looking forward to the next chapters. Már több, mint két éve dolgozom együtt a Visual Studio Extensibility redmondi és cambridge-i csapatával I would definitely buy your book. Thanks for publishing it here! I am MSFT and would buy this book! There's nothing like that out there. Have you reached MS Press? Also, self-publishing seems like a better idea. I plan to publish the book in August 2010, now I'm working on the last (7th) chapter. I would love to buy your book. I am from India and please let me know when it will be available in India. I will buy your book too. It is a shame that you cant publish your book. There are absolutely no books about extending VS (I know only 2 books, but one is obsolete, second is not very good). Good luck. Don't give up! Denis P.S. Maybe you shoul ask at StackOverflow where you can publish your book? :) Please create a tag for your book. It will be easier to monitor what is the state of the book. Pingback from VS 2010 Package Development ??? Chapter 1:… « ctthang's Blog I am currently writing a VS2010 extension and using some information from your site, thank you very much. However one question I still cannot find answer to: what is the best way to supply configuration information to the extension? And I mean non only something very specific, like a connection string, but a general config info which can be found in .config file. Right now I had to use ConfigurationManager.OpenMappedExeConfiguration to access custom config file. And since I am using Unity for DI and IoC, I had to do some custom assembly loading via AppDomain.CurrentDomain.AssemblyResolve. Seems a bit like overkill to me, no? Pingback from James Wiseman » VS 2010 Package Development The code running steps as follows: •C# invoke unmanaged C++ through managed C++; •C++ site will create a new child thread and trigger C# OnChange method. •C# site will tried to getService: IVsNavigationTool navTool = provider.GetService<SVsClassView, IVsNavigationTool>(); serviceProvider come from SMProject which implement System.IServiceProvider interface, and SMProject have member named SMPackage which inherit from Microsoft.VisualStudio.Shell.Package, SMProject will invoke SMPackage.GetService method to override IServiceProvider interface GetService method. C# site in new created child Thread not main thread . If on C++ site , we can't create a new thread , just the GetService segment code runing in main thread , it is works ok. And i tried to retrieve SVsClassView & IVsNavigationTool, c# site code running in child thread, get Service will failed ... Could you please give me any help?? Any updates? Very informative post.Much thanks again. Much obliged. Thanks for the article.Thanks Again. Will read on... This is one awesome post.Thanks Again. Keep writing. Enjoyed every bit of your article.Really looking forward to read more. Awesome. Thanks a lot for the article.Really thank you! Much obliged. I really enjoy the blog post.Really thank you! Fantastic. PSSFxg Thanks a lot for the article post.Really thank you! Keep writing. Thanks for sharing, this is a fantastic article post.Really looking forward to read more. Cool. Thank you for your article post.Really thank you! Cool. wow, awesome blog.Really looking forward to read more. Awesome. Major thankies for the blog article.Thanks Again. Great. Great, thanks for sharing this article post.Thanks Again. Fantastic. Very neat article post. Really Cool. Enjoyed every bit of your blog post.Much thanks again. Awesome. I am so grateful for your post.Really looking forward to read more. Keep writing. I really like and appreciate your article.Really thank you! I really like and appreciate your blog.Really looking forward to read more. Great. wow, awesome post.Much thanks again. Great. Very good article. Fantastic. I appreciate you sharing this blog article.Really looking forward to read more. Much obliged. Great, thanks for sharing this blog article. Keep writing. Im thankful for the blog article.Thanks Again. Much obliged. Muchos Gracias for your article post.Really looking forward to read more. Great. Appreciate you sharing, great article post.Really thank you! Awesome. I am so grateful for your blog article.Thanks Again. Really Great. Major thankies for the article post.Thanks Again. Will read on... Great, thanks for sharing this blog article.Really thank you! Great. Thank you ever so for you article post.Thanks Again. Cool. Very good blog article. Keep writing. Major thankies for the article post.Much thanks again. Cool. Hey, thanks for the blog post.Really thank you! Great. Looking forward to reading more. Great blog post.Thanks Again. Really Great. Thanks again for the article post.Much thanks again. Want more. Thanks so much for the article. Really Cool. Very good blog article.Really thank you! wow, awesome post. I am so grateful for your blog.Thanks Again. Will read on... Great, thanks for sharing this blog article.Much thanks again. Will read on... I really liked your post.Really looking forward to read more. Much obliged. Im grateful for the blog article.Really thank you! Really Great. I value the blog.Much thanks again. Im grateful for the blog.Really looking forward to read more. Awesome. A big thank you for your blog.Thanks Again. Great. I really like and appreciate your blog.Really looking forward to read more. Really Cool. Im obliged for the blog post.Really looking forward to read more. Really Cool. Awesome blog post. Will read on... Appreciate you sharing, great article. Keep writing. Appreciate you sharing, great article post. Will read on... Looking forward to reading more. Great post. Will read on... I truly appreciate this blog.Thanks Again. Fantastic. Thanks-a-mundo for the article post.Thanks Again. Great. Major thanks for the article post.Much thanks again. Want more. Really appreciate you sharing this article.Thanks Again. Muchos Gracias for your article post.Much thanks again. Fantastic. wow, awesome article post.Really looking forward to read more. Will read on... I value the blog. Awesome. Enjoyed every bit of your blog. A big thank you for your article post.Much thanks again. Really Cool. I am so grateful for your article.Much thanks again. Keep writing. Im grateful for the article.Really thank you! Awesome. Muchos Gracias for your blog.Much thanks again. Keep writing. I truly appreciate this blog post.Really looking forward to read more. Want more. Thanks for sharing, this is a fantastic article.Much thanks again. Keep writing. Really enjoyed this blog article. Will read on... Thanks-a-mundo for the blog article.Really thank you! Really Great. Thanks for the blog post. Really Great. VS 2010 Package Development – Chapter 1: Visual Studio Packages - DiveDeeper's blog - Dotneteers.net Pingback from Friday links 88 « A Programmer with Microsoft tools Pingback from Will installing Visual Studio 2010 side by side with VS2008 means problems? | Zibell Pingback from VS 2005 fails to build aspx files with includes | Zenisek Pingback from Visual Studio Class Designer only lets me add comments, why? | Yearling Answers Pingback from Visual Studio 2005 : Is there an easy way to indent correctly in an ASPX file? | Yow Answers Pingback from having to rebuild in VS 2010 after every modification | Yanke Answers Pingback from Adding existing .cs files to VS 2008 without VS 2008 copying the files to project | Yoder Answers Pingback from In Visual Studio (2008) is there a way to have a custom dependent file on another custom file? | Youngberg Answers Pingback from Zahra Answers Pingback from visual basic 2010 how to program - computer tutorials
http://dotneteers.net/blogs/divedeeper/archive/2010/03/02/VisualStudioPackages.aspx
CC-MAIN-2015-32
refinedweb
8,942
55.64
fame_encode_frame - encode a single frame (DEPRECATED) #include <fame.h> fame_frame_statistics_t *fame_encode_frame(fame_context_t *context, fame_yuv_t *yuv, unsigned char *shape); fame_encode_frame() encodes a single uncompressed frame from yuv to a binary stream in buffer, according to initialization parameters and optionnaly previously encoded frames.. fame_encode_frame returns a pointer to frame encoding statistics. These statistics include frame number, coding mode (I,P or B) for the frame, expected bits, actual bits used, spatial activity, and quantizer scale. The number of bytes written to the buffer is actual_bits/8. MMX arithmetic performs bad at quality > 95%. Encoding of B frames is not yet supported. Only works when slices_per_frame has been set to 1 in fame_init Usage of this function is deprecated. Use fame_start_frame , fame_encode_slice , and fame_end_frame instead. fame_open(3), fame_init(3), fame_close(3), fame_encode_slice(3)
http://huge-man-linux.net/man3/fame_encode_frame.html
CC-MAIN-2018-26
refinedweb
130
50.63
. corp1 exchange 2010 will host the entire solution? all users should be migrated to the corp1 server? If all users will have @corp.com as primary e-mail address. on phase 1 you should configure all 3 exchange environments (corp1, 2 and 3) with the corp.com domain as non authoritative. The MX records should point to the corp1 server. and you should configure the shared namespace on every exchange environment. In this scenario all mails to corp.com will go into the corp1 server, and if the recipient is not there it will send it to another mail server (corp2 or corp3). This is done for coexistence. Is that what you intend to get here? Because you have 3 environments you need to test and see if you dont get loops in the mail forwarding process. Also the current corp.com 15-25 users must be set on the corp1.com mail server. it's the best approach as they are very few users and without exchange you wont get coexistence. Currently each company; aside from the smallest that we will build mailboxes for, is capable of operating from their existing exchange server, and we assumed that for simplicity, we would keep that intact during phase 1. Ownership wants us to utilize one common domain address starting day 1 (phase 1). We are basically absorbing the domain address from the smallest company; which is not currently operating in an exchange environment. So, configure each exchange environment with @corp.com domain as non-authoritative, and point the MX records for @corp.com domain to @corp1.com exchange server, correct? Then configure the shared namespace on every exchange environment. That's exactly what i was thinking in terms of mail flow, corp1, then corp2, then corp3. This is exactly what we need initially. We have couple extra domain addresses we are going to setup for just 1 user at each location, and see how it works. That will be our test. Sound like a good idea? Thanks so much! During phase 2, all users will be migrated to the corp1 server, yes. If your organization has a need to mass-create AD user accounts, watch this video to see how its done without the need for scripting or other unnecessary complexities. yes you can test with an aditional domain. point the mx records to the corp1.com exchange, and then test the shared namespace configurations for that namespace. dont forget to test it with external and internal e-mails. Mails should flow when sent internally as well. It sounds like a very good idea. if you need some extra help or have some extra questions let me know. I've gone through many different mergers like this and wanted to share my thoughts on this. If the AD domains are going to remain separate during the migration, you will run into autodiscover issues with Exchange 2007/2010 and Outlook 2007/2010. Outlook will automatically search for autodiscover.corp.com due to the primary SMTP address on that user, when in reality his email account resides on corp1.com's exchange server. There are work arounds for that, and if the ultimate goal is to have one shared AD and Exchange environment, this would be OK for the short term. I'm currently working with a client who has two business units that are completely seperate AD domains trying to share the same SMTP namespace long term, and it's not pretty with over 1000 users. If in the long term the AD structure and companies will remain separate, I would look at using a subdomain so corp.domain.com, company1.domain.com, company2.domain.com with MX records for each subdomain pointing to the respective servers. BTW autodiscover problems can cause issues like OOF and the download of the offline address book errors, and also prevent outlook automatica configuration from taking place. But you are right, autodiscover is tightly integrated to the exchange web services, so calendaring, out of office etc. will break. The next step is configuring the shared namespace on each mail server, correct? Do we need to wait until we have established VPN connectivity with each other? No server can have the domain as authoritative. if you have it as authoritative the mail wont be sent to another mail server if the mail address doesnt exists there. What about the other mail servers then, do they just need to configure the SMTP address and domain on their server? They are using Exchange 2003, so they don't have a receive connector to configure. They just need to allow from my public IP correct? Experts Exchange Solution brought to you by Facing a tech roadblock? Get the help and guidance you need from experienced professionals who care. Ask your question anytime, anywhere, with no hassle.Start your 7-day free trial
https://www.experts-exchange.com/questions/27670018/Planning-Corporate-Merger-of-Exchange.html
CC-MAIN-2018-30
refinedweb
812
67.25
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo You can subscribe to this list here. Showing 4 results of 4 Heiya, I would like to say a special thank you to all of you who got involved in improving, extending and translating the contents of the homepage of SMW as well as guarding it. It is wonderful to have you around. Another big thank you goes to all the coders who make SWM and related extensions possible. Your effort is greatly appreciated too. My last thank you goes to all people on the mailing list for providing tips and inspiration. Happy New Year to all of you out there. I am looking forward to seeing you around. Cheers [[kgh]] I recently added a custom namespace (called Observation) and have been having issues with getting the properties to show up for pages in that space. Doing some searching around, I found the problem address here: I first tried adding the custom namespace in SMW_Settings.php, but then changed my approach to doing it in the LocalSettings.php where I added a line: $smwgNamespacesWithSemanticLinks[NS_OBSERVATION] = true; but it doesn't seem to make any difference. When I browse properties on the page, it still shows no properties for pages in the Observation namespace. I'm not sure if I'm doing it wrong, or if there is something else going on. There is a followup note on the troubleshooting tip that says: "If this fails, check whether you have restricted queries to certain namespaces using $smwgNamespacesWithSemanticLinks." Since that's the same variable name as before, I was a little confused as to what I should be checking. Thanks, Matt Goff Hey, This is indeed a bug. In older versions of SMW the categories got listed after the properties, and they did not have a link to SearchByProperty. My guess is that something broke the special handling of categories. Cheers -- Jeroen De Dauw Don't panic. Don't be evil. --. Cheers and to all Happy New Year, MWJames
http://sourceforge.net/p/semediawiki/mailman/semediawiki-user/?viewmonth=201112&viewday=31
CC-MAIN-2015-22
refinedweb
347
73.47
Minimum cost path in matrix : Dynamic programming Given a 2D matrix, Cost[][], where Cost[i][j] represent cost of visiting cell (i,j), find minimum cost path to reach cell (n,m), where any cell can be reach from it’s left (by moving one step right) or from top (by moving one step down). For example, to reach (3,3) would be 16 by following path = ((0,0), (1,0), (2,0), (3,0), (3,1), (3,2), (3,3)) Minimum cost path : line of thoughts This problem is similar to Finding possible paths in grid. As mentioned there, grid problem reduces to smaller sub-problems once choice at the cell is made, but here move will be in reverse direction. To find minimum cost at cell (i,j), first find the minimum cost to the cell (i-1, j) and cell (i, j-1). Take the minimum of those two and add the cost of the cell (i,j) which will be the minimum cost to reach (i,j). Solution(n) = Cost of choice + Solution(n-1). CostToMove(i,j) = Min(CostToMove(i-1,j), CostToMove (i, j-1)) + Cost(i,j) Above equation can be implemented in recursive function. What should be terminating condition for recursion? It’s obvious, at starting cell i.e i=0 and j=0. findCost(i,j, cost) = cost(i,j) + Min( findCost(i-1,j, cost), findCost(i,j-1, cost)) Minimum cost path in matrix : Recursive implementation: #include <stdio.h> #include <stdlib.h> int min( int a, int b){ if(a > b) return b; return a; } int findMinimumCostPath(int Cost[3][3], int i, int j){ if( i == 0 && j == 0 ) return Cost[0][0]; if( i == 0 ) return findMinimumCostPath(Cost,i, j-1) + Cost[i][j]; if( j == 0) return findMinimumCostPath(Cost,i-1, j) + Cost[i][j]; return min(findMinimumCostPath(Cost,i-1, j), findMinimumCostPath(Cost,i, j-1) + Cost[i][j]); } int main() { int M,N; M = N = 3; int Cost[3][3] = { 1,3,4, 5,3,2, 3,4,5 }; printf("Minimum cost of path : %d" , findMinimumCostPath(Cost, M-1, N-1)); } Another way to implement the same function in Java private int minPathSumUtilRec(int[][] grid, int i, int j){ int m = grid.length; int n = grid[0].length; if(i>m-1 || j > n-1 ){ return Integer.MAX_VALUE; } if(i == m-1 && j == n-1 ) return grid[i][j]; return Math.min(minPathSumUtilRec(grid, i+1, j), minPathSumUtilRec(grid, i, j+1)) + grid[i][j]; } This solution exceeds the time limit on leet code submission for Minimum cost path in the matrix, because we are following each possible path. The number of paths in the matrix with given conditions is exponential and hence, the complexity of the recursive method is exponential too. Can we do better than that? We saw that the problem reduces to subproblem at every cell and optimal solution to a bigger problem depends on the optimal solution of subproblem, which is known as optimal sub-structure. This is one of the conditions to apply dynamic programming. There are several subproblems which are solved again and again in recursive solution i.e there are overlapping subproblems. How can we avoid re-solving subproblems? First immediate thing we can do it to store the result of already solved problems and use them when required. Below implementation uses a two-dimensional table to store the minimum cost to reach cell(i,j) and uses it when we have to solve the problem for cell (i,j). //Top down approach private int minPathSumUtilTopDown(int[][] grid, int i, int j, int[][] table){ int m = grid.length; int n = grid[0].length; if(i>m-1 || j > n-1 ){ return Integer.MAX_VALUE; } if(i == m-1 && j == n-1 ) return grid[i][j]; //If solution is already present, return it if(table[i][j] < Integer.MAX_VALUE) return table[i][j]; table[i][j] = Math.min(minPathSumUtilTopDown(grid, i+1, j, table), minPathSumUtilTopDown(grid, i, j+1, table)) + grid[i][j]; return table[i][j]; } Above solution falls within time limit when tested on leetcode. Another approach is to use the bottom-up approach when we start with a solution to the smaller problem and try to find a solution to the bigger problem. Create a 2-dimensional array to save solutions of subproblem. Each cell M[i][j] will store the minimum cost path until cell (i,j). Top most row in array is peculiar as any cell in that row can be reached only from left cell. MinCost(0,j) = MinCost(0,j-1) + Cost[0][j] Similarly, cell in leftmost column can only be reached from top cell. MinCost(i,0) = MinCost(i-1,0) + Cost[i][0] For all other cells, MinCost(i,j) = Min( MinCost(i-1),j), MinCost(i, j-1)) + Cost[i][j] Since, the solution of (i-1, j) and (i, j-1) is prerequisite for the solution of (i,j), this filling method is called bottom up. Minimum cost path in matrix : Dynamic programming implementation #include <stdio.h> #include <stdlib.h> int min( int a, int b){ if(a > b) return b; return a; } int findMinimumCostPath(int Cost[3][3], int M, int N){ //declare the minCost matrix int MinCost[M][N]; MinCost[0][0] = Cost[0][0]; // initialize first row of MinCost matrix for (int i=1; i<N; i++){ MinCost[0][i] = MinCost[0][i-1] + Cost[0][i]; } for (int i=1; i<M; i++){ MinCost[i][0] = MinCost[i-1][0] + Cost[i][0]; } for (int i=1;i<M; i++){ for (int j=1; j<N; j++){ MinCost[i][j] = min(MinCost[i-1][j], MinCost[i][j-1]) + Cost[i][j]; } } return MinCost[M-1][N-1]; } int main() { int M,N; M = N = 3; int Cost[3][3] = { 1,3,4, 5,3,2, 3,4,5 }; printf("Minimum cost of path : %d" , findMinimumCostPath(Cost, M, N)); } Complexity of dynamic programming approach to find minimum cost path in grid is O(n2) with additional space complexity of O(n2). Extend this problem by actually finding a path that leads to the destination. The solution is simple, start from destination cell, as that will be part of final path anyways, start moving either to a cell to left or top of the cell, whichever is less till you reach origin cell. One more variant of this problem is adding flexibility that one can move from left to right, top to down and diagonally as well. Nothing changes in solution except take a minimum of three cells instead of two (left, top and diagonal). Please share if there is something wrong or missing. If you want to contribute, please write to us
https://algorithmsandme.com/tag/min-cost-path-matrix/
CC-MAIN-2019-30
refinedweb
1,129
60.75
Here's the C++ solution to Exercise: Hotter or Colder. #include <iostream>#include <stdlib.h>#include <time.h>using namespace std;int main () {// generate secret number between 1 and 100:srand(time(0));int secret_number = rand() % 100 + 1;cout << "Secret number chosen (between 1 and 100)." << endl;// use while loop and "true" to create infinite loopint guess;cout << "Start guessing! ... " << endl;while (true) {cin >> guess;// compare secret_number and guessif (guess == secret_number) {cout << "Congratulations, you did it!" << endl;break; // exit loop when guess is equal to secret number}else if (guess < secret_number) {cout << "You're too cold!" << endl;}else {cout << "You're too hot!" << endl;}}cout << "Secret number is " << secret_number << endl;return 0;} To try out this code, follow this link. Sample interaction 1: Secret number chosen (between 1 and 100).Start guessing! ...70You're too cold!85You're too cold!95You're too hot!90You're too cold!93Congratulations, you did it!Secret number is 93 Sample interaction 2: Secret number chosen (between 1 and 100).Start guessing! ...40You're too cold!80You're too hot!60You're too hot!50You're too cold!55You're too hot!53You're too hot!52Congratulations, you did it!Secret number is 52
https://www.commonlounge.com/discussion/6f36097199db40dab9b4f71b76b0f8fa
CC-MAIN-2021-17
refinedweb
200
56.21
Generating fake whiskey reviews with GPT-2 Posted on Sun 23 June 2019. OpenAI recently made headlines with their blog post Better Language Models and Their Implications, in which they described their latest general language model, dubbed GPT-2. In the post, they claim to be practicing responsible (non)disclosure by not releasing the full pre-trained model due to concerns that it is so good, it could be used for fake news generation or social media manipulation. However, they did release smaller versions of the model which are nonetheless still quite performant. You can play with one of them at talktotransformer.com. Obviously this piqued my interest. I've dabbled in purposely bad language generation, but this time I was curious about doing something vaguely believable. I decided to try to sound more impressive in my whiskey-ing by generating my very own fake whiskey reviews, fine-tuning the small version of GPT-2 on existing reviews. The data The first site on Google when I searched for "whiskey reviews" was whiskyadvocate.com, and it seemed perfect for my needs. Short, consistently formatted reviews, and all on one long page making them easy to scrape. The only snag is that you have to click a "See More" button over and over to get 6 more reviews to render each time. I know you can use sophisticated scraping tools to deal with this situation, but I decided that that would be overkill for this project. Luckily I've been JavaScripting a little bit recently so it occurred to me that you can pop open the browser console and run: > var btn = document.getElementById("loadMore") > for (var i = 0; i < 100; i++) {btn.click();}; After glitching out for a bit, the whole set of reviews is visible on the page. One ctrl+S and we have the reviews ready for parsing with beautifulsoup. Looking at the html, we see that each review is in its own <article> block, so they are easy enough to extract: from bs4 import BeautifulSoup with open("data/raw/whisky_advocate.html") as h: raw_data = h.read() soup = BeautifulSoup(raw_data) articles = soup.findAll("article") Checking out the text structure of one "article", that all we really need to do to clean up is drop empty lines and replace non-breaking spaces ( /xa0) with normal ones. reviews_clean = [] for a in articles: a_clean = [ line.replace("\xa0", " ") # breaking spaces for line in a.text.split("\n") if line is not '' # drop empty lines ] # rejoin into one formatted snippet reviews_clean.append("\n".join(a_clean)) This gives us nice clean review snippets in the following format: Real Review (input data) 95 points Uncle Nearest 1820 Single Barrel 11 year old Tennessee Whiskey (Barrel US-2), 55.1% Bourbon/Tennessee | $119. Looks (and sounds) good to me! The last step is to save our reviews back to a text file for processing by GPT-2. The model distinguishes between separate pieces of text using a special token, <|endoftext|>. So we join all the reviews back together, separated by this token, and save it to a text file: with open("data/clean/reviews.txt", "w") as h: output = "\n<|endoftext|>\n".join(reviews_clean) h.write(output+"\n") Now we're ready to train! After training the model, it turned out I was only using the Summer 2019 Buying Guide and not the full set of reviews, so there is definitely room for improvement here. Fine-tuning GPT-2 Surprisingly, this is actually the easy part, thanks to work by nshepperd. His/her fork of GPT-2 contains a super easy train script for finetuning GPT-2, with lots of options. To get started with it, we clone the repo and use the provided script to download the pre-trained model. This model has already been trained on a set of 8 million web pages, so it already has a pretty big and diverse vocabulary. We'll download the smaller model, called 117M: $ git clone $ cd gpt-2 $ pip3 install -r requirements.txt $ python3 download_model.py 117M This will save the pre-trained model in the models directory. The set of reviews isn't very big (the whole site contains about 4,000, but I only grabbed 125 during this experiment due to the mistake mentioned above), so this will be less like fine-tuning and more like clobbering the model. When we are done, it will know nothing but whiskey reviews. For that reason, we output samples frequently during training, with the expectation that the sweet-spot for interesting new reviews will come somewhere before the model is totally overfit. With our processed reviews in data/clean/reviews.txt, we run the training script on our data, setting the PYTHONPATH variable as indicated in the readme. We output 30 samples every 25 epochs, with a length of 250 words (long enough that a full review should fit in there). We also checkpoint the model at that point so that we can return to it to generate more samples later if we want: $ PYTHONPATH=gpt-2/src python3 gpt-2/train.py \ --dataset data/clean/reviews.txt \ --save_every 25 --sample_every 25 \ --sample_length 250 --sample_num 30 The script will create a samples directory and save the samples there, and also a checkpoints directory. It runs until you kill it with ctrl+C, so feel free to go get a coffee. Fake whiskey reviews The first thing to notice about the output is that the generated text is usually really coherent at first glance. After 25 epochs, the model isn't really talking about whiskey yet, so we mostly get a peek into the kind of text generation that it's capable of: GPT-2 SAMPLE: 25 Epochs We are pleased to present the third of over 1,000 stories published by the New England Journal of Medicine, from all continents. The authors have collected information on over 50,000 individuals in the United States who were examined at the National Heart, Lung and Blood Institute (NHBI) and included information on symptoms and treatments. The research was focused on coronary artery bypass grafting, cardiovascular dysfunction, cardiovascular disease, and coronary artery disease. For complete details and results, visit. The URL doesn't work, I checked. This is generated text. After 50 epochs, we're already seeing some promising output, though there's definitely work to do: GPT-2 SAMPLE: 50 Epochs The packaging is as it should be: it looks fine (with more sugar than water), but the nose gets burnt out, which can be remedied with some honey. There were a couple of more hints of red wine and chocolate, but in the nose this gives way to a more complex whisky with plenty of chocolate and dried fruit. On the palate, it's just perfect. The nose is quite dry and mouthful, but more like a whisky where, when you pour it into your mouth and add it, you find the sweetness of cherry-chardonnay, fresh orange peel, ginger, and vanilla peel. On the palate, it develops chocolate, vanilla syrup, and nutmeg, along with a light and pleasant hint of dried fruit, then is broken up by chocolate, cherry, and pepper notes. Very well-designed, although this was not finished in time. 1/5 At 100 epochs, we have a review that at first glance seems reasonable: GPT-2 SAMPLE: 100 Epochs 85 points Black Cherry Ale, 90% Blended Malt Whisky | $60 The nose was very light and crisp, with chocolate, banana candy flavor, honey, espresso, toffee, and herbal aromas. The palate was warm, with vanilla, cinnamon, and chocolate sweetness. The finish was medium in length, with plenty of vanilla and chocolate, and the finish was strong and well-balanced: chocolate, caramel, milk chocolate, milk chocolate, caramel, and sweet almond. Note that the model is even mimicking our formatting. This one ticks all the boxes of a review, but check out the tasting notes: vanilla, cinnamon, chocolate, caramel, sweet almond. There's definitely a theme: these are baking ingredients! Many food words are used in tasting notes. The model is learning this, but the text it was trained on seems to have included many recipes, as all sorts of baking instructions are also being generated. This was another sample from the same batch: GPT-2 SAMPLE: 100 Epochs 1 pint brown sugar 1 egg, beaten 1 teaspoon kosher salt 3 ounces butter, softened 3 egg whites, beaten 12 drops honey, at room temperature 2 tablespoons brown sugar/butter 3 tablespoons apple cider vinegar, at room temperature 2 tablespoons powdered pepper 1/2 teaspoon pepper 1 tablespoon black pepper Directions - In a medium bowl, beat butter, 1/2 cup apple cider vinegar, and vanilla until smooth, 2-4 hours. Add in egg whites, and gradually add more egg whites, if needed. (You may need to add more milk and so forth.) - Pour milk/soy mixture through an egg grinder; if it is hard, you might have to do it some more. If there isn't quite so much water, you're going to have to let the milk drain on the stove top. - Add water once, once or twice, until all the water is dissolved, about 3-4 hours. Add pepper, egg whites, vanilla, and water. - Whisk in egg shells and water; the water gets very sticky; add salt "Pour milk/soy mixture through an egg grinder; if it is hard, you might have to do it some more" isn't exactly sensible, but it is definitely coherent and the whole thing reads like a recipe. Just don't try it at home. After 200 epochs, the model is producing whiskey reviews that I (as a layman) can't distinguish from the real thing: GPT-2 SAMPLE: 200 Epochs 91 points Luxembourg 60 year old (Batch 2), 59% Single Malt Scotch | $235 Quite simply the most beautiful whisky in Europe at the time. Beautifully balanced spices, herbal oloroso fruits, soft earthy aromas of honey, licorice, lavender, lavender oil, and a hint of espresso. The palate is rich and richly spiced, with dark tannins, orange candy, green bananas, almond milk, espresso, brown sugar, and cream cheese louis. Floral notes and honeycomb in the finish. (12,800 bottles for U.S.) However, it is also revealing one of GPT-2's flaws: it has a tendency to abruptly switch context, which can be quite comical. From the same batch: GPT-2 SAMPLE: 200 Epochs A group of scientists led by Professor Jonny Wilkinson has found that the planet Saturn has the highest amount of formaldehyde on Earth. The concentration falls short, with only slightly more than 100 parts per million of the general background. It is 80 parts per million in the form of sisphenol A (s<3 parts per million<30 parts per million): this is also found in less-drying water like water and soda water. The taste is more palatable, with more of both honey and orange, then chocolate hazelnut, vanilla pod, chocolate chip cookies, caramel, hazelnut, coconut, and a sprinkling of orange oil. There are also more dark chocolate aromas, which are sweeter and with some spice and nutmeg addition. One might expect there to be more chocolate, but, on closer inspection, it's more fruitcake and chocolate barristers will ultimately find it all. and: GPT-2 SAMPLE: 200 Epochs 90 points Lemon Keg, 57% Japanese | $45 The standard edition features a black leather jacket with leather cuffs and white floral print buttons. The back has a floral feel, but is made from milk chocolate; the outside shell lacks much citrus or sweetness, which ultimately leads to a light taupe hue and ample mouth-drawing. This is also what I meant by clobbering the model: it is forgetting how to remain in other contexts. No matter where it starts, it is beginning to see everything as a whiskey review. By this point most (but not all) of the samples are also formatted like our whiskey reviews. This distinctive formatting proved to be a very useful tool in gauging to what extent we were overfitting the model. By about 250 epochs, most of the output is pretty believable, with the only consistent flaw being repeated flavors in the tasting notes. All in all, I was surprised at both how easy the open source community made this experiment, and also how quickly and accurately the model was able to start replicating reasonable-sounding reviews (training only took about a minute per epoch on my old-ish MacBook Pro). This post is getting long enough, but I'll end with some highlights and bloopers: Highlights Consistency in country of origin (Canadian) and location of distillery (Banff): GPT-2 SAMPLE 87 points Millstone 92 Rye, 46% Canadian | $99 The second installment in our four-part rye limited-edition comparison, this proof-bodied reissue from the Millstone Works facility in Banff presents both dry and wet grasses with great maturity. The nose shifts to vanilla, honey, cinnamon, orange oil, licorice, and crystalized halvah. The palate is light and delicate, with nuttiness, cloves, green apple, bitter chocolate, pecan pie, caramel, and bittersweet chocolate. The finish yields leather, honey, roasted nuts, orange, dried citrus, subtle oak, and dried coriander, all evocative of a wartime log. Editors' Choice Sweet and smooth: GPT-2 SAMPLE 92 points Timersley's Barrel-Aged Bourbon, 44% Blended Scotch Whiskey | $30 Sweet potato, strawberry jam, milk chocolate, candies, and brown sugar flavors on the nose, along with smoky bowers & sounders, all wrapped tightly in a viscous, creamy feel. The palate is smooth and gentle, delivering gentle, medium-bodied whiskeys with their own distinctive flavors. A light, nimble palate serves enough sweetness without too much rancor that it becomes sultry, with hot chocolate, strawberry jam, milk chocolate, plum, and cassis. Bloopers My absolute favorite: GPT-2 SAMPLE A smallpox-infected grain, this first-generation proof has a floral, sweet nose laced with blueberry jam, cherry Cask Whisky, toffee, cedar, and tangy oak. It is fragrant and flavorful beyond any description—like a symphony show from before the age of antibiotics. Whiskey, cigars, or promiscuous McDonald's enthusiasts? GPT-2 SAMPLE Twip | 18k Batch | Filch | | $75 Makes a delicious mixing bowl of Chocolate Fudge, Big Mac, Ham, Sweet and Sour Apple Cider, Peanuts, and Sour Cream, with a fondness of making out with other people. The caramel is so strong, in fact, that I could only inhale the delicious caramel trails in my cigar box. The only hint I have of burnt sugar and caramel are lost along the way. Sweet and fruity, at least for the $75 price point. Confusion between whiskey and... pop metal? GPT-2 SAMPLE the Chinese pop-metal king Ping'i Minbar began producing small batches for his mouth. Now in its 40s, with a nose of chocolate flan, hedonian, and bubble gum, it ends on sweet and savory rye, with vanilla, orange, and clove. It’s a reprise of the year loaded with floral notes, tangerine, cinnamon, black pepper, and orange soda pop. Floral, earthy, and smoky throughout, this period’s reasonably priced—$120—but well-balanced to give (or withhold) indulgence. Peanut butter jelly time: GPT-2 SAMPLE 61 points Chicken Pot Pie with Peanuts, Hot Chocolate, and More Peanuts, 50/50 Single Malt Scotch Whisky | $35 Measurable, with portmoking salt water, toasted nuts, caramel, and spices in a thin blanket of peanut butter and jelly, but the panning process was not without its pique. The peanuts and the generous whiskily cooked nuts create an intriguing, if somewhat overcomplicated, vivacious ode to peanut butter and jelly, with sweet peanut butter and jelly, along the way. What would this even taste like? GPT-2 SAMPLE Bourbon/Tennessee | $85 Tastes like a young William Shatner novel: rich with flavor, packed with fruit, honey, orange, maple syrup, and aromas of strawberry, licorice, and chocolate, balanced against a lingering tannin of the herbaceous nectar source. The palate is luxurious, richly notes a childhood home. Note: I have kept the generated reviews true to their original formatting and content, with the exception of removing the "Reviewed By" line that WhiskyAdvocate have at the end of their reviews. The model very quickly learned to use the real names of the reviewers, and I didn't want a real person being accused of describing a whiskey as "smallpox-infected". Thanks to WhiskyAdvocate for agreeing to let me use their reviews for this post!
https://johnpaton.net/posts/whiskey-reviews/
CC-MAIN-2019-30
refinedweb
2,732
59.33
NAME atanh, atanhf, atanhl - inverse hyperbolic tangent function SYNOPSIS #include <math.h> double atanh(double x); float atanhf(float x); long double atanhl(long double x); double atanh(double x); float atanhf(float x); long double atanhl(long double x); Link with -lm. BUGS For a pole error, errno is set to EDOM; POSIX.1 says it should be set to ERANGE. SEE ALSO acosh(3), asinh(3), catanh(3), cosh(3), sinh(3), tanh(3) COLOPHON This page is part of release 3.23 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
https://linux.fm4dd.com/en/man3/atanh.htm
CC-MAIN-2022-05
refinedweb
105
67.04
We have around 12 offices, each with a Domain Controller doing the following: DHCP 7 Replies Jun 6, 2017 at 5:01 UTC That is the way I would do it. Some companies do not because of the required expenses but having local DNS, DHCP and DFS space will make the end user experience much better than pointing users back to a home office. Jun 6, 2017 at 5:12 UTC It depends what kind of connections you have between offices and how likely that connection is to be broken. Jun 6, 2017 at 5:21 UTC With DFS involved I'd want to have a server at each site. Without DFS I'd just do a site-to-site VPN and keep everything at headquarters Jun 6, 2017 at 5:24 UTC A DC is required for DFS to be able to issue referrals to local resources because it uses AD sites. It prioritizes local site, then remote site. If all your data is in one spot and you don't use DFSR, then this isn't as important; but keep in mind that referrals expire after I think 30 minutes. If you lose contact with a remote DC and you don't have a local namespace server, the DFS namespace will be unavailable. (edit: erased a bunch because I misunderstood original post :p Jun 6, 2017 at 6:40 UTC It is not required but it is preferred. You can have multiple sites with their own subnets in Sites and Services with a handful of DC,DNS and File Servers (DFS) through out the main locations. This will though depend of your access to applications and how your environment is. Jun 6, 2017 at 6:44 UTC Not really required. We use site to site VPN for our 17 branches with no issues. Jun 6, 2017 at 8:56 UTC There is a little more to it. If the remote office does not really have the facilities to ensure proper server storage and management and a rock solid connection then I would never recommend putting equipment at that premises. What makes DFS nice is file services redundancy and not necessarily geographic distribution. I believe that a distributed DFS that has connection issues can cause far more harm than good to the DFS infrastructure. DFS needs stability or you will have a lot of problems with it and can even lose your namespaces.
https://community.spiceworks.com/topic/2002949-is-a-domain-controller-required-for-all-sites
CC-MAIN-2017-39
refinedweb
404
66.47
Code. Collaborate. Organize. No Limits. Try it Today. I’m impressed! And that’s not an easy thing to do. I’ve been playing with the CTP (February 2006) release for a while now, and believe it or not, I think we may actually have something here. I have not had the time to explore the other ‘Foundation’ members (of WinFX), but from what I’ve seen of the Windows Communication Foundation (WCF), it really is a very nice package. Yes, I know that each time there is a new technology released, there is always some neat features. But in the past, I always thought that we were paying more (in complexity) than what we were getting (in functionality), can you spell COM? This time, it feels like we are getting the whole thing for free. It’s possible that it may be just my naïve perspective, but hey it’s all about me anyway. I like to preface my articles with a disclaimer. The content in this article is simply my impressions and interpretation, and it should be read in that context. Even though the prose may at times seem like I know what I’m talking about, any similarity to actual facts may simply be coincidence. Enjoy the journey. The minute we stepped out of the DOS box, we realized that we were not alone any more, and we needed to learn to get along and communicate with the other inhabitants of the virtual world. DDE, OLE, COM, DCOM, and Remoting have been some of the attempts at providing mechanisms for two applications to be able to talk to each other. Remember how OLE and COM were described when first introduced? As the ‘foundation for all future products’. With hindsight, we can see that they were really just baby steps. Each one solved only a small part of the whole problem. So if they were baby steps, then WCF is certainly a giant leap. WCF provides a complete solution to the communication problem. And it does it with elegance and simplicity. Can you tell that I’m just a little enthusiastic? Whether your requirement is to communicate with another module on the same machine, or another module implemented in a different language, or you need to communicate with a module that’s on a machine on the other side of the world, or you want to communicate with a module running on a different platform, or even communicate with a module that’s not even running!, yup, you can do it under WCF. In my opinion, the beauty of WCF is that it is an ‘all-inclusive’ solution. It is the first one to provide a complete end-to-end solution covering the scope and depth of the problem. It is also the simplest from a programmer's point of view; you are always just making a method call. Yeah, I am sure that there is quite a bit of magic going on under the covers to support that simplicity of use. Now, I have not explored every nook and cranny, or option, or possibility, but from what I’ve seen, it’s an excellent solution. At least for me, and as I said before… Let’s see how much trouble I can get myself into here. I think that, in its most elemental form, a service is simply some functionality (code) running in some external process that is made available to other processes in a standard way. That’s pretty much the crux of it, except that in a standard way also encompasses platform and language neutrality. So, by the above definition, a service really has two parts. First is the code that must be running somewhere in order to provide some functionality to clients. And second, there must be some generic mechanism that can be used by any process, regardless of the platform, language, or locality, that makes the service accessible. That generic mechanism has turned out to be XML and SOAP. Of course, there are some additional facilities required in order for a client to be able to know (or discover) what functionality the service makes available. But I think of those as supporting technologies. There is also some glue that is required in order to tie the two parts of a service together. That glue is the code that will support the communication medium (transport) that is being used by the service and the client to talk to each other. Being lazy…I mean smart, we’ve come up with some generic glue also. This way, each service implementation does not have to re-invent the wheel. For Web Services, the generic glue is a Web Server. So, a Web Server provides a hosting environment for services that use HTTP as their transport mechanism. I would also like to suggest that Web Services are a special implementation of a service as defined above. Here are the things that we will be examining in the rest of this article. How do you define a service? How do you implement a service? How do you host a service? How do you access and use a service? Once we have the basics nailed down, we’ll look at some of the more complex communication options that WCF facilitates. Here is what WCF is for me: services based technology. It has, as its roots, Web Services, and thus XML and SOAP are its core technologies. WCF took the concept of Web Services and super-charged it. Much of the look and feel of WCF behaves like traditional Web Services. In fact, I like to think of WCF services as Web Services on steroids. You define WCF services much like Web Services. You can interrogate a known service for its methods with the same protocols that are available for Web Services. And you have very similar infrastructure requirements as are for Web Services. The main difference is that WCF has expanded the transports that are available to include TCP/IP, IPC (named pipes), and message queue-ing, in addition to HTTP. I think the focus of Web Services is to solve the interoperability problem, and the focus of WCF is to solve the much broader communication problem. And it has done this while still maintaining a uniform API as well providing more efficient mechanisms. The most important feature from a developer perspective is that you don’t have to be concerned with what or how you are communicating. The code is the same, no matter what the final transport mechanism or locality of service might be. Our WCF journey starts with how services define the functionality that they expose. Much of the infrastructure required to implement services under WCF is specified using declarative programming. That means, using attributes to specify functionality. The following shows how to declare an interface that will be exposed as a service: [ServiceContract] public interface ILocalTime { [OperationContract] string GetLocalTime(); } public class LocalTimeService : ILocalTime { ... } The ServiceContract attribute specifies that the interface defines the functionality of a service. OperationContract is used to decorate each method that is to be exposed as part of the service. That is all that is required to create a WCF service. Just slightly more is required to actually deploy the service, which we’ll cover later on. ServiceContract OperationContract By the way, you don’t have to use interfaces when implementing a service, just like you don’t have to use an interface to define a class. You do have to specify what you want exposed through a service, explicitly. You can define anything else you want or need as part of the interface, but only methods, and only methods that get decorated with [OperationContract], will be exposed by the service. [OperationContract] WCF also allows you to expose custom data types so that you are not restricted to simple data types of the CLR. These are simple structs with no methods associated with it. This can be a little confusing sometimes, because the same syntax is used for both services as well as for CLR definitions. Here’s an example of a DataContract that we will use. DataContract [DataContract] public class SensorTemp { [DataMember] public int probeID; [DataMember] public int temp; } DataContract specifies the data type that you are exposing and, DataMember specifies the members that are part of the data type. As is the case with ServiceContract, you have to explicitly declare which members are to be exposed to external clients, using DataMember. What that means is that you can include anything else that you may want (or need) as part of the class definition, but only the members decorated with DataMember will be visible to clients. DataMember As we saw above, one of the options available to specify functionality under WCF is to use attributes. Attributes are translated by the compiler to generate much of the infrastructure required by WCF in order for us to create and use services. The second way you can specify many of the options is through configuration files. This allows you to make changes without having to re-compile. Many of the WCF classes will automatically use default values from the config file. Here’s an example of an endpoint specified using config data (endpoints will be described shortly). First, the config file, then the code statement referencing the config file data: <endpoint name ="LocalTimeService" address="net.pipe://localhost/LocalTimeService" binding="netNamedPipeBinding" contract="ILocalTime" /> LocalTimeProxy proxy = new LocalTimeProxy("LocalTimeService"); Finally, the third way of coding functionality is, of course, programmatically. Many of the things that you can do via attributes or config files can also be done programmatically. Here is the previous endpoint, defined programmatically: Uri baseAddress = new Uri(ConfigurationManager.AppSettings["basePipeTimeService"]); baseAddress = new Uri(ConfigurationManager.AppSettings["basePipeTimeService"]); serviceHost.AddServiceEndpoint(typeof(ILocalTime), new NetNamedPipeBinding(), baseAddress); Endpoints are the ‘identity’ of a service. They define all the information that we need in order to establish and communicate successfully with a service. Endpoints are made up of three pieces of information: Address, Binding, and Contract. The address is obviously the location of the service, such as ‘net.pipe://localhost/LocalTimeService’. The binding specifies security options, encoding options, and transport options, which means a lot of options! Luckily, there is a collection of pre-defined bindings provided with WCF that we can use to make our life simpler. And finally, the contract is the actual interface that the service implements. So, a service is nothing more than a regular class that gets decorated with some special attributes. The attributes are then translated by the compiler to generate the special infrastructure code required to expose the class as a service to the world. In the following code, we first define an interface that has one method that returns the local time of where the service has been deployed. The LocalTimeService class then implements the interface, and thus exposes the functionality to the world, or at least to whomever is interested. LocalTimeService [ServiceContract] public interface ILocalTime { [OperationContract] string GetLocalTime(); } [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)] public class LocalTimeService : ILocalTime { public string GetLocalTime() { return DateTime.Now.ToShortTimeString(); } } That’s all that’s needed to create a WCF service. If you compile the above code into a DLL (a library), you will have created a service. Of course, there is a little more needed in order to have something that’s useable. We need two other pieces in order to complete the service. We need something that will be able to load the service DLL when a client requests the functionality of the service. And we need something that will be able to listen on a communication port, and look through everything that is being received to see if it matches what we are responsible for, our service contract. There are a number of ways to deploy our service. First, if we implement our service to support HTTP, then we could deploy our service just like a regular Web Service, using IIS. If we want to support some of the other transports, then we could deploy the service using Windows Activation Service (WAS), which is an enhancement available in IIS 7.0. If either of these is not suitable or we want more control over the service, then the other solution is to build our own hosting environment by using ServiceHost. ServiceHost is a class available in WCF to host services, almost like a mini IIS, just for our service. We can implement ServiceHost in any housing available under Windows, in a console app, a Windows executable, or a Windows service (formerly NT service). ServiceHost ServiceHost will listen on whatever channel we specify for our service, using whatever protocol we specify, and call our service whenever a client request for our specific service comes in. That’s a lot of bang for just a couple of lines of code. All that we need to do is tell ServiceHost the endpoint that it is responsible for and the class that it should instantiate when a matching message is received. Here’s the code that’s required to host the LocalTimeService in a console app: class TimeService { static void Main(string[] args) { Uri baseAddress = new Uri( ConfigurationManager.AppSettings["basePipeTimeService"]); ServiceHost serviceHost = new ServiceHost(typeof(LocalTimeService), baseAddress); serviceHost.Open(); Console.WriteLine("Service is running....press any key to terminate."); Console.ReadKey(); serviceHost.Close(); } } You can now compile the service. However, if you try to run it, you’ll get an error message indicating that you haven’t provided ServiceHost with an endpoint. As we saw above, you can specify endpoints either programmatically, or by using the configuration file. The nice thing about using configuration is that you can change it at any time and you don’t have to recompile. As we’ll see later, you can specify multiple endpoints for a service, depending on the clients that you want to support. And if at some point later, you decide to not support a specific transport, you just have to edit the configuration file. Here’s the config file that we’ll need for the LocalTimeService: <configuration> <appSettings> <add key="basePipeTimeService" value="net.pipe://localhost/LocalTimeService" /> </appSettings> <system.serviceModel> <services> <service name="LocalTimeService"> <endpoint address="" binding="netNamedPipeBinding" contract="ILocalTime" /> </service> </services> </system.serviceModel> </configuration> Let’s examine the entries in the config file. You should note that there could be as many entries as needed. For example, there could be several endpoints that the service supports (different transports). There is also only one service being specified in this example, but there could be several services provided in the same housing. You can see that the endpoint has three properties: address, binding, and contract. The binding indicated is referencing the standard netNamedPipeBinding provided in WCF. There are various default binding classes provided for each transport. You can see the options for each in the docs. netNamedPipeBinding I will say here that you too will encounter the “zero application (non-infrastructure) endpoints” exception at some point. There won’t be too many clues as to what exactly is not matching up, so you’ll have to scrutinize the text. Make sure that you have the correct namespaces specified. Now you can execute the application, and the service will be available to any client that knows how to communicate with it. Just by saying we want to go to New York and we are going to go by car, does not get us there. We need a car to actually get us there. Having a completely defined endpoint is not enough. We need something (code) that will actually take the endpoint as a parameter and allow us to do what we want to do, call a method. And that something is a proxy. As was the case in the past with COM, proxies take care of all the low level plumbing (serializing and packaging our parameters) so that we just need to make the call. We don’t care how they are forced through the ‘spigot’ or how they are pulled out on the other side. And as we’ve also had in the past, there is a utility that will create the proxies for us. However, this is one area of WCF that needs some improvement. And I’m guessing this functionality will become incorporated into Visual Studio in future releases. At least, I would hope so. To create a proxy, you need to use the command line utility, svcutil, which has a number of switches that are not all or well documented. But hey, I’m not complaining, it’s a small inconvenience for a whole lot of major improvements. And it’s still only Beta. So, you run svcutil against your service DLL and bam! You got your proxy class. There are other options, like if the service has a MEX endpoint, you can direct it to the service, and it will extract the service information dynamically from the service. This is essentially the same functionality provided through Studio when creating a Web Service, and we use the ‘Add Web Reference’ dialog. What I really want is for Visual Studio to automatically generate the proxies since it has all the information in the source files to begin with! But as I said, I’m not complaining. Currently then, creating the proxy is a two step process. First, you run svcutil against your service DLL, which will create the schema (XSD) and WSDL files. Then, you invoke svcutil again, but this time, you run it against the output it just created (*.xsd, *.wsdl). The svcutil will generate ‘output.cs’ as the default file name, unless you specify otherwise, I normally just rename it. There are also options to just generate DataContracts or just ServiceContracts, and also an option to generate a client config file. Here’s the proxy file for the LocalTimeService, with some portions edited for readability. There’s not much there, since all the magic occurs in ClientBase. DataContracts ServiceContracts ClientBase [ServiceContract] public interface ILocalTime { [OperationContract] string GetLocalTime(); } public interface ILocalTimeChannel : ILocalTime, System.ServiceModel.IClientChannel { } public partial class LocalTimeProxy : System.ServiceModel.ClientBase<ILocalTime>, ILocalTime { public LocalTimeProxy() { } public string GetLocalTime() { return base.InnerProxy.GetLocalTime(); } } So now, the next logical step is to build a client that knows how to consume the service that is being provided. The client code only needs two things: the proxy that allows it to communicate with the service, and the endpoint to the service. Here’s one version of a client that consumes the LocalTimeService: class Client { public bool keepClocking = true; LocalTimeProxy proxy = null; public Client() { proxy = new LocalTimeProxy(); } public void ClockingThread() { while (keepClocking) { Console.WriteLine(proxy.GetLocalTime()); Thread.Sleep(1000); } proxy.Close(); } static void Main(string[] args) { Client client = new Client(); //Create a separate thread to request the time once a second Thread thread = new Thread(new ThreadStart(client.ClockingThread)); thread.Start(); Console.WriteLine("Service is running....press" + " any key to terminate."); Console.ReadKey(); client.keepClocking = false; //wait 2 seconds Thread.Sleep(2000); } } Not much to this client. All that it's doing is making a call to the service method GetLocalTime(), once a second. As you can see, the client code has no indication as to what or where the other end of the method call is. Nor what mechanism is actually being used to make the connection. It is just a simple class method call! As we look at other examples, you'll keep seeing the simplicity of coding that is provided under WCF. And here is the config file that specifies the endpoint to the service which is required by the client. GetLocalTime() <configuration> <system.serviceModel> <client> <endpoint name ="LocalTimeService" address="net.pipe://localhost/LocalTimeService" binding="netNamedPipeBinding" contract="ILocalTime" /> </client> </system.serviceModel> </configuration> Compile and run the client. Start several instances. Just make sure that the service is started before you start the clients, otherwise nobody will be listening. That’s it for the basics in getting services listening and consuming services. In Part 2, we'll build some examples that will demonstrate WCF's support for the various communication patterns. The download includes all of the source code for the sample applications described in the
http://www.codeproject.com/Articles/14762/Communication-options-with-WCF-Part?msg=2705194
CC-MAIN-2014-15
refinedweb
3,347
55.24
OpenGL Discussion and Help Forums > OpenGL Developers Forum > OpenGL coding: beginners > Making FPS PDA View Full Version : Making FPS genetic 12-26-2001, 12:34 AM Could anybody gimme and example, how to calculate FPS in VC ++ 6 ? Thanks. Bob 12-26-2001, 01:37 AM Use the function timeGetTime (returns time in milliseconds), and calculate the time taken to render the frame. FPS = 1/time. genetic 12-26-2001, 05:33 AM Originally posted by Bob: Use the function timeGetTime (returns time in milliseconds), and calculate the time taken to render the frame. FPS = 1/time. Thanks, i`ll try. Leyder Dylan 12-26-2001, 11:56 AM Hi, On my site, I've a little example for displaying FPS. ARES 12-29-2001, 10:01 AM Here is my function. You have to call it once every frame and then it will return the number of frames per second. Don't call it more then one time a frame because them the fps won't be correct. #include "time.h" int FPS() { static int frames = 0; static int fcount = 0; static clock_t next = clock() + 1000; fcount++; if ( clock(); >= next ) { next = clock(); + 1000; frames = fcount; fcount = 0; } return frames; } genetic 12-29-2001, 11:10 AM Thanks ppl. I`ve just done it myselft trough WM_TIMER. DFrey 12-29-2001, 01:09 PM Take note that basing a timer on WM_TIMER produces a timer with somewhat variable accuracy. Why? Though the message is sent after the given delay, you don't know how much time has passed since. If you want a more accurate timer, base it on the rtc or the cpu timestamp counter. pATChes11 12-29-2001, 01:45 PM Actually, for high-resolution timing, you should use the high-performance counter. I can have triple digit framerates above 500, and still get accuracy beyond that of a float Lemme dig up my code... I forgot how to do it :eek: :P Ok, create these as global variables: LARGE_INTEGER lastQueryValue; LARGE_INTEGER timerFrequency; LARGE_INTEGER tmpValue; Then, in your init code: if ( !QueryPerformanceFrequency(&timerFrequency) ) return 0; //use a different timer or abort if you get to here... fortunately you shouldn't get to here on very many machines, but it's a good idea to have a substitute available QueryPerformanceCounter( &lastQueryValue ); And in your main loop: QueryPerformanceCounter( &tmpValue ); lastFrameTime = (float)( tmpValue.QuadPart - lastQueryValue.QuadPart ) / (float)timerFrequency.QuadPart; lastQueryValue = tmpValue; Presto, instant high-res counter. And fyi, quad ints have a maximum value of 2^64, so you won't have any problem timing pretty much anything with this code No credit required nor desired, but you can credit me as pATCheS (please get it right :P) if you'd like. [This message has been edited by pATChes11 (edited 12-29-2001).] DFrey 12-29-2001, 06:05 PM Just so that genetic doesn't get confused, what patches11 (and Microsoft) call the high performance counter is in fact based on the cpu timestamp counter. [This message has been edited by DFrey (edited 12-29-2001).] genetic 12-30-2001, 02:17 PM Thx ppl. i`ll remake it as you wish :-) Powered by vBulletin® Version 4.2.3 Copyright © 2017 vBulletin Solutions, Inc. All rights reserved.
https://www.opengl.org/discussion_boards/archive/index.php/t-143166.html
CC-MAIN-2017-17
refinedweb
537
61.97
By Alvin Alexander. Last updated: May 18, 2020 As a brief note, here are a few examples of how to implement left-trim and right-trim on strings in Scala: def ltrim(s: String) = s.replaceAll("^\\s+", "") def rtrim(s: String) = s.replaceAll("\\s+$", "") If I ever write a Scala StringUtils class, I’ll be sure to include those functions in that class. Remove all blank strings from Scala sequences In a related “string utilities” note, here’s a method that removes all blank strings from a Seq, List, or Array of strings: def removeBlankStrings(strings: Seq[String]): Seq[String] = { for { s <- strings if !s.trim.equals("") } yield s } I wrote that method when I was half-awake, then realized that I was writing a filter method. An even better approach is to use the filter or filterNot method: def removeBlankStrings(strings: Seq[String]) = strings.filterNot(_.trim.equals("")) (When I say “better”, I mean that the code does the same thing, and it’s more concise and easier to read.) How to trim all strings in a Scala sequence You can trim all the strings in a Scala sequence like this: def trim(strings: Seq[String]) = strings.map(_.trim) Finally, you can left trim and right trim all strings in a sequence like this: def removeLeadingSpaces(strings: Seq[String]) = strings.map(ltrim(_)) def removeTrailingSpaces(strings: Seq[String]) = strings.map(rtrim(_))
https://alvinalexander.com/scala/scala-trim-left-right-string-seq-strings/
CC-MAIN-2021-17
refinedweb
234
78.79
Hello I am working on a text-based rpg, using C++ and I have some trouble with std::vectors. I've created a vector in my namespace called "Initialize" , in its header. In "Initialize.cpp" I add objects of the type "Items" to the vector using push_back. I want to know how I can access the objects in my vector I've created in my namespace "Initialize" from my Item-class. Example: GetItemName(Diablo::Initialize::existingItems[1]) //GetItemName returns the name of the object it receives. // This doesn't work since I have to include "Initialize.h" // which gives linker-errors. The problem is that if I include "Initialize.h" to be able to access the vector defined in "Initialize.h", i'll get linker-errors since the the vector will be defined both in my "Initialize.h" and in my "Items.cpp". How can I declare my vector so that I can easily access from other classes etc.?
https://www.gamedev.net/topic/648916-using-objects-in-stdvectors/
CC-MAIN-2017-22
refinedweb
159
68.06
Subject: Re: [boost] [spirit2] How to get encoding-specific parsers by character type From: Andrey Semashev (andrey.semashev_at_[hidden]) Date: 2010-06-11 17:13:58 On 06/11/2010 12:35 PM, Hartmut Kaiser wrote: >> I was wondering if it was possible to get encoding-specific parsers by >> character type? Something like that: >> >> template< typename CharT> >> struct encoding_specific >> { >> ... >> }; >> >> typedef encoding_specific< char> narrow; >> narrow::char_; // equivalent to spirit::standard::char_ >> >> typedef encoding_specific< wchar_t> wide; >> wide::char_; // equivalent to spirit::standard_wide::char_ >> >> This would help a lot in generic programming, when the character type is >> not known. Is there a tool like that already? If not, could it be added? > > There isn't anything like that. The reason is that usually you want to > specify a character set in addition to the character type to use. So the > solution above is nothing I would like to see in Spirit, even if it might be > sufficient for your particular case. OTOH, if it is sufficient for you why > don't you just add it inside your namespace and be done? Well, I did so. It just so happens that I find myself replicating that code in different places. I'm not very familiar with the standard specs on the new character types but isn't there a strong relationship between the character type and encoding? Can't we be sure that e.g. char16_t is UTF-16, char is the standard narrow and wchar_t is the standard wide encoding? Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2010/06/167959.php
CC-MAIN-2019-47
refinedweb
269
67.15
LONDON (ICIS)--Further uncertainty has been thrown on the Ukrainian nitrogen industry following the arrest of Ukrainian oligarch Dmitry Firtash in ?xml:namespace> Austrian police released a statement regarding the arrest, noting that Firtash had been investigated by the US Federal Bureau of Investigation (FBI) for years, although the FBI has made no comment about the arrest at this time. The industry was already facing uncertainty following recent civil unrest in Ukraine and news Gazprom was intending to remove a discount on natural gas prices from 1 April which would significantly raise production costs. Firtash’s Group DF is the owner of the Ostchem Holding group for Ukrainian mineral fertilizer producers: Cherkassy Azot, Stirol Gorlovka, Severodonetsk Azot and Rivne Azot, which together represent the bulk of Ukrainian nitrogen fertilizer production. Ammonia, urea and ammonium nitrate (AN) from these plants are marketed by NF trading. “This adds uncertainty and doubts to the situation which was already shaky… now people are hesitant to do any business with NF Trading,” one source said. Sources said they did not know whether this was a real threat to business out of Ukraine, but the news was enough to make people more hesitant, worrying about what might happen if the company’s accounts were frozen or other measures. NF Trading was reported offering 40,000 tonnes of urea for March shipment this week, but found no buying interest and the tonnes are expected to be carried over to April. However, urea demand is slow at present and a lack of buying interest could be a factor of this slow demand rather than unwillingness to buy from NF Trading, sources added. Some said it was too early to say whether it would have any impact on prices, but that if buyers were unwilling to make purchases from NF Trading then this would severely curtail availability of ammonia, urea and AN from Ukraine and could push up prices. Further forwards, sources said the Ostchem Holding, which was formed in 2010/2011, could be broken up with each production unit becoming an individual entity again. There are also questions over the price of future gas supplies, sources noted. Currently Ostchem has a separate gas agreement for Russian supplies given Firtash’s strong links with the Russian gas industry, and does not buy at the official price set between Gazprom and However, there are now reports that with Firtash’s arrest this agreement might be voided which would also raise gas costs for the plants currently under the Ostchem umbrella. Group DF was not available for comment on Thursday. Richard Ewing, Deepika Thapliyal and Mark Milam also contributed to this article
http://www.icis.com/resources/news/2014/03/13/9762723/ukraine-nitrogen-market-faces-more-uncertainty-after-firtash-arrest/
CC-MAIN-2016-22
refinedweb
441
50.6
Calendar AppBar Flutter package for custom AppBar with calendar view. The package is currently optimized for mobile devices, therefore used only for mobile devices is prefered. It also works for larger screens, although it does not follow the rules of good UI. It is planned to be optimized shortly. Features - Define your custom color scheme - Enable or disable full-screen calendar view - Provide a list of dates that will be marked on the calendar view - Manipulate with a range of displayed dates - Set the language of your choice Installation and Basic Usage Add to pubspec.yaml: dependencies: calendar_appbar: ^0.0.6 Then import it to your project: import 'package:calendar_appbar/calendar_appbar.dart'; Finaly add CalendarAppBar in Scaffold: @override Widget build(BuildContext context) { return Scaffold( appBar: CalendarAppBar( onDateChanged: (value) => print(value), firstDate: DateTime.now().subtract(Duration(days: 140)), lastDate: DateTime.now(), ), ); } There are three required attributes of class CalendarAppBar. Attribute onDateChange is a function, which defines what happens when the user selects a date. firstDate attribute defines the first date, which will be available for selection. The date saved in lastDate attribute is the last date available for selection. At first initialization of the object, the date provided as lastDate will be selected, but will not be returned with onDateChange function. More Custom Usage Hide Full Screen Calendar View This package enables the usage of a full-screen calendar view. It is displayed when the user presses on month and year text in the top right corner of the AppBar. It comes in two different versions. If the first and last date are part of the same month the full-screen calendar will be displayed as seen on the third image at the top of this file. In the other way, it will be displayed as seen in the second image. It is possible to disable the full-screen view (which is enabled by default) by adding the following code: fullCalendar: false, Define Your Custom Color Scheme You can define your custom color scheme by defining white, black and accent color. Those three colors are set to Colors.white, Colors.black87 and Color(0xFF0039D9) by default. It is possible to customize those three colors as shown below. white: Colors.white, black: Colors.black, accent: Colors.blue, The design is currently optimized for light mode and therefore also suggested to be used in that way to achieve better UX. Dark mode will be added soon. Custom Padding The horizontal padding can be customized (default it is set to 25px) by adding the following code: padding: 10.0, Mark Dates with Events It is possible to provide the list of dates of type List<DateTime>, which will be marked on calendar view with a dot. events: List.generate( 10, (index) => DateTime.now().subtract(Duration(days: index * 2))), The code above will generate the list of 10 dates for every second day from today backwards. Back Button The last attribute of CalendarAppbar is backButton, which is default set to true. Customize that feature by setting this attribute to false. backButton: false, Internationalization The newest feature the CalendarAppbar package is offering is local support for different languages. Use atribute locale to customize the language used for the plugin by adding the language code as shown below. If this atribute is not set, English language will be used. locale: 'en', Thank you Special thanks goes to all contributors to this package. Make sure to check them out. Make sure to check out example project. If you find this package useful, star my GitHub repository. Getting Started with Flutter For help getting started with Flutter, view Flutter official documentation online documentation, which offers tutorials, samples, guidance on mobile development, and a full API reference.
https://pub.dev/documentation/calendar_appbar/latest/
CC-MAIN-2022-33
refinedweb
617
56.96
Answered by: NtQuerySystemInformation Hi, Does anyone know how to get a list of open system handles from C#? I've been looking into NtQuerySystemInformation with SYSTEM_INFORMATION_CLASS.SystemHandleList, but it's very convoluted and I can't get it to work. I've found no C# examples using this method either. The reason for getting them is to find which files are being used by different processes. Is there another way to do this? Or can anyone point me towards a resource explaining how to use NtQuerySystemInformation properly from C#? Thanks, Jonas Question Answers All replies - what about using something like this: Code Snippet using System.Diagnostics; Process[] processlist = Process.GetProcesses(); foreach(Process theprocess in processlist){ Console.WriteLine("Process: {0} ID: {1}", theprocess.ProcessName, theprocess.Id); } theprocess .StartTime (Shows the time the process started) theprocess .TotalProcessorTime (Shows the amount of CPU time the process has taken) theprocess .Threads ( gives access to the collection of threads in the process) I'm not sure if we really have support for NtQuerySystemInformation in C#. Thanks, I've been through that one, and others (including), but I haven't been able to translate it to working C# code. Might be something with the p/invoke signatures, but there's no reference for the relevant structs or methods on pinvoke.net. I guess I could compile the C++ project and pinvoke it as a dll, but I would like to have a C#-only solution to this (which would also help me use other NtQuerySystemInformation functionality). Any pointers to a C# solution? Thanks I was trying to achieve the same result, so that I could determine files open in the Media Center ehShell process. After much searching I found some C# managed code on a French web site. I then found some more code on Soheil Rashid's site to convert device paths to regular filenames. I have built a sample application to test this all out and it works well on XP and Vista. I have posted a link to the C# source and project on my blog at. You should be able to base your code on this.
https://social.msdn.microsoft.com/Forums/vstudio/en-US/ac990847-6d04-4ae0-aafe-8355bbc3f769/ntquerysysteminformation?forum=csharpgeneral
CC-MAIN-2016-30
refinedweb
354
65.83
Details Description It would be good to include an up-to-date jetty version for the example. Activity - All - Work Log - History - Activity - Transitions We could probably ditch the JMX stuff as we don't currently use it... or does Jetty export some cool stuff via JMX that we might want to point people at for debugging? If we go with JSP 2.1, we should just remember not to use any 2.1 specific features yet... most containers aren't quite there yet. The latest stable version of Tomcat 5.5.20 is only on Servlet 2.4/JSP 2.0 BTW, there's a Jetty 6.1.1 out... Ideally we could replace existing JSP with servlets or SolrRequestHandlers. I don't see anything that could not be handled using a SolrRequestHandler and default wt=xslt. For an embedded solr, it would be nice not to need the JSP dependencies. This would have the added bennefit of returning XML/JSON/ruby/etc for the admin interface. Jetty 6.1.1? where is it? Its not on: When you go to the download link: The disad to getting rid of JSPs is it would make it much harder for someone to easily improve the look or functionality of the admin pages. I'm not suggesting we drop JSP now... just that we definitely should not be using 2.1 specific stuff and where possible xml+xslt with a RequestHandler ( SOLR-58) in the future. I just tried this with 6.1.1 and it works fine (with the jsp-2.0 or jsp-2.1) What is the best way for me to post this patch? Most of the work is adding and deleting jar files. It has two xml config files etc/jetty.xml and etc/webdefault.xml I think we want to wait with the newest Jetty. I'm on the jetty ML, and the beast is not yet fully ironed out., not yet stable enough. I'd let it simmer a little longer while Greg, Jan and other Jetty folks get the deamons out. here is a zip with the example directory as we would (maybe) want it. This uses jetty-6.1.3 Any opinions on if we should put in Jetty 6.1.3 now, or wait until after Solr 1.2 is released? It has been working fine for me, but I have not been using it in production or under heavy load. Otis? If 1.2 is more then a week away, I think we should include it soon and take it out if anything looks fishy... OK, let's quickly go ahead then. There have been some JSP issues with our current Jetty version anyway. I'd like to have all the core changes done in the next few days so we can get a release out by the end of this month (1 week away). applied to rev541050 If anything goes wrong, we need to revert /example/ to rev 541049 and check NOTICE.txt and CHANGES.txt I'm using 6.1.3 in a pre-production project. I do see Hadoop guys hit a problem - see JETTY-345, which mentions 6.1.4rc1 Yeah, I saw that hadoop issue too, which is why I was planning on some quick indexing & querying benchmarks. Doing some quick connection handling tests with the python client: from solr import * c = SolrConnection(host='localhost:8080', persistent=False) for i in xrange(10000): c.search(q='id:1234' Java 1.5 -server on WindowsXP Jetty 5.1.??, persistent=20.9s, non-persistent=44s, sometimes "address already in use" exception Jetty 6.1.3, persistent=20.1s, non-persistent=114s Tomcat 6.0.13, persistent=20.2s, non-persistent=29s, sometimes "address already in use" exception Could be a python client issue, (SO_REUSE_ADDR for python http-lib?), but it does seem like Jetty 6.1.3 is slower at handling new connections? Anyone have benchmarks from a different client? Switching to the non-NIO connector in Jetty 6.1.3, I get 18.2s for persistent connections, 35s for non-persistent (again WinXP, Java5 -server) The 10,000 single-client connection test on a dual Opteron, RHEL4 64 bit, Java5 64 bit -server Jetty 6.1.3, NIO connector, persistent=414s, non-persistent=11.3s Jetty 6.1.3, non-NIO connector, persistent=408s, non-persistent=10.2s Situation reversed from WinXP! The persistent connections are horribly slow. On that Opteron Linux box, almost identical numbers for Resin 3.0 pro, so it's not Jetty specific. Hmmm, some bad interaction with persistent connections and POST (which solr.py uses)? I switched the method to use GET, and it finished 10k connections in 6.3sec Many browsers (incorrectly) send a CR+LF after the post body. The python httplib does not do this... I wonder if that's the issue (the server waiting for the possible CR+LF so it can throw it away?) The delay seems to be about 40ms I think the issue with persistent connections on linux is due to Nagle's algorithm. The python client sends the HTTP headers and body separately, thus triggering it. The following little program below writes it all at once and gets very good performance: --------- mysock.py------ import socket headers='''POST /solr/select HTTP/1.1 Host: localhost:8983 Accept-Encoding: identity Content-Length: 11 Content-Type: application/x-www-form-urlencoded; charset=utf-8 ''' body='q=id%3A1234' msg = headers + body s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect(("localhost", 8983)) for i in xrange(10000): s.send(msg) rsp = s.recv(8192) #pray we get the complete response in one go will be included in solr1.2 I tried running the example with jetty 6.1. In less then 5 mins, I had everything working smoothly. The only issue I see is the JSP jar files are huge (~4MB) The current example folder includes settings and jars for JMX, is that necessary?
https://issues.apache.org/jira/browse/SOLR-128
CC-MAIN-2015-22
refinedweb
998
69.28
v1.1 March.24.2015 (Fixed renaming identical name issue when in heiarchy) Designed for a user experience where all operations can be done in the same location as oppose to multiple input boxes. Major features includes renaming a chosen section of the text, and options to attach numbering to various locations. (Maya custom shelf command): import sushi_rename as su_rename reload(su_rename) su_rename.sushi_rename() More details @ ------------ This script is shipped with sushi_bento: Please use the Feature Requests to give me ideas. Please use the Support Forum if you have any questions or problems. Please rate and review in the Review section.
https://ec2-34-231-130-161.compute-1.amazonaws.com/maya/script/script-sushi_rename-for-maya
CC-MAIN-2022-40
refinedweb
101
55.24
Attachment 'componentizing-to-heaven.html'Download Componentizing a Stairway to Heaven Why Componentize Imagine yourself writing a "generic notifier". You have a "notifier object" with the notification configuration. Various specific notifiers would like to keep persistent information (the IRC notifier likes to keep the IRC connection, the mailer notifier keeps track of "unsent notifications" so it can aggregate notifications close in time in one mail, etc.) One way to do it, of course, is to keep .mailer, .ircer, etc. attributes on the notifier. Then you need to co-ordinate, and have built a dependency from the generic infrastructure to the specific implementations. That is bad? So, of course, you can let the mail infrastructure add the .mailer attribute. In a sense, that shifts the co-ordination problem from a centralized one to a decentralized one, with no means to co-ordinate. The "hope and prayer" strategy (each person has to make sure their attributes don't step on their neighbour's) has the problems that a) it usually works and b) when it breaks, it breaks in the most inscrutable ways. How Components Guidelines for doing that: - Always access your object through an interface: ISomethingDoer(o).doSomething(). - Think about an initialization scheme. The easiest way is to test-and-add: def mail(self, o): p = IMailer(o) if p is None: # This is where we decide our mail strategy: p = DelayedMailer(o) o.addComponent(p, ignoreClass=True) p.sendmail() This scales up a way to hang ad-hoc properties on o (so that, for example, if the same o is passed, the same value is used. Wait a minute, what about the stairway to heaven? Oh, right, then. class IGlitter(Interface): pass class Gold(object): implements(IGlitter) So, you see, all that glitters is gold. So is that the best part? No! The really neat thing is that now modules who want to co-operate can do that easily. Ideally, it means reaching true "component nirvana". But MosheZ! What is "component nirvana" please? Am I glad you asked! Component Nirvana is the ideal of "zero infratructure": whether something is "infrastructure" or "application" is not encoded in the Python, but is only a human attribute: the mailer can use the IRC if it wants the dependency, or we can write an ISetttings interface all the modules use: or all the modules that are interested in those settings, in any case. This is the so-called "Raviolli" pattern (or, done badly, the "Risotto" pattern) as opposed to the classic "Lasagna" pattern (layered infrastructure, from generic to specific) or "Spaghetti" pattern (every object accesses every object it feels like, with no internal division)..
https://wiki.python.org/moin/EuroPy2006LightningTalks?action=AttachFile&do=view&target=componentizing-to-heaven.html
CC-MAIN-2020-34
refinedweb
440
54.63
#include <CGAL/Concurrent_compact_container.h> An object of the class Concurrent_compact_container is a container of objects of type T, which allows to call insert and erase operations concurrently. Other operations are not concurrency-safe. For example, one should not parse the container while others are modifying it. It Concurrent::Concurrent ccc2 is copied. The allocator is copied. The iterator order is preserved. all items in ccc are deleted, and the memory is deallocated. After this call, ccc is in the same state as if just default constructed. constructs an object of type T with the constructor that takes t1 as argument, inserts it in ccc, and returns the iterator pointing to it. Overloads of this member function are defined that take additional arguments, up to 9. returns the maximum possible size of the container ccc. This is the allocator's max_size value adds the items of ccc2 to the end of ccc and ccc2 becomes empty. The time complexity is O( ccc. capacity()- ccc. size()). ccc2must not be the same as ccc, and the allocators of cccand ccc2must be compatible: ccc.get_allocator() == ccc2.get_allocator(). assignment. Each item in ccc2 is copied. The allocator is copied. Each item in ccc is deleted. The iterator order is preserved. returns the number of items in ccc. Note: do not call this function while others are inserting/erasing elements swaps the contents of ccc and ccc2 in constant time complexity. No exception is thrown.
https://doc.cgal.org/5.1.3/STL_Extension/classCGAL_1_1Concurrent__compact__container.html
CC-MAIN-2022-27
refinedweb
239
51.85
read a message from another process #include <sys/kernel.h> #include <sys/sendmx.h> unsigned Readmsgmx( pid_t pid, unsigned offset, parts, struct _mxfer_entry *msgmx ); The kernel function Readmsgmx() reads data into an array of buffers pointed to by msgmx from the process identified by pid. The number of elements in the array, parts, must not exceed _QNX_MXTAB_LEN (defined in <limits.h>). The offset allows you to read data from the sender's send message starting at any point. The data transfer occurs immediately, and your task doesn't block. The state of the process pid isn't changed. The process pid must have sent a message that was received and not yet replied to. It must be in the REPLY BLOCKED state. If you attempt to read past the end of the sender's message then Readmsgmx() returns fewer bytes than was specified in the sum of the buffer lengths in the _mxfer_entry list provided. This function is often used in one of three situations: The number of bytes read. On error, -1 is returned and errno is set. See Sendmx(). QNX Readmsgmx() is a macro. Creceive(), Creceivemx(), errno, Receive(), Receivemx(), Reply(), Replymx(), Readmsg(), Send(), Sendfd(), Sendfdmx(), Sendmx(), Writemsg(), Writemsgmx(), Trigger()
https://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/qnx/readmsgmx.html
CC-MAIN-2022-33
refinedweb
201
66.03
Is Analog the Fix For Cyber Terrorism? 245. obviously BSG was right (Score:5, Insightful) sure, no problem (Score:4, Informative) >Or maybe you could isolate control systems from the Internet said the person volunteering to get up at 3 am to go to the office to reset the a/c system. Re: (Score:2, Funny) Don't worry, Bill Gates says a robot will take that guy's job soon enough. Re:sure, no problem (Score:5, Insightful) Sounds to me like you need a better A/C system. Or you need to not consider an HVAC system to be so critical that it can't be on the network. Or, perhaps you need to design the HVAC system to take only the simplest of input from Internet-connected machines through interfaces like RS-422, and to otherwise use its not-connected, internal network for actual major connectivity. And design it to fail-safe, where it doesn't shut off and leave the data center roasting if there's an erroneous input. And anything that is monitored three-shifts should not be Internet-connected if it's considered critical. After all, if it's monitored three shifts then it shouldn't have to notify anyone offsite. Re:sure, no problem (Score:5, Informative) There's a lot of misconceptions on slashdot about how these "critical infrastructure" plants actually run. I've spent a lot of time working in chemical plants, and these plants are heavily instrumented, with all parameters recorded. These are accessible in real time to the plant engineers, who typically don't sit in the control room, and often aren't in the same state (there's a very limited pool of people available who are "experts" at some of these processes, and when a serious problem occurs companies want the best person to look at the data ASAP). The guys who sit in the control room are not engineers. They're plant operators, and their job is to keep the plant running as smoothly as possible, and escalate the issue to an engineer if there's a non-standard problem. Most plants these days are so heavily automated that for normal, stable operation only two operators are required on site per say $100 million of plant (as a guesstimate - more during the day when scheduled maintenance is occurring). The engineers at these sites are actually classed as management. That's because they have ultimate responsibility for the plant when problems happen, although they don't control the day to day operation of the site. Most of an engineer's day on a chemical plant should be spent looking at whether the plant is configured optimally, and trying to troubleshoot longer term problems which require a more theoretical viewpoint. However, they do have to get out of bed at three in the morning if something's gone wrong. They also have to manage the operators, and have a promotion path to "real" management - refinery managers (for example) are usually engineers. However, what the article totally missed is that these sites already have two layers of control system - the Distributed Control System (DCS), and the Safety Instrumented System [wikipedia.org] (SIS). The wikipedia contains a lot more detail, but essentially these SIS's are hard wired systems that aren't programmable at all, so they are intrinsically resistant to an internet or software based attack. However, they're very expensive (every trip needs to be built as a dedicated circuit), so these systems are only used to ensure that the plant fails in a safe manner, not continued operation. Priority is given to safety of people in the vicinity over integrity of the plant equipment - these systems wouldn't typically be used a stop a pump or centrifuge (for example) from running too fast, unless that could cause some consequential (human) damage. Finally, an analog system would be a big step backwards from a safety viewpoint because it wouldn't allow the plants to automatically shut down safely when a problem occurs. Plant shutdowns are typically a multiple step process, and in a refinery (for example), large quantities of high temperature, high pressure flammable gases need to be disposed of, which would simply not be possible to safely "program" in an analog environment. Before digital systems came along, plant trips were "all hands on deck" incidents, with operators frantically adjusting adjusting setpoints on dials to bring the plants down. Of course, the risk of operator error was high, so automated shutdowns were a big step forwards in plant safety. @ CGordy - Re:sure, no problem (Score:5, Interesting) However, the plant operators are engineers (this is the UK) and the senior ones and fast-track juniors have degrees (though a degree does not mean so much these days), even though the Operating Department is separate from the Engineering Department. Personnel do move from one to the other, and it is expected that even senior management will have had at least a few months experience "on the desk" (ie in the Control room). There is no way whatsoever, no-how, any-which-way-but-loose (how else can I say it?) that these sysems would have any connection to the outside world or even within the plant itself to other than to the essential control panels. There is however a problem with modern "smart" devices such as thermocouple local amplifiers/transmitters with microchips in them. This is that we don't always know how they are programmed. I am not talking about malware, but simply the programmer making errors (or well-meaning assumptions) such as buffer overflow after a certain future date. For this reason we prefer the old-fashioned analog versions of devices at this level. Re:sure, no problem (Score:4, Interesting) Or, perhaps you need to design the HVAC system to take only the simplest of input from Internet-connected machines through interfaces like RS-422, and to otherwise use its not-connected, internal network for actual major connectivity. I used to do software for fire alarm systems and heard a story about this. A shopping centre wanted to have a remote monitoring and reset system. All it could do was read the indoor temperature or reset the system. RS-485 link to a dedicated PC, firewalled with just the remote management service exposed to the LAN. Access was by using a VPN connection to the LAN. One day they noticed that the system was stuck in some kind of reset loop. Seems someone found a way in and caused the machine it was connected to to keep sending reset commands. It must have happened some time in the night, and by the time they figured out what was going on the next day a couple of the motorized vents and one fan had failed due to the motors overheating. Every time the reset command was sent they did a self test where they exercised their motors. The suspicion was that this was a distraction to cover up whatever else they were doing inside the network. Not being close to it I never found out the fully story, but it just shows that even a simple reset command can cause significant damage if abused. Re:sure, no problem (Score:4, Insightful) Re: (Score:3) Just don't put your HVAC controls on the same network as your credit card payment devices... Re: (Score:2, Insightful) Networked does not imply internet connected. In the same way, if you are using electricity, it does not mean you need to be connected to the electric grid. There is no reason to going analog IF people are not stupid. Unfortunately, we have plenty of examples that refute your premise. People ARE stupid, including the people who designed the highly vulnerable smart grid that most of the US is now using for power distribution. Re:sure, no problem (Score:4, Insightful) Re: (Score:2) And to make it even more simple: Everyone, including smart people, makes mistakes. Or gets into a Homer Simpson mood and doesn't take the usual amount of care. Re: (Score:2, Informative) Networked does not imply internet connected. In the same way, if you are using electricity, it does not mean you need to be connected to the electric grid. There is no reason to going analog IF people are not stupid. You may want to be careful using words like "stupid". A reasonably intelligent person would recognize that a purely internal network without internet connectivity is still vulnerable. The internet is just one method of ingress. A malware payload could be introduced through physical media for example. A lack of internet connectivity may make data theft more difficult however in an industrial control application merely getting into internal network and taking control of machinery is all that is necessary. Re:sure, no problem (Score:5, Interesting) said the person volunteering to get up at 3 am to go to the office to reset the a/c system. I can't speak for everyone, but I would rather pay extra for someone to be willing to do that (or do it myself, it shouldn't be a common situation) before I connect important systems to the internet. Having an air gap isn't a perfect solution, but it makes things a lot harder for attackers. Re:sure, no problem (Score:5, Interesting) As a compromise, one can always do something similar to this: 1: Get two machines with a RS232 port. One will be the source, one the destination. 2: Cut the wire on the serial port cable so the destination machine has no ability to communicate with the source. 3: Have the source machine push data through the port, destination machine constantly monitor it and log it to a file. 4: Have a program on the destination machine parse the log and do the paging, etc. if a parameter goes out of bounds. This won't work for high data rates, but it will sufficiently isolate the inner subsystem from the Internet while providing a way for data to get out in real time. Definitely not immune to physical attack, but it will go a long ways to stopping remote attacks, since there is no connections that can be made into the source machine's subnet. Re:sure, no problem (Score:5, Insightful) And that is the use case that caused problems with for Iran with Stuxnet. They had an airgap, but the attackers infected other computers in the area, got their payload on a USB key, and when someone transferred files to the main target, it got infected. That is my understanding of how that situation went down. But once you start thinking along those lines, you start thinking of other attacks that might work. Re: (Score:2) That's part of a larger issue. People will ALWAYS get sloppy and lazy. Part of the security system has to include a way to check the systems and to check the people. Security is not an item in itself. It is only a reference point. You can have "better security" than you had at time X or you can have "worse security" than you had at time X (or the same). Improving security is about reducing the number of people who can EFFECTIVELY attack you. Once you've gotten that down to the minimum number then you increase t Re: (Score:2) Iran needs to learn about superglue on USB ports. How do you suggest they copy files to the computers then? Type them in by hand? Re:sure, no problem (Score:5, Interesting) This is why security should be a system and not an airgap. The idea that a computer should not be on the internet and patting yourself on the back for the idea and calling it a job well done is almost becoming a slashdot meme. Never underestimate what bored shift workers do during night shift. We had one group of people figure out how to watch a divx movie on the screen of an ABB Gas Chromatograph. The problem is more social than technological. Re: (Score:3) That's terrible. For research purposes, I'd like to know how he did it. Re: (Score:3) i remember watching 'nikita' episode where they hacked a computer through its power connection and going "um, that's a bit stretching it..." then, several years later, some proof of concept attack vector like that was demonstrated. assuming that experts in the field can do much more than public knows, it might have been not that much of a stretch after all. i would also imagine that attacks for analog systems have been polished quitealot, given that they have been around longer. not that they could not be mor Re:sure, no problem (Score:4, Informative) When a local startup went out of business, one of the things the failed startup had at their bankruptcy auction was an electric motor that would spin a crankshaft/flywheel... only for a generator head on the other end to turn the motion back into electricity. I wondered why they had something that inefficient until I found that it was a "power firewall"... i.e. to mitigate attacks via the mains power. Re: (Score:2) That's known as a data diode, and it's a great idea (and can be done at higher speeds than RS232, if necessary; e.g. you can do something similar with an Ethernet cable). It does have one big limitation, though -- it won't let you control the system from off-site. If that's okay, then great, but often off-site control is something that you want to have, not just off-site monitoring. Re: (Score:3) Stuxnet proved that air gapping isn't enough. Air gapping is not a 100% fix. Its part wishful thinking and part buzz phrase which gets thrown around carelessly. If someone guarantees nothing will go wrong because of an air gap or one way serial connection then they are full of shit. Think about it, how many computers have you ever come across that could function on a 100% "air gap"? What about updates or software fixes? You could write a control program and debug the hell out of it to ensure nothing will go Re: (Score:2) >Or maybe you could isolate control systems from the Internet Oh, you mean like all those systems Stuxnet infected?... [wikipedia.org] Re:sure, no problem (Score:5, Informative) A more common control with this type of critical limits is an elevator. The digital controls calls the cars to the floors, opens doors, etc. Between the digital world and electrical/mechanical world is control relays. Limit switches are in pairs. One you are used to. The elevator arrives at a floor and there is a pause while the fine alignment is completed to level with the current floor. The hard limit on the other hand such as exceeding safe space below bottom floor or past the top floor, does interrupt power to the control for the power relays. One drops power to the motor and the other drops the power to the brake pick solonoid. Brakes fail safe in an elevator. Need power to release the brakes. Yea, it is a pain to reset the elevator at 3 am with someone stuck inside, but that is better than a runaway elevator. And no, there is no software defeat for the hardware limit switches. Re: (Score:2) "said the person volunteering to get up at 3 am to go to the office to reset the a/c system." That is not a realistic scenario. I know what you are saying but an a/c system isn't turned on at 3AM to begin with (unless you like wasting electricity). Most likely you will have these systems in a plant that runs 24/7 with 3 shifts and someone will know how to handle minor breakdowns and press a reset button if need be. A major breakdown can be solved in one of two ways: remotely or someone has to come on site. G Re: (Score:2) No. Said the person who should have known that the Stuxnet attack had an attack vector that didn't have anything to do with the internet. The actual machines it was aimed against actually weren't connected to the internet at all. So the comment is just dumb. Because no analog system has (Score:2, Insightful) ever been compromised :) Physical kill switches, human operated are not simply analog (one might argue they are digital at the switch level). Analog might be the wrong word, since analog systems have been repeatedly compromised (from macrovision, to phreaking boxes, etc, etc). keep it off a communications network, even off local networks if they are uber critical. Re:Because no analog system has (Score:5, Insightful) Maybe that wasn't his point, but it's still a good one. Re: (Score:3) No, it is not. If the remote analog access is by a dedicated wire (and that is what you do in analog), then the attacker has to have physical access to that wire. Usually the "remote analog" access is through an analog circuit provided by a telecommunications company between two locations called an ISDN circuit. If the locations are far enough, your so called "dedicated wire" gets muxed, and then transmitted over a digital trunk which may be copper or optical with a bunch of other "dedicared wires" Th Re: (Score:2) Usually the "remote analog" access is through an analog circuit provided by a telecommunications company between two locations called an ISDN circuit. What does the "D" in ISDN stand for? Re: (Score:3) What does the "D" in ISDN stand for? "does." As in It Still Does Nothing. Old telecom joke. Re: (Score:3) No, it is not. If the remote analog access is by a dedicated wire (and that is what you do in analog), then the attacker has to have physical access to that wire And that dedicated wire could control digital circuitry or even a conventional computer running software. So what is your point? The only advantage of analog is that control methods are generally so limited that doing something stupid like sending a critical control signal over the Internet is not possible. However, the cost is very very high and it doesn't do anything that following a policy of never sending controls over the Internet would not do. Further, without such a policy, the security advantage i Re: (Score:2) No, it is not. If the remote analog access is by a dedicated wire (and that is what you do in analog), then the attacker has to have physical access to that wire. Come on, does not body know basic EE anymore? No wonder all this insecurity and stupidity happens... It's not clear you even understood the point before replying. Maybe you did, but your comment doesn't make that clear. analog control != airgap or dedicated line vuln Re: vulnerable to someone hacking the max RPM limit on some centrifuges etc, since the attacker would need to physically alter the control mechanisms/analog electronics to alter the rpm. obviously such systems are more demanding to operate too.. they are more expensive to do and more prone for faults though... Consider the dreaded EMP attack. By producing a sufficiently powerful EM impulse sufficiently close, even digital circuits can be pushed into analog domains where they can become deranged or destroyed. That's an extreme, but it shows that attacks need not be solely on the wire or through the wire. A more finely-tuned attack might be able to simply strobe some other line in a cable bundle to a degree that its effects leak past whatever shielding might be present and induce a false impression of a control or d Re: (Score:2) So, I work for an ISP... most major infrastructure already has a dedicated wire. We install them all the time. They are pretty rare. They control Damns, power plants, etc... etc... They usually lead from the facility to a government building in town. Most are very small... 56k or so. The only non-government uses I've seen for them are usually 2 manufacturing plants built close to each other. They run this dedicated line between them. Of other note, many power plants forbid ANY copper/metal wire from entering Stuxnet (Score:4, Informative) "Or maybe you could isolate control systems from the Internet." Wasn't Stuxnet partially a sneakernet operation? I can't imagine Iran being so stupid to connect secret centrifuges to the internet. The only way to win is not to play. Re:Stuxnet (Score:4, Informative) I work on a telescope whose Siemens PLC is so old that it has a PROM in a 40 pin DIP package for firmware updates. Not that we've touched the firmware in 20 years. After all, it works. And it ought to work for another 20 years, as long as we replace the dried-out aluminum electrolytic capacitors regularly. Challenge accepted (Score:2). This is very, very old (Score:4, Insightful) It is called self-secure systems. They have limiters, designed-in limitations and regulators in there that do not permit the systems to blow themselves up and there is no bypass for them (except going there in person and starting to get physical). This paradigm is centuries old and taught in every halfway reasonable engineering curriculum. That this even needs to be brought up shows that IT and CS do not qualify as engineering disciplines at this time. My guess would be that people have been exceedingly stupid, e.g. by putting the limiters in software in SCADA systems. When I asked my EE student class (bachelor level) what they though about that, their immediate response was that this is stupid. Apparently CS types are still ignoring well-established knowledge. Re:This is very, very old (Score:5, Insightful) Re: (Score:2) Re: (Score:2) That's because CS is math, not engineering. There are rather more disciplines than that. Theoretical CS is definitely towards the math side of things, but that's really at one end of the spectrum. The study of how people and computers interact is definitely part of CS, but isn't either engineering or math; it's closer to psychology. On the other hand, Computer Engineering is definitely an engineering discipline (as you'd expect with anything doing construction of physical objects on a mass scale). Software Engineering is unusual though, as the costs o Re: (Score:3) That's because CS is math, not engineering. Computer Engineering is engineering, Computer Science is the study of the mathematics of computer systems. CE is a lot rarer than CS though, so a lot of people with CS degrees try to be engineers, but aren't trained for it. The difference between CS and CE is usually just the name the department chooses, not their course work. In other words it is usually a cosmetic difference. This is not true, or even approximately true. CE is a discipline of EE. It is created mostly by learning EE, with a few computer architecture classes, lots of Verilog, and a few CS classes. In most universities, the program is offered by the EE college. Re: (Score:2) All blame is on the engineers if they don't build a self-secure system (or management if it's their fault). Re:This is very, very old (Score:4, Insightful) My guess would be that people have been exceedingly stupid, e.g. by putting the limiters in software in SCADA systems. Or they just did what they were told by management. After all, software solutions to problems tend to be a fraction of the price of dedicated hardware solutions, and can be updated and modified later. Apparently CS types are still ignoring well-established knowledge. You can't build a SCADA system with *just* CS types; so apparently all your 'true engineers' were also all asleep at the wheel. What was their excuse? Seriously, get over yourself. The CS types can and should put limiters and monitors and regulators in the software; there's no good reason for them not to ALSO be in there; so when you run up into them there can be friendly error messages, logs, etc. Problems are caught quicker, and solved easier, when things are generally still working. This is a good thing. Surely you and your EE class can see that. Of course, there should ALSO be fail safes in hardware too for when the software fails, but that's not the programmers job, now is it? Who was responsible for the hardware? What were they doing? Why aren't those failsafes in place? You can't possibly put that at the feet of "CS types". That was never their job. Re: (Score:2) Re: (Score:2) Way to shunt blame! I design code, your "EEs" design electrical hardware. I have been delivered hardware without such safeties. I could simply refuse to deliver code for the platform -- it will simply be offshored. Just costs me work. Re: (Score:2) Your "EEs" actually "code" too, but in disguise. PLCs are programmed, just (usually) not in written code, but rather, in Ladder Diagram or Function Blocks. But you know that, right? I'm a programmer, but also a hobby electronics guy. And I've worked with PLCs. And I know for sure that "CS" types are never involved in these projects. The programming required is minimal (as usual with "elegant" engineering solutions), so a CS degree isn't required. It's much more about the hardware than software. A CS guy usual Re: (Score:2) Yep I'll be the first to call you old fashioned. Just like I would also call the article ridiculous. Digital positioners as well as advanced digital electronics in field instrumentation has been one of the best things to come to process industry. Your old analogue valve may be unhackable, but it will also be unable to report advanced diagnostic data such as torque, stiction, and won't be able to report stroke test results, or alarm on deviations from normal performance parameters. So pat yourself on the back Re: (Score:2) Re: (Score:2) Any halfway reasonable engineering curriculum also teaches that engineering is all about tradeoffs, and that safety and security are variables like any other. Hardware based safety and security features are expensive, costs that aren't made up for by reductions in risk in many applications. Furthermore, s analog vs digital isnt the problem (Score:5, Insightful) analog is actually more suceptable to interference generated by rather simple devices, as there is no error checking on whats being fed to the system the problem is your reactor is for some fucking reason hooked to the same network as facebook and twitter Re: (Score:2) Rats, I knew I shouldn't have "liked" nuclear meltdown. Good idea (Score:5, Insightful) There's a lot to be said for this. Formal analysis of analog systems is possible.The F-16 flight control system is an elegant analog system. Full authority digital flight control systems made a lot of people nervous. The Airbus has them, and not only do they have redundant computers, they have a second system cross-checking them which is running on a different kind of CPU, with code written in a different language, written by different people working at a different location. You need that kind of paranoia in life-critical systems. We're now seeing web-grade programmers writing hardware control systems. That's not good. Hacks have been demonstrated where car "infotainment" systems have been penetrated and used to take over the ABS braking system. Read the papers from the latest Defcon. If you have to do this stuff, learn how it's done for avionics, railroad signalling, and traffic lights. In good systems, there are special purpose devices checking what the general purpose ones are doing. For example, most traffic light controllers have a hard-wired hardware conflict checker. [pdhsite.com] If it detects two green signals enabled on conflicting routes, the whole controller is forcibly shut down and a dumb "blinking red" device takes over. The conflict checker is programmed by putting jumpers onto a removable PC board. (See p. 14 of that document.) It cannot be altered remotely. That's the kind of logic needed in life-critical systems. Re: (Score:2) That's interesting they have a different system cross checking. But what happens when they are in disagreement? Who wins? There might not be time for the pilots to figure it out. Re: (Score:3) It's not that the secondary system is 'cross checking' or comparing results. They are really just monitoring circuits with a particular set of rules embedded in separate circuitry that just makes sure the primary system never breaks those rules. It is effectively the master control and will always 'win' if there is a problem. They are designed to be simple, robust and if possible, completely hardware based. Some other examples are 'jabber' control hardware lockouts to stop a radio transmitter from crashing a Re: (Score:2) That's interesting they have a different system cross checking. But what happens when they are in disagreement? Who wins? There might not be time for the pilots to figure it out. Then the minority report is filed in the brain of the female, who is obviously the smarter one. Duh. Didn't you see the movie? Re: (Score:2) -. Re: (Score:2) For example, most traffic light controllers have a hard-wired hardware conflict checker. [pdhsite.com] If it detects two green signals enabled on conflicting routes, the whole controller is forcibly shut down and a dumb "blinking red" device takes over. That's really cool Re: (Score:2) Code written in a different language is totally helpless here. Unless you believe the avionics is running on an interpreter instead of compiled code. Once compiled, the code is dialect free. An even if it is not my field, I doubt any sane designer will design avionics to run on a interpreter. We are talking about realtime systems here. A different kind of CPU makes sense if you want to isolate the system from bugs in hardware that may be specific to a kind of CPU. Re:Good idea (Score:4, Insightful) "Code written in a different language is totally helpless here" No it isn't. Some languages have different pitfalls to others eg, C code often has hidden out of bounds memory access issues , Ada doesn't because checking these is built into the runtime. Also different languages make people think in slightly different ways to solve a problem which means the chances of them coming up with exactly the same algorithm - and hence possibly exactly the same error - is somewhat less. Re: (Score:2) Compiled code that's functionally identical will differ depending on the language, though. It'll even differ when the same language is used but you merely change the compiler (or merely even change some options to the same compiler - for example, a latent bug may manifest itself by merely changing the compiler's optimization setting) To see this happen just compile to assembler a simple "Hello world" program using GCC, then do the same with the LLVM compiler. The outputs will look different even though the Re: (Score:2) There's a lot to be said for this. There's a lot to be said against this as well. Digital process control has opened up a whole world of advanced diagnostics which are used for protecting against critical process excursions. Most industrial accidents had failed instrumentation as a contributing factor. Most instrumentation these days have so much internal redundancy and checking that you're missing out on a whole world of information in the analogue realm. So you got a pressure reading on the screen is that number the actual pressure or is t yes isolate (Score:3) Unkown Lamer has it. tl;dr - using analog in security situations would be obvious if "computer security" wasn't so tangled in abstractions Sure someone may point out that the "air gap" was overcome by BadBios... [slashdot.org] but that requires multiple computers with speakers and microphones connected to an infected system IMHO computer security (and law enforcement/corrections) has been reduced to hitting a "risk assessment" number, which has given us both a false sense of security & a misperception of how our data is vulnerable to attack 100% of computers connected to the internet are vulnerable...just like 100% of lost laptops with credit card data are vulnerable Any system can have a "vulnerability map" illustrating nodes in the system & how they can be comprimised. I imagine it like a Physical Network Topology [wikipedia.org] map for IT networking only with more types of nodes. This is where the "risk assessment" model becomes reductive...they use statistics & infer causality...the statistics they use are historical data & they use voodoo data analysis to find **correlations** then produce a "risk assessment" number from any number of variables. If I'm right, we can map every possible security incursion in a tree/network topology. For each node of possible incursion, we can identify every possible vulnerability. If we can do this, we can have alot more certainty than an abstract "risk assessment" value. Analog comes into play thusly: if you use my theory, using **analog electronics** jumps out as a very secure option against "cyber" intrusions. Should be obvious! "computer security".... besides digital or analog, for safety, use physics (Score:5, Insightful) Analog vs. digital, fully connected vs less connected - all can fail in similar ways. If it's really critical, like nuclear power plant critical, use simple, basic physics. The simpler the better. You need to protect against excessive pressure rupturing a tank. Do you use a digital pressure sensor or an analog one? Use either, but how also add a blowout disc made of metal 1/4th as thick as the rest of the tank. An analog sensor may fail. A digital sensor may fail. A piece of thin, weak material is guaranteed to rupture when the pressure gets to high. Monitoring temperature in a life safety application? Pick analog or digital sensors, ei ther one, but you better have something simple like the vials used in fire sprinklers, or a wax piece that melts, something simple as hell based on physics. Ethanol WILL boil and wax WILL melt before it gets to be 300 F. That's guaranteed, everytime. New nuclear reactor designs do that. If the core gets to hot, something melts and it falls into a big pool of water. Gravity is going to keep working when all of the sophisticated electronics doesn't work because "you're not holding it right". Re: (Score:2) Re: (Score:3) Re: (Score:3, Funny) Re: (Score:2) Inherently safe design and mechanical safety systems are the final word you are absolutely correct, however in the digital vs analogue debate I would not be so quick to say use either. Digital systems have allowed a world of advanced diagnostics to be reported. Your pressure transmitter can now not only tell you what it thinks the pressure is, but it can also tell you if the tapping / impulse line is plugged. Your valve can report when it's near failure or if torque requirements are increasing, or stiction No, it's education (Score:5, Insightful) Such systems are not insecure because they are digital or involve computers or anything. (seriously I doubt the guy even understands what digital and analog means) Such systems are insecure because they are unnecessarily complex. Let's take the Stuxnet example. That system designed to control and monitor the speed at which centrifuges spin. That's not really a complex task. That's something you should be able to solve in much less than a thousand lines of code. However the system they built had a lot of unnecessary features. For example if you inserted an USB stick (why did it have USB support) it displayed icons for some of the files. And those icons can be in DLLs where the stub code gets executed when you load them. So you insert an USB stick and the system will execute code from it... just like it's advertised in the manual. Other features include remote printing to file, so you can print to a file on a remote computer, or storing configuration files in an SQL database, obviously with a hard coded password. Those systems are unfortunately done by people who don't understand what they are doing. They use complex systems, but have no idea how they work. And instead of making their systems simpler, they actually make them more and more complex. Just google for "SCADA in the Cloud" and read all the justifications for it. Battlestar Galactica (Score:3) Reminds me a bit of one of the tropes from battlestar galactica. Adama knew from the previous war that the cylons where master hackers and could disable battlestars by breaking into networks via wireless and then using them to disable the whole ship, leaving them effectively dead in the water, so he simply ordered that none of his ship ever be networked and that the ship be driven using manual control. Later on they meet the other surviving battleship, the pegasus, and it turns out that only survived because its network was offline due to maintainance. Its not actually a novel idea in militaries. I remember in the 90s doing a small contract for a special forces group I can't name, and asked them about their computer network. He said they used "Sneaker-net", which is that any info that needed transfer was put on a floppy and walked to its destination, thus creating an air gap between battlefield systems. I guess this isn't quite that, but it certainly seems to be a sort of variant of it. Computer viruses predated the internet ... (Score:2) isolate control systems from the Internet. (Score:2) Editor or submitter said isolate control systems from the Internet. Stuxnet has shown that it is not enough. You can still be infected by an USB key. What a pathetic uninformed crock of sh artic (Score:2) Analog vs digital has nothing to do with "cyberterrorism". Analog refers to systems with an infinite number of states, digital refers to systems with a finite number of states. If properly designed, both are perfectly safe. Cyber security has nothing to do with digital or analog, and everything to do with software and networking. Which have nothing whatsoever to do with the analog vs digital design choices. TFA reads like a science essay from a 3rd grader who write with technical words to look smart, but does Re: (Score:2) The problem is that modern digital systems have to many possibilities. You can not be certain that a security system with in field reprogramming abilities is safe. It may be expensive (in both space and dollars) but critical systems should have safe limits embedded in the hardware. A powerplant should not be able to increase the output voltage without hardware modifications. A nuclear plant must fail safe, even if the software is hacked. In essence you are right: It doesn't matter if those securities are in d Maybe you could (Score:2) >Or maybe you could isolate control systems from the Internet. Yes, maybe is the keyword there. Set up everything to be nice and air-gapped, and maybe some joker won't bring in his malware-infected laptop the next day and temporarily hook it up to your "secure network" in order to transfer a file over. Or then again, maybe he will. Who knows? This fixes it as a side effect (Score:3) The core problem is that "data" and "code" are being sent over the same path - the reporting data is being sent out, and the control "data" is being sent in, but it's over a two-way Internet connection. If you had an analog control system that was openly accessible in some way, you'd have the exact same problems. Or you could have a complete separate, non-public digital control connection that would be secure. But nobody wants to lay two sets of cable to one device, and there's a convenience factor in remote control. So since security doesn't sell products*, but low price and convenience features do, we got into our current situation. It's not "digital"'s fault. It's not "analog"'s fault. It probably would have happened even if all our long-range communication networks were built of hydraulics and springs. * For those who are about to point out how much money antivirus software makes, that's fear selling, not security. Fear moves product *very* well. "Isolate from the Internet" is hard (Score:3) Air-gap alone is not enough. Stuxnet travelled via USB sticks. And if your hardware (or anything connected to it) has a wireless interface on it (Bluetooth, Wifi, etc), you have a problem ... an operator might bring a hacked phone within range, for example. Simplifying the hardware down to fixed-function IC or analog reduces the attack surface much more than attempts to isolate the hardware from the Internet. Re: (Score:2) Air-gap alone is not enough. Stuxnet travelled via USB sticks. The Stuxnet attack was (for the Iranians) a failure of operational security. The attackers knew exactly what hardware/software was being used and how it was set up. If the Iranians had one less centrifuge hooked up, or a different SCADA firmware version, the worm would have never triggered. There is such a thing as security through obscurity. It's never a complete solution, but it should always be your first line of defense. Re:"Isolate from the Internet" is hard (Score:4, Interesting) Simplifying the hardware down to fixed-function IC or analog reduces the attack surface much more than attempts to isolate the hardware from the Internet. It also dramatically reduces the functionality. You've saved yourself from hackers only to get undone by dangerous undetected failure of instrumentation. Anyone who boils a security argument down to stupefying everything has missed a world of advancements which have come from the digital world. Thanks but no thanks. I'm much more likely to blow up my plant due to failed equipment than due to some hacker playing around. Perhaps analog isn't the right term (Score:2) The key is hard stop rather than analog. For a simple example, imagine 3 machines that draw a great deal of inrush current using typical start/stop controls. Since we're in the digital age, we put them under computer control. The controller can strobe the start or stop lines for the 3 machines. Now, they must not all be started at once or they'll blow out everything back to the substation. We know they must be started 10 seconds apart at least. Doing it the "digital way" we program the delay into the control Better design and discipline (Score:2) Whether it is a series of mechanical cogs or a digital controller problem in abstract seems not so much selection of technology as it is proliferation of "nice to have" yet possibly unnecessary capabilities.. widgets which may not offer significant value after closer inspection of all risks. Is remote management really a must have or can you live without? Perhaps read-only monitoring (cutting rx lines) is a good enough compromise... perhaps not all systems need network connections, active USB ports..etc Th Obvious solution is obvious (Score:3) The hubris of some thinking that everything can be linked to the internet while maintaining acceptable security is ignorant. Some systems need to be air gapped. And some core systems just need to be too simple to hack. I'm not saying analog. Merely so simple that we can actually say with certainty that there is no coding exploit. That means programs short enough that the code can be completely audited and made unhackable. Between airgapping and keeping core systems too simple to hack... we'll be safe from complete infiltration. Re: (Score:2) The hubris of some thinking that everything can be linked to the internet while maintaining acceptable security is ignorant. Actually I find the entire debate boiling down to one side thinking everything is completely open directly connected to the internet almost as laughable as the other side thinking air gapping is the answer. I'll meet you in the middle. Air-gapping is not a solution in many cases. You simply can't run many modern plants without the ability to get live data out of the system and whisk it across the world. Does that mean your control system has an ADSL modem attached? Hell no. But there are many ways to network Re: (Score:2) As to modern plants requiring remote control, I would look at that very carefully and do my best to limit it. Most plants are manned 24/7. There's no reason those plants couldn't take directions from grid operators and manually throttle the plant up or down. Sure, the standby diesel plants might throttle up and down a lot but most of the large coal, hydro, etc plants tend to hold a given output. As to insiders hacking the system, there is no solution to that issue so that's a bullshit counter argument. An ins Lots of unproven assertions here. (Score:4, Interesting) . The key is cost (Score:2) Re: (Score:2) perspective of a controls engineer-- (Score:5, Insightful) There are billions of embedded systems out there, and most of them are not connected to the internet. I've designed embedded control systems for most of my career, and can attest to the many advantages a digital control system has over an analog one. Analog still has it's place (op-amps are pretty fast & cheap), but it's often quite useful to have a computer do it. Most capacitors have a 20% tolerance or so, have a temperature tolerance, and have values that drift. Your control system can drift over time, and may even become unstable due to the aging of the components in the compensator (e.g. PI, PID,lead/lag) .. Also a microcontroller wins hands down when it comes to long time constants with any kind of precision (millihertz). It's harder to make very long RC time constants, and trust those times. Microcontrollers/FPGA's are good for a wide control loops including those that are very fast or very very slow. Microcontrollers allow you to do things like adaptive control when you plant can vary over time like maintaining a precision temperature and ramp time of a blast-furnace when the volume inside can change wildly.. They also allow you to easily handle things like transport/phase lags, and a lot of corner conditions, system changes -- all without changing any hardware.. I am happy to see the same trend with software-defined radio, where we try to digitize as much of the radio as possible, as close to the antenna as possible.. Analog parts add noise, offsets, drift, cross-talk exhibit leakag,etc.. Microcontrollers allow us to minimize as much of the analog portion as possible. Tautology (Score:4) Re: (Score:3) No. A "cyber-attack" is an attack on a "cyber". Whatever the fuck that is. Cybernetics refers to control and feedback systems, which is traditionally an analogue discipline. Today "cyber", for whatever reasons, refers to doing things over teh intarwebz. So the problem is having old cyber connected to new cyber. (BTW, "cyber" has something to do with "android" when you stay within either one of the "old" or "new" namespaces.) Hmm... (Score:2) Re: (Score:2) The Therac-25 problems could have been easily prevented with better software processes and practices; no hardware safeguards were/are needed. If the hardware had been developed like the software was, the hardware would likely have failed too. The simple fact is this : (Score:2) If a device can be controlled with an electronic signal, that means that the device can be controlled with an electronic signal. Sometimes that signal will come from where you want it to, but there can be no guarantee that it will not come from somewhere else. Perhaps. (Score:2) "the analog protection systems have one big advantage over their digital successors: they are immune against cyber attacks." Unfortunately they are not immune to idiotic engineers as we learned the hard way. slashdot needs... (Score:2) Re: (Score:2) Yes it did. It was transmitted via USB stick.
https://it.slashdot.org/story/14/03/18/021239/is-analog-the-fix-for-cyber-terrorism?sdsrc=prev
CC-MAIN-2016-40
refinedweb
8,159
62.58
A resource file contains static, non-executable data that is deployed with an application. A common usage for resource files is error messages and static user interface text. Resource files allow you to easily change the data without touching the application code. The .NET Framework provides various avenues for working with resource files. This includes comprehensive support for the creation and localization of resources, as well as a simple model for packaging and deploying localized resources. (Localized means it is developed for a certain culture or language, thus you can use resources to provide application access to more than one language.) More information about resource files Resources can store data in a variety of formats, including strings, images, and persisted objects. Note that to write persisted objects to a resource file, the objects must be serializable. Also, Microsoft advises against using resource files for storing password or other sensitive data. Resource files use the .resx file extension. When developing ASP.NET applications via Visual Studio .NET (VS.NET), you will notice a resource file for each Web form used (select the Show All Files icon if they do not appear). Resource files may also be added if you need more, or you are not using Visual Studio. Like almost all .NET files, resource files are text based and easily edited with your favorite text editor. However, resource files are XML, so valid XML is required. The XML schema is included with resource files created in VS.NET. Listing A shows a simple resource file associated with an ASP.NET Web form. A quick review of the XML reveals the XSD comprises almost this entire example. The data element contains our data values. The attributes and elements contained within the data element are defined in the XSD. This includes the following data values: - Name: The value used to retrieve the specific data from the resource file. It is analogous to a variable name. - Type: The type of data stored. - Value: The value assigned to the variable. A quick glance at the XSD reveals this is an option (minOccurs is 0), and it can only have one value (maxOccurs is 1). - Comment: Any additional information you want to include with the data. The value element may contain simple text (as in our example), or it may contain more complex data such as a serialized object or a bitmap image. At this point, we have the data in the resource file, but how is it used in the application? Let's turn our attention to the .NET classes used to access resource data. Working with resource data The .NET Framework provides various classes for working with resources in the System.Resources namespace. One good example is the ResourceManager class, which provides access to culture-specific resources. However, if no culture is designated, it falls back to the default culture. Our example does not have multiple cultures defined, so the default is utilized with the default being the resources attached to the base application DLL. The code in Listing B uses the ResourceManager class to use the data stored in our resource file. (Listing C features the equivalent VB.NET.) The code runs and retrieves the value from the specific resource file—in this case, it is the resource file associated with the Web form. The text from the data element is displayed in the Web page. This is very simple, but it showcases the flexibility afforded by resource files. You could easily store all user interface text (well, at least the text that is relatively static). This would include toolbar labels, titles, copyright information, and so forth. As previously stated, you can add new resource files to the project as well. To work with data stored in added resource files, you will use the resource file's name just like TRResource.WebForm1 was used in the previous listing. The following line uses the resource file named Example: ResourceManagerrm = new ResourceManager("NamespaceName.Example", a); Here's the equivalent VB.NET code: Dim rm As ResourceManager rm = New ResourceManager("NamespaceName.Example", a) Although this article focuses on ASP.NET, you could use the same approach with a Windows Forms application. A big difference is resource files are not included with Windows applications by default in VS.NET, so you'll have to add resource files to the project. Application roll-out If you are developing an application that uses only one language, then pushing it to a Web server is no different than any other project. However, if you are providing localized text via resource files, you'll have to follow a certain approach to ensure .NET can locate the necessary resources for a specific culture/language. This is well beyond the scope of this article, but MSDN provides a wealth of information in this library entry. More options Resource files are handy and time-saving when basic site text needs to be changed. You may think that directly editing the text on an ASP.NET page is just as easy, but often text stored in a resource file spans multiple pages. The best application of resource files occurs when providing multiple language versions of a site..
http://www.techrepublic.com/article/simplify-web-site-maintenance-by-using-resource-files-in-your-net-apps/6051876/
CC-MAIN-2016-36
refinedweb
862
58.18
When writing applications, I often have a good idea about what it's supposed to be doing when it's running, after all I wrote/helped write the darn thing. But once you start adding WCF services, databases, authentications, and other things not controlled directly by you, it's hard to make sure the program you wrote is working like you intended it to. Was the WCF call successful? Was the database authenticated successfully? Naturally, if your application is working then everything is doing what you programed it to. But when it's not - and there will be times it doesn't - it's good to know where the problem lies, and what caused it. In the past, I've had several instances when I was sitting at my desk and someone would come over and say "Casey! Your death ray controller application is down!" in which I would proceed to spend time debugging the application to find out why it all of a sudden wasn't working when it used to. Maybe the database is down... maybe the host machine ran out of resources... maybe there was a change to a server that caused one of my components to break. It was usually something odd. But the problem wasn't that it broke, the problem was the length of time it took to find what the problem was. "What do you mean you have no idea?!" They would say. "You WROTE the program!" Soon I learned that adding some logging to the application can QUICKLY shorten the time it takes to determine what the problem is. It also helps to give you some piece of mind that the application is running, so you can feel comfortable handing the application off to someone else. There are a couple of different ways of logging your application, and a couple of different situations in which you should do the logging. For logging you have: Event Logs, DebugView, text file, web page, and console. Event Logs are the least helpful to other programmers, but most helpful to admins. The opposite is true for DebugView. Text files, web pages, and the console works good also for some cases and can be a helpful way to check the status of your application. So let's make a class called "DeathRayController", and do some logging against it... class DeathRayController { public DeathRayController() { } public void Zap() { if (IsDeathRayConnected() == true & IsDeathRayTempOkay() == true) { Console.WriteLine("ZZZZZAAAAAPPPPPP!!!!"); } } private bool IsDeathRayTempOkay() { //Since we don't have an ACTUAL death ray (yet), lets simulate some conditions... Random rnd = new Random(DateTime.Now.Millisecond); int action = rnd.Next(0, 20); if (action <= 5) { return false; } //Death Ray is too hot! Can't use! if (action == 13) { throw new DeathRayOutOfMemoryException(); } //Weird exception we dont handle... return true; } private bool IsDeathRayConnected() { //again, no death ray, so lets simulate some conditions. Random rnd = new Random(DateTime.Now.Millisecond); int action = rnd.Next(0, 20); if (action == 5 | action == 7 | action == 14) { throw new UnableToConnectToDeathRayException(); } //unable to connect! return true; } } public class UnableToConnectToDeathRayException : Exception { } public class DeathRayOutOfMemoryException : Exception { } ...and then we run through it 10 times... static void Main(string[] args) { DeathRayController dr = new DeathRayController(); for (int i = 0; i < 10; i++) //Lets use the Death Ray 10 times! { dr.Zap(); Console.WriteLine(); System.Threading.Thread.Sleep(1000); //Just wait a bit... } } Now when we run this, a few things can happen. We could see a console output ZZZZZAAAAAPPPPPP!!!!, or we can see nothing, or we throw an exception. If this is a console application, we can just display the problem on the console, and hope then when it's run, someone will see this. If it's a service though (death ray service?), nobody will get to see a console, or maybe it's run on a remote machine somewhere that's not always monitored (on the moon!) This is when logging comes in handy. Let us create a logging class, something that we can use for... logging. class Logger { public static void LogError(string message) { Console.WriteLine("ERROR: " + message) } } Then, we can alter our DeathRayController class to take advantage of the logging, here I've altered the IsDeathRayTempOkay class... private bool IsDeathRayTempOkay() { //Since we don't have an ACTUAL death ray (yet), lets simulate some conditions... try { Random rnd = new Random(DateTime.Now.Millisecond); int action = rnd.Next(0, 20); if (action <= 5) { Logger.LogError("Death Ray is too hot, please wait for cooling."); return false;//Death Ray is too hot! Can't use! } if (action == 13) { throw new DeathRayOutOfMemoryException(); //Weird exception we dont handle... } } catch (DeathRayOutOfMemoryException) { Logger.LogError("Exception Thrown! Death Ray is out of memory!"); } return true; } I've highlighted the area of my code when we access our logger. Again, it's only going to log to the console. But that's okay for now... lets see this thing in action... Okay! now we see how our our logger is logging to the console... Lets expand it a little bit, and write to the DebugView. What the heck is the DebugView? Debug view is part of the SysInternals that you can download from technet here. The benefit to DebugView, is you can write to the debugger and it will display live on the DebugView application. No need for the console, and you can access it from a different machine! Let's alter our class so we can take advantage of the DebugView application. First, we need to add a reference to System.Diagnostics; ...and then all we have to do is add a new line to our class... public static void LogError(string message) { Console.WriteLine("ERROR: " + message); Debug.Write(message, "DeathRayControllerERROR"); } now, if we open up DbgView, and run our application a few times, we can see the log that's produced: As you can see, we get all the benefits of console-like logging without even having a console window. Again, this works great when you have an application that doesn't open in a console window, or isn't installed on a machine that you have access to. Just like a console window, you can see this live as the application is running. The problem is, of course, it wont tell you what WAS happening, only what IS happening. Though the DbgView still has it benefits to being used. For knowing what HAS happened, you need to either write to a text file, or write to the event log. In Part 2, I will discuss connecting to the Event Log... Great article, and well written. DebugView is a fantastic tool. Debug.WriteLine will get compiled out in Release mode builds. Trace.WriteLine works in both Debug and Release builds.
https://blogs.msdn.microsoft.com/casey_the_net_consultant/2011/02/15/logging-your-application-for-fun-and-profit-part-1/
CC-MAIN-2018-22
refinedweb
1,109
75.81
On Mon, Aug 13, 2007 at 05:13:01PM -0400, Brandon S. Allbery KF8NH wrote: >. It's the *effect* of a monad, not the *side* effect. The type of >>= defines this dependency. And when you have a chain of dependencies, that is sometimes referred to as a sequence. True, it's not mystical, but it's still sequenced. Try executing: do { x <- return 2; undefined; return (x*x); } in any monad you like, and you'll find that regardless of the *data* dependencies (the return value of this monadic action is unambiguous), the undefined is evaluated *before* the value 4 is returned. -- David Roundy Department of Physics Oregon State University
http://www.haskell.org/pipermail/haskell-cafe/2007-August/030471.html
CC-MAIN-2014-41
refinedweb
110
66.84
19 December 2011 09:00 [Source: ICIS news] LONDON (ICIS)--Here are some of the top stories from ICIS Europe for the week ended 16 December 2011. Europe MEG prices unlikely to fall further as market balances out European monoethylene glycol (MEG) prices are unlikely to decrease further as availability balances out with demand, sources said on Friday. Europe DOP producers turn down sales due to low prices European producers of dioctyl phthalate (DOP) are turning down sales to avoid negative margins, market sources said on Thursday. ?xml:namespace> OPEC cuts 2012 world oil demand growth forecast The forecast for world oil demand growth in 2012 has been revised down by 100,000 bbl/day to 1.1m bbl/day, because of a slowdown in the global economy, OPEC said on Tuesday. UN’s The outcome of the
http://www.icis.com/Articles/2011/12/19/9517415/europe-top-stories-weekly-summary.html
CC-MAIN-2015-11
refinedweb
138
57.91
After all the hard work you did to train your tree ensemble model, you now have to deploy the model. Deployment refers to distributing your model to other machines and devices so as to make predictions on them. To facilitate the coming discussions, let us define a few terms. Host machine : the machine running Treelite. Target machine : the machine on which predictions will be made. The host machine may or may not be identical to the target machine. In cases where it’s infeasible to install Treelite on the target machine, the host and target machines will be necessarily distinct. Shared library : a blob of executable subroutines that can be imported by other native applications. Shared libraries will often have file extensions .dll, .so, or .dylib. Going back to the particular context of tree deployment, Treelite will produce a shared library containing the prediction subroutine (compiled to native machine code). Runtime package : a tiny fraction of the full Treelite package, consisting of a few helper functions that lets you easily load shared libraries and make predictions. The runtime is good to have, but on systems lacking Python we can do without it. In this document, we will document two options for deployment. We will present the programming interface each deployment option presents, as well as its dependencies and requirements. Contents Option 1: Deploy prediction code with the runtime package Option 2: Deploy prediciton code only Dependencies and Requirements If feasible, this option is probably the most convenient. On the target machine, install the Treelite runtime by running pip: python3 -m pip install treelite_runtime --user Once the Treelite runtime is installed, it suffices to follow instructions in First tutorial. With this option, neither Python nor a C++ compiler is required. You should be able to adopt this option using any basic installation of UNIX-like operating systems. The target machine shall meet the following conditions: A C compiler is available. The C compiler supports the following features of the C99 standard: inline functions; declaration of loop variables inside for loop; the expf function in <math.h>; the <stdint.h> header. GNU Make or Microsoft NMake is installed. An archive utility exists that can open a .zip archive. 1. On the host machine, install Treelite and import your tree ensemble model. You should end up with the model object of type Model. ### Run this block on the **host** machine import treelite model = treelite.Model.load('your_model.model', 'xgboost') # You may also use `from_xgboost` method or the builder class 2. Export your model as a source package by calling the method export_srcpkg() of the Model object. The source package will contain C code representation of the prediction subroutine. ### Continued from the previous code block # Operating system of the target machine platform = 'unix' # C compiler to use to compile prediction code on the target machine toolchain = 'gcc' # Save the source package as a zip archive named mymodel.zip # Later, we'll use this package to produce the library mymodel.so. model.export_srcpkg(platform=platform, toolchain=toolchain, pkgpath='./mymodel.zip', libname='mymodel.so', verbose=True) Note On the value of toolchain Treelite supports only three toolchain configurations (‘msvc’, ‘gcc’, ‘clang’) for which it generates Makefiles. If you are using a compiler other than these three, you will have to write your own Makefile. For now, just set toolchain='gcc' and move on. After calling export_srcpkg(), you should be able to find the zip archive named mymodel.zip inside the current working directory. john.doe@host-machine:/home/john.doe/$ ls . mymodel.zip your_model.model The content of mymodel.zip consists of the header and source files, as well as the Makefile: john.doe@host-machine:/home/john.doe/$ unzip -l mymodel.zip Archive: mymodel.zip Length Date Time Name --------- ---------- ----- ---- 0 11-01-2017 23:11 mymodel/ 167 11-01-2017 23:11 mymodel/Makefile 4831036 11-01-2017 23:11 mymodel/mymodel.c 311 11-01-2017 23:11 mymodel/mymodel.h 109 11-01-2017 23:11 mymodel/recipe.json --------- ------- 4831623 5 files 3. Now you are ready to deploy the model to the target machine. Copy to the target machine the archive mymodel.zip (source package). john.doe@host-machine:/home/john.doe/$ sftp john.doe@target-machine Connected to target-machine. sftp> put mymodel.zip Uploading mymodel.zip to /home/john.doe/mymodel.zip mymodel.zip 100% 410KB 618.2KB/s 00:00 sftp> quit 4. It is time to move to the target machine. On the target machine, extract the archive mymodel.zip: john.doe@host-machine:/home/john.doe/$ ssh john.doe@target-machine Last login: Tue Oct 31 00:43:36 2017 from host-machine john.doe@target-machine:/home/john.doe/$ unzip mymodel.zip Archive: mymodel.zip creating: mymodel/ inflating: mymodel/Makefile inflating: mymodel/mymodel.c inflating: mymodel/mymodel.h inflating: mymodel/recipe.json 5. Build the source package (using GNU Make or NMake). john.doe@target-machine:/home/john.doe/$ cd mymodel john.doe@target-machine:/home/john.doe/mymodel/$ make gcc -c -O3 -o mymodel.o mymodel.c -fPIC -std=c99 -flto -fopenmp gcc -shared -O3 -o mymodel.so mymodel.o -std=c99 -flto -fopenmp john.doe@target-machine:/home/john.doe/mymodel/$ ls Makefile mymodel.c mymodel.so mymodel.h mymodel.o recipe.json Note Parallel compilation with GNU Make If you used parallel_comp option to split the model into multiple source files, you can take advantage of parallel compilation. Simply replace make with make -jN, where N is replaced with the number of workers to launch. Setting N too high may result into memory shortage. Note Using other compilers If you are using a compiler other than gcc, clang, or Microsoft Visual C++, you will need to compose your own Makefile. Open the Makefile and make necessary changes. The prediction library provides the function predict with the following signature: float predict(union Entry* data, int pred_margin); Here, the argument data must be an array of length M, where M is the number of features used in the tree ensemble. The data array stores all the feature values of a single row. To indicate presence or absence of a feature value, we use the union type Entry, which defined as union Entry { int missing; float fvalue; }; For missing values, we set the missing field to -1. For non-missing ones, we set the fvalue field to the feature value. The total number of features is given by the function size_t get_num_feature(void); Let’s look at an example. We’d start by initializing the array inst, a dense aray to hold feature values of a single data row: /* number of features */ const size_t num_feature = get_num_feature(); /* inst: dense vector storing feature values */ union Entry* inst = malloc(sizeof(union Entry) * num_feature); /* clear inst with all missing values */ for (i = 0; i < num_feature; ++i) { inst[i].missing = -1; } Before calling the function predict, the array inst needs to be initialized with missing and present feature values. The following peudocode illustrates the idea: For each data row rid: inst[i].missing == -1 for every i, assuming all features lack values For each feature i for which the data row in fact has a feature value: Set inst[i].fvalue = [feature value], to indicate presence Call predict(inst, 0) and get prediction for the data row rid For each feature i for which the row has a feature value: Set inst[i].missing = -1, to prepare for next row (rid + 1) The task is not too difficult as long as the input data is given as a particular form of sparse matrix: the Compressed Sparse Row format. The sparse matrix consists of three arrays: val stores nonzero entries in row-major order. col_ind stores column indices of the entries in val. The expression col_ind[i] indicates the column index of the i th entry val[i]. row_ptr stores the locations in val that start and end data rows. The i th data row is given by the array slice val[row_ptr[i]:row_ptr[i+1]]. /* nrow : number of data rows */; } } It only remains to create three arrays val, col_ind, and row_ptr. You may want to use a third-pary library here to read from a SVMLight format. For now, we’ll punt the issue of loading the input data and write it out as constants in the program: #include <stdio.h> #include <stdlib.h> #include "mymodel.h" int main(void) { /* 5x13 "sparse" matrix, in CSR format [[ 0. , 0. , 0.68, 0.99, 0. , 0.11, 0. , 0.82, 0. , 0. , 0. , 0. , 0. ], [ 0. , 0. , 0.99, 0. , 0. , 0. , 0. , 0. , 0. , 0.61, 0. , 0. , 0. ], [ 0.02, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ], [ 0. , 0. , 0.36, 0. , 0.82, 0. , 0. , 0.57, 0. , 0. , 0. , 0. , 0.75], [ 0.47, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.45, 0. ]] */ const float val[] = {0.68, 0.99, 0.11, 0.82, 0.99, 0.61, 0.02, 0.36, 0.82, 0.57, 0.75, 0.47, 0.45}; const size_t col_ind[] = {2, 3, 5, 7, 2, 9, 0, 2, 4, 7, 12, 0, 11}; const size_t row_ptr[] = {0, 4, 6, 7, 11, 13}; const size_t nrow = 5; const size_t ncol = 13; /* number of features */ const size_t num_feature = get_num_feature(); /* inst: dense vector storing feature values */ union Entry* inst = malloc(sizeof(union Entry) * num_feature); float* out_pred = malloc(sizeof(float) * nrow); size_t rid, ibegin, iend, i; /* clear inst with all missing */ for (i = 0; i < num_feature; ++i) { inst[i].missing = -1; }; } printf("pred[%zu] = %f\n", rid, out_pred[rid]); } free(inst); free(out_pred); return 0; } Save the program as a .c file and put it in the same directory mymodel/. To link the program against the prediction library mymodel.so, simply run gcc -o myprog myprog.c mymodel.so -I. -std=c99 -lm As long as the program myprog is in the same directory of the prediction library mymodel.so, we’ll be good to go. A sample output: pred[0] = 44.880001 pred[1] = 44.880001 pred[2] = 44.880001 pred[3] = 42.670002 pred[4] = 44.880001
https://treelite.readthedocs.io/en/latest/tutorials/deploy.html
CC-MAIN-2022-21
refinedweb
1,691
57.98
Fixed. Please update Search Criteria Package Details: downgrader 2.0.0-3 Dependencies (0) Required by (0) Sources (1) Latest Comments Spilver commented on 2016-05-31 18:31 Spilver commented on 2016-01-31 07:39 Hi It is temporary problems with ARM datacenter. Also at this moment I prepare new package checking algorithm. Please update downgrader from AUR and you can temporary downgrade in auto mode, without -l option boomshalek commented on 2016-01-30 22:59 I am getting for all packages "not available". Any hoint why ? downgrader php Downgrade package: php Package 'php' not available. Please check package name downgrader firefox Downgrade package: firefox Package 'firefox' not available. Please check package name Spilver commented on 2016-01-09 12:50 Thanks, resolved. Also place downgrading two packages at once in my todo list Spilver commented on 2016-01-08 17:07 pls check your e-mail severach commented on 2016-01-07 21:37 No help. Crashes on gcc-multilib but not php or php-apache. Seems that downgrading two packages at once doesn't work. # downgrader php php-apache Downgrade package: php-apache Spilver commented on 2016-01-04 21:19 Looks like fixed. Please update severach commented on 2016-01-04 10:27 % downgrader gcc-multilib Downgrade package: gcc-multilib [1] 25055 segmentation fault (core dumped) downgrader gcc-multilib Spilver commented on 2015-10-14 17:35 Hi. I fix md5 check in PKGBUILD. I hope now all fine pizzapill commented on 2015-10-14 09:12 Same issue as Lucius. This package is broken. Lucius commented on 2015-09-11 11:47 I am a big fan of Arch Linux, learning a lot. Usually I solve all my problembs with Dr. Google But here I am stuck. Trying to install this paket i get ==> FEHLER: Integritäts-Prüfungen fehlen. ==> FEHLER:Makepkg konnte downgrader nicht erstellen. I am aware that I can manually install the package with makepkg with the appropriate flag. But I fear I can break the system if manually installing packages. At the moment I try to install via yaourt. do I need to add a packagekey ? I cant find any anybody can help ? :) anthraxx commented on 2015-09-01 15:54 please clean up this package repository, it should only contain the PKGBUILD related files and not the whole source-code of the project (*.c and *.h files) backfist commented on 2015-05-28 10:35 This depends on sudo. firekage commented on 2015-02-21 02:43 I tried to use it in comparision to downgrade package and downgrader does not work at all. Tried to put: downgrader -l nvidia and after that i saw error: "memory seg fault". Spilver commented on 2015-02-03 20:50 All done Alister.Hood commented on 2014-12-10 22:38 Please could this rename the source file to prevent problems for people using AUR helpers: source=($pkgname-$pkgver::"") vania commented on 2014-11-10 11:57 Is possible remove "sudo" dependence? Tx digifuzzy commented on 2014-06-29 18:18 problem solved. Thx! Zeben commented on 2014-06-27 17:50 Now all works. Thank you very much! Happy birthday 'downgrader' program :) Spilver commented on 2014-06-27 16:02 Please reinstall package 'libarchive' Spilver commented on 2014-06-27 15:58 Hm..strange.. Is file /usr/include/archive.h exists? No? - reinstall package 'libarchive' Finaly all right ? digifuzzy commented on 2014-06-27 15:29 Yea birthdays! However, latest version gives the message: ==== ==> Starting build()... In file included from main.c:5:0: /usr/include/alpm.h:35:21: fatal error: archive.h: No such file or directory #include <archive.h> ^ compilation terminated. make: *** [main.o] Error 1 ==> ERROR: A failure occurred in build(). Aborting... gcc -g -c main.c -o main.o -std=c99 -Wno-implicit-function-declaration Makefile:5: recipe for target 'main.o' failed ===== I can't seem to locate this header file in pacman sources. Suggestions? Spilver commented on 2014-06-27 14:37 Today downgrader 3 years anniversary! Spilver commented on 2014-06-27 14:35 Please update, it is fixed. But situation with glibc stays unclear. I disable temporary freeing memory, after execution complete. Zeben commented on 2014-06-27 07:19... Update: This error only present on my amd64 system and not in other i686 system. Zeben commented on 2014-06-26 20:51... Spilver commented on 2014-06-26 16:24 It was problem with access to ARM. Now it is OK, but please update - I make some modifications Spilver commented on 2014-06-26 15:50 Please update. ARM is unavailable (I hope temporary), so I disable list function Zeben commented on 2014-06-26 13:48 Downgrader doesn't work with glibc-2.19 [archzeb@devtester-uefi64-arch shm]$ downgrader -l chromium Downgrade package: chromium Segmentation fault [archzeb@devtester-uefi64-arch shm]$ downgrader -l linux Downgrade package: linux Segmentation fault Logs in dmesg: [ 2460.302532] downgrader[4209]: segfault at 0 ip 00007fd8fa8c4617 sp 00007fff05fd1768 error 4 in libc-2.19.so[7fd8fa82e000+1a4000] What needed for fixing this bug? Thanks in advance for replying. nonerd commented on 2014-05-17 06:01 Would be more useful when listing package dates and supporting a --date option. Spilver commented on 2014-03-20 18:18 Thanks for report. Now fixed, please update silverbucket commented on 2014-03-20 13:23 I would like to downgrade my kernel from 3.13 to 3.12, however when I run downgrader there doesn't seem to be that option: $ sudo downgrader -l linux Downgrade package: linux Packages in ARM: 0 >> Please enter package number, [q] to quit If I just try to downgrade it directly: $ sudo downgrader linux Downgrade package: linux Installed version: 3.13.6-1 Downgrading from Cache, to version 3.13.5-1 warning: downgrading package linux (3.13.6-1 => 3.13.5-1) resolving dependencies... looking for inter-conflicts... Packages (1): linux-3.13.5-1 Total Installed Size: 67.12 MiB Net Upgrade Size: 0.00 MiB :: Proceed with installation? [Y/n] n ... I want to go to 3.12, not 3.13.5 Any way to do this? Spilver commented on 2014-01-20 17:53 Temporary I remove config support, my library is slightly unstable Please UPDATE keepitsimpleengr commented on 2014-01-20 17:31 Today's update(1.7.0-3) fails with: gcc -g -c main.c -o main.o -std=c99 -Wno-implicit-function-declaration main.c:8:19: fatal error: cJSON.h: No such file or directory #include "cJSON.h" ^ compilation terminated. Makefile:5: recipe for target 'main.o' failed make: *** [main.o] Error 1 Linux kise-005 3.12.7-2-ARCH #1 SMP PREEMPT Sun Jan 12 13:09:09 CET 2014 x86_64 GNU/Linux Spilver commented on 2014-01-20 16:43 Temporary I remove config support, my library is slightly unstable Please update thefrip commented on 2014-01-20 16:21 The default config file downgrader.conf is not copied through the PKGBUILD (only by the make install which is not triggered). This should be added. bernd_b commented on 2013-12-29 21:48 I didn't use sudo. Sorry if I was unclear about this. I ran downgrader as user root. If I start it as normal user, this happens: ============================= downgrader -l vlc Downgrade package: vlc ... 1: vlc-2.1.2-1 [installed] ... 8: vlc-2.0.8.a-1 [will be installed by default] >> Please enter package number, [q] to quit , [d] to install default package: 8 [sudo] password for bernd_b: Sorry, user bernd_b is not allowed to execute '/usr/bin/pacman -U' as root on amd64-archlinux ============================ So am I supposed to add /usr/bin/pacman to my /etc/sudoers config to use downgrader? I would have bet I did it once simply logged in as user root ... Spilver commented on 2013-12-29 21:31 Please start downgrader without "sudo". And it will work like a sharm! It will ask you a root password, when it will need them bernd_b commented on 2013-12-29 11:58 No. It happens on different pc here in a terminal in X11 as well as in a virtual console - every time logged in as root. Spilver commented on 2013-12-29 09:28 Hi. For me works ok. Are you starting downgrader without "sudo" ? bernd_b commented on 2013-12-28 22:47 Is this working for all but me? ================= downgrader -l linux Downgrade package: linux Segmentation fault (core dumped) ================= Spilver commented on 2013-08-30 11:03 1. Remove install script, I hope all users are updated =) 2. Migrating to new ARM service, in case old is dead Version 1.6.7-4 released. Thanks for your help! Spilver commented on 2013-08-29 17:46 1. Remove install script, I hope all users is updated =) 2. Migrating to new ARM service, in case old is dead Version 1.6.7-4 released. Thanks for your help! Spilver commented on 2013-08-29 17:45 1. Remove install script, I hope all users is updated =) 2. Migrating to new ARM service, in case old is dead Thanks for your help! Spilver commented on 2013-08-26 18:13 Now, you can help me to testing implementation of new ARM service - please build downgrader from github.com and help me test something. Thanks for your support Spilver commented on 2013-08-25 15:49 Hi. @jthurner: Changed. @all: A.R.M. is dead, at this moment work with ARM is incorrect. In a few days/weeks I will implement new fork of ARM: after testing. Details: Spilver commented on 2013-08-25 15:34 Hi Folks. A.R.M. is dead, at this moment work with ARM is incorrect. In a few days/weeks I will implement new fork of ARM: after testing. Details: jthurner commented on 2013-08-13 22:20 Could you add something along the lines of "..your pacman.log has been backed up to /var/log/pacman.log.old" to the install script? I was a bit worried on "Now, for correct work I need to clear your pacman log file". botika commented on 2013-07-18 15:16 namcap out: downgrader E: ELF file ('usr/bin/downgrader') found in an 'any' package. Spilver commented on 2013-06-16 09:04 done, thanks! Anonymous comment on 2013-06-16 08:16 Ok, you implemented it, but please don't do it that way! If you need root access with a package, don't use sudo in the package function, but use an install script. This is called by root at the install process. No sudo required. Spilver commented on 2013-06-15 16:51 It is good idea. PKGBUILD updated mar04 commented on 2013-06-15 16:40 Yes, now it works, but removing pacman.log is IMHO unacceptable. If you absolutely have to do this, then move it to pacman.log.old and display a warning. Spilver commented on 2013-06-15 15:55 Please update. Now it works On update installer will remove your pacman log file - it is needed, because after pacman update to version 4.1.0-2 syntax of log file is changed and can`t processing by downgrader. Thanks for your help Spilver commented on 2013-06-15 13:59 Dear Friends, Please chek your e-mails with my questions. mar04 commented on 2013-06-15 10:09 Same here, segmentation fault stephanbeta commented on 2013-05-24 17:11 It happens to me too, with virtually any package (linux, amarok, etc...): $ downgrader -l linux Downgrade package: linux Segmentation fault (core dumped) Spilver commented on 2013-05-22 15:31 Please send me name of downgraded package or console output sledge commented on 2013-05-22 10:02 It gives segmentations fault (newest updates as of today) igndenok commented on 2013-04-12 23:09 Thanks for the update, it's works now. Spilver commented on 2013-04-12 15:25 It is updated, please rebuild Thanks for your help igndenok commented on 2013-04-12 10:51 Doesn't compatible with libalpm.so.8 need libalpm.so.7 to works. Can you update it? Alister.Hood commented on 2013-02-09 23:04 It would be good if this program had a versioned source package, to prevent problems like this: ==> Making package: downgrader 1.4.3-1 (Sun Feb 10 12:09:59 NZDT 2013) ==> Checking runtime dependencies... ==> Checking buildtime dependencies... ==> Retrieving Sources... -> Found downgrader.tar.xz ==> Validating source files with md5sums... downgrader.tar.xz ... FAILED ==> ERROR: One or more files did not pass the validity check! ==> ERROR: Makepkg was unable to build downgrader. ==> Restart building downgrader ? [y/N] Spilver commented on 2012-12-05 21:03 Please upgrade to new version Spilver commented on 2012-12-04 20:12 I need some time for repair a program, in case of aur structure is changes cobalt commented on 2012-11-20 09:23 It gives Segmentation fault cobalt commented on 2012-11-20 09:08 It gives Segmentation fault Spilver commented on 2012-07-18 16:01 sudo installed on most of systems..updated psychoticmeow commented on 2012-07-18 01:40 Package needs to be updated as it has a runtime dependency on sudo. Spilver commented on 2012-06-26 15:40 Dear Fandekasp, send me please compressed your file /var/log/pacman.log by e-mail, I will check it. For me all works fine. Thanks in advance Fandekasp commented on 2012-06-26 08:09 Just installed downgrader 1.3.2-1, then tried to get the list of available packages versions for downgrade package openssl, and get: [root@arch ]# downgrader -l openssl Downgrade package: openssl Segmentation fault Spilver commented on 2012-04-09 18:36 Please rebuild, fixed zwastik commented on 2012-04-09 17:45 ==> Making package: downgrader 1.3.0-1 (lun abr 9 14:44:45 CLST 2012) ==> Checking runtime dependencies... ==> Checking buildtime dependencies... ==> Retrieving Sources... -> Found downgrader.tar.xz ==> Validating source files with md5sums... downgrader.tar.xz ... Passed ==> Extracting Sources... -> Extracting downgrader.tar.xz with bsdtar ==> Starting build()... g++ -g -c main.cc -o main.o main.cc: In function ‘int main(int, char**)’: main.cc:20:47: error: ‘getopt’ was not declared in this scope main.cc:24:15: error: ‘optarg’ was not declared in this scope main.cc:29:15: error: ‘optarg’ was not declared in this scope main.cc:34:27: error: ‘optarg’ was not declared in this scope make: *** [main.o] Error 1 ==> ERROR: A failure occurred in build(). Aborting... ==> ERROR: Makepkg was unable to build downgrader. 0rAX0 commented on 2012-02-22 07:46 Thanks. :) Spilver commented on 2012-02-21 21:21 Migrating from sockets to curl+json complete! Spilver commented on 2012-02-21 18:12 Please update, now it fixed. I hope all will be fine now 0rAX0 commented on 2012-02-21 17:51 Done. Spilver commented on 2012-02-21 17:42 yes, please compress this file and send it to me 0rAX0 commented on 2012-02-21 17:35 I'm trying to downgrade Wine. You want all the file? Spilver commented on 2012-02-21 17:32 Hi Thanks for segfault report Please let me know, what package you try to downgrade ? Also send me please your file /var/log/pacman.log Thanks in advance 0rAX0 commented on 2012-02-21 17:20 Still segfaulting! What's the problem? Spilver commented on 2012-01-18 18:11 Adopted for new pacman and libalpm versions Spilver commented on 2011-11-26 15:58 Thanks for help. Fixed! Spilver commented on 2011-11-23 07:10 Dear canuckkat, please check your e-mail. There are my request canuckkat commented on 2011-11-22 21:37 It segfaults like this: [katrina@aerynsun ~]$ downgrader nettools Segmentation fault And my pacman.log just says: [2011-11-22 16:35] Running 'pacman-color -U /tmp/yaourt-tmp-katrina/PKGDEST.zTn/downgrader-1.1.2-2-any.pkg.tar.xz' [2011-11-22 16:35] upgraded downgrader (20111110-1 -> 1.1.2-2) Spilver commented on 2011-11-12 19:19 for available Spilver commented on 2011-11-11 08:58 Please send me a packge name, where segfault shows, and, if possible, your /var/log/pacman.log file. Thanks in advance canuckkat commented on 2011-11-10 19:20 Still segment faults on query. Spilver commented on 2011-11-03 17:16 Fixed bug with wrong ARM response Spilver commented on 2011-11-02 16:44 Fixed 2 problems: Segfault when reading AUR Segfault when reading long strings from Pacman logs Spilver commented on 2011-10-26 19:31 Fully rewrite in C++ complete! Happy using Spilver commented on 2011-09-10 19:56 Some improvements and changes also, done. artemklevtsov commented on 2011-08-01 21:51 You should move gtt to makedepends array. Spilver commented on 2011-07-27 16:02 Thanks, fixed! Anonymous comment on 2011-07-26 21:07 need to add intltool as a build requirement. Spilver commented on 2011-07-01 19:26 First libalpm integration Spilver commented on 2011-06-28 20:58 Huge internal system update is complete Spilver commented on 2011-06-27 18:33 Powerful packages downgrader. Written especially for Archlinux, in C. Git version. Initial release Spilver commented on 2011-06-27 18:32 Powerful downgrade packages. Written especially for Archlinux in C. Git version. Initial release
https://aur.archlinux.org/packages/downgrader/?ID=50246&comments=all
CC-MAIN-2016-36
refinedweb
2,913
68.16
", unless a "name" argument is specified below. A well-written state function will follow these steps: Note This is an extremely simplified example. Feel free to browse the source code for Salt's state modules to see other examples. Set up the return dictionary and perform any necessary input validation (type checking, looking for use of mutually-exclusive arguments, etc.). ret = {'name': name, 'result': False, 'changes': {}, 'comment': ''} if foo and bar: ret['comment'] = 'Only one of foo and bar is permitted' return ret Check if changes need to be made. This is best done with an information-gathering function in an accompanying execution module. The state should be able to use the return from this function to tell whether or not the minion is already in the desired state. result = __salt__['modname.check'](name) If step 2 found that the minion is already in the desired state, then exit immediately with a True result and without making any changes. if result: ret['result'] = True ret['comment'] = '{0} is already installed'.format(name) return ret If step 2 found that changes do need to be made, then check to see if the state was being run in test mode (i.e. with test=True). If so, then exit with a None result, a relevant comment, and (if possible) a changes entry describing what changes would be made. if __opts__['test']: ret['result'] = None ret['comment'] = '{0} would be installed'.format(name) ret['changes'] = result return ret Make the desired changes. This should again be done using a function from an accompanying execution module. If the result of that function is enough to tell you whether or not an error occurred, then you can exit with a False result and a relevant comment to explain what happened. result = __salt__['modname.install'](name) Perform the same check from step 2 again to confirm whether or not the minion is in the desired state. Just as in step 2, this function should be able to tell you by its return data whether or not changes need to be made. ret['changes'] = __salt__['modname.check'](name) As you can see here, we are setting the changes key in the return dictionary to the result of the modname.check function (just as we did in step 4). The assumption here is that the information-gathering function will return a dictionary explaining what changes need to be made. This may or may not fit your use case. Set the return data and return! if ret['changes']: ret['comment'] = '{0} failed to install'.format(name) else: ret['result'] = True ret['comment'] = '{0} was installed'.format(name) return ret Before the state module can be used, it must be distributed to minions. This can be done by placing them into salt://_states/. They can then be distributed manually to minions by running saltutil.sync_states or saltutil.sync_all. Alternatively, when running a highstate custom types will automatically be synced. NOTE: Writing state modules with hyphens in the filename will cause issues with !pyobjects routines. Best practice to stick to underscores. Any custom states which have been synced to a minion, that are named the same as one of Salt's default set of states, will take the place of the default state with the same name. Note that a state module's name defaults to one based on its filename (i.e. foo.py becomes state module foo), but that its name can be overridden by using a __virtual__ function. As with Execution Modules, State Modules can also make use of the __salt__ and __grains__ data. See cross calling execution modules.. All of the Salt state modules are available to each other and state modules can call functions available in other state modules. The variable __states__ is packed into the modules after they are loaded into the Salt minion. The __states__ variable is a Python dictionary containing all of the state modules. Dictionary keys are strings representing the names of the modules and the values are the functions themselves. Salt state modules can be cross-called by accessing the value in the __states__ dict: ret = __states__['file.managed'](name='/tmp/myfile', source='salt://myfile') This code will call the managed function in the file state module and pass the arguments name and source to it. A State Module must return a dict containing the following keys/values: name: The same value passed to the state as "name". changes: A dict describing the changes made. Each thing changed should be a key, with its value being another dict with keys called "old" and "new" containing the old/new values. For example, the pkg state's changes dict has one key for each package changed, with the "old" and "new" keys in its sub-dict containing the old and new versions of the package. For example, the final changes dictionary for this scenario would look something like this: ret['changes'].update({'my_pkg_name': {'old': '', 'new': 'my_pkg_name-1.0'}}) result: A tristate value. True if the action was successful, False if it was not, or None if the state was run in test mode, test=True, and changes would have been made if the state was not run in test mode. Note Test mode does not predict if the changes will be successful or not, and hence the result for pending changes is usually None. However, if a state is going to fail and this can be determined in test mode without applying the change, False can be returned. comment: A list of strings or a single string summarizing the result. Note that support for lists of strings is available as of Salt 2018.3.0. Lists of strings will be joined with newlines to form the final comment; this is useful to allow multiple comments from subparts of a state. Prefer to keep line lengths short (use multiple lines as needed), and end with punctuation (e.g. a period) to delimit multiple comments. Note States should not return data which cannot be serialized such as frozensets.. Note Be sure to refer to the result table listed above and displaying any possible changes when writing support for test. Looking for changes in a state is essential to test=true functionality. If a state is predicted to have no changes when test=true (or test: true in a config file) is used, then the result of the final state should not be None.. You can call the logger from custom modules to write messages to the minion logs. The following code snippet demonstrates writing log messages: import logging log = logging.getLogger(__name__) log.info('Here is Some Information') log.warning('You Should Not Do That') log.error('It Is Busted') A state module author should always assume that strings fed to the module have already decoded from strings into Unicode. In Python 2, these will be of type 'Unicode' and in Python 3 they will be of type str. Calling from a state to other Salt sub-systems, such as execution modules__').
https://docs.saltstack.com/en/develop/ref/states/writing.html
CC-MAIN-2019-09
refinedweb
1,167
72.16
Hi! > Peter Funk wrote: > > Why should modules be moved into packages? I don't get it. > Fredrik Lundh: > fwiw, neither do I... Pheeewww... And I thought I'am the only one! ;-) > I'm not so sure that Python really needs a simple reorganization > of the existing set of standard library modules. just moving the > modules around won't solve the real problems with the 1.5.2 std > library... Right. I propose to leave the namespace flat. I like to argue with Brad J. Cox ---the author of the book "Object Oriented Programming - An Evolutionary Approach" Addison Wesley, 1987--- who proposes the idea of what he calls a "Software-IC": He looks closely to design process of electronic engineers which ussually deal with large data books with prefabricated components. There are often hundreds of them in such a databook and most of them have terse and not very mnemonic names. But the engineers using them all day *know* after a short while that a 7400 chip is a TTL-chip containing 4 NAND gates. Nearly the same holds true for software engineers using Software-IC like 're' or 'struct' as their daily building blocks. A software engineer who is already familar with his/her building blocks has absolutely no advantage from a deeply nested namespace. Now for something completely different: Fredrik Lundh about the library documentation: > here's one proposal: > Whether 'md5', 'getpass' and 'traceback' fit into a category 'Commonly Used Modules' is ....ummmm.... at least a bit questionable. But we should really focus the discussion on the structure of the documentation. Since many standard library modules belong into several logical catagories at once, a true tree structured organization is simply not sufficient to describe everything. So it is important to set up pointers between related functionality. For example 'string.replace' is somewhat related to 're.sub' or 'getpass' is related to 'crypt', however 'crypt' is related to 'md5' and so on. Regards, Peter -- Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260 office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)
https://mail.python.org/pipermail/python-dev/2000-March/002908.html
CC-MAIN-2016-44
refinedweb
349
58.08
Yu Kobayashi's HotRuby is a little different from other Ruby implementations, though. First off, it doesn't come with a Ruby parser - instead it executes opcodes of the Ruby 1.9 VM. Ruby 1.9, like Rubinius, compiles Ruby source code to opcodes which are then executed by it's opcode interpreter. These opcodes can also be stored on disk. HotRuby is also remarkable simply because it's written in JavaScript. In fact the opcode interpreter, runtime, and implementations of a few miscellaneous classes fit inside a ~40 KB JavaScript file: HotRuby.js (or look at HotRuby's Google Code repository. HotRuby is written in JavaScript which also works in Flash, which is used for a few nice demos, eg. the Pinball Demo. The logic for these demos is written in Ruby, albeit it's important to mention that a lot of the functionality of the demos comes from libraries in Flash. However, this shows the close integration of the Ruby code with the underlying JavaScript platform. Using JavaScript classes and functions is very simple, both from an implementation and usage point of view. Eg. importing JavaScript classes in Ruby (from the PinBall Demo source): $n = $nativeIn this case, $n.import "Box2D.Dynamics.*" $n.import "Box2D.Collision.*" $n.import "Box2D.Collision.Shapes.*" $nativeis a global variable of type NativeEnvironment - as a matter of fact, it doesn't contain anything. In HotRuby, accessing members of objects of this type provides functionality. Eg. $n.import "Box2D.Dynamics.*"loads the given JavaScript classes. These can then be accessed the same way, ie. via the NativeEnvironment (in these samples, also from the PinBall Demo, it's stored in the variable $n): def add_sprite @sprite = $n.Sprite.new $n.Main.m_sprite = @sprite $n._root.addChild @sprite @input = $n.Input.new @sprite end Another way to see HotRuby in action is the Do It Yourself page. This allows to type in Ruby code and run it with HotRuby. The way this works is that the Ruby source is sent to a server side service that compiles the source to opcodes which are then returned and executed by HotRuby in the browser. One issue with HotRuby, at the moment, becomes obvious after trying to run a few bits of Ruby code, or simply by looking at the list of implemented classes/methods: the library support of HotRuby is minimal (actually, only a handful of methods of crucial classes are implemented). As a matter of fact, it's easy to see the implemented base classes, as their implementation can also be seen at the bottom of the HotRuby.js source file. This is, however, a problem that might not be that hard to solve anymore - at least when it comes to functionality (performance is a different question). The Rubinius project is hard at work reimplementing a lot of basic Ruby classes in Ruby, even basic Ruby library functionality which is usually implemented in C (for CRuby), Java (JRuby) or C# (IronRuby). The idea to implement as much of a language in the language is often referred to as "Turtles All The Way Down" (after a popular blog post by Avi Bryant, although the expression is older). Obviously, any parts interfacing with the outside (I/O, Operating System integration) would have to be ported and are specific to the underlying system. Also, unless the underlying runtime can optimize it, some base classes might need to be adapted to the underlying platform to allow for acceptable performance. The Turtles All the Way Down approach has been used by many systems for a long time, one example being the Squeak Smalltalk, which is very portable. This was again demonstrated, when Dan Ingalls managed to get a Squeak image to run on the JVM (includes a link to a Java Web Start-able version). Ruby libraries in pure Ruby also open the possibility of a standalone HotRuby in the future. One of the missing pieces is a full Ruby parser - but this is being created by Ryan Davis' in the form of 'ruby_parser', a project to write a Ruby parser in Ruby. Together with a Ruby based compiler that takes a Ruby AST (ruby_parser produces them in ParseTree notation) and returns Ruby 1.9 opcodes, HotRuby could then work stand alone and run Ruby sources directly. (Both parser and compiler would have to be pre-compiled into opcodes, which HotRuby would then load - as soon as this happens the first time, it would be self hosting). While HotRuby might not be able to run Rails yet - it allows to script objects accessible to JavaScript runtimes, such the ones found in browsers or Flash. It also makes it easy (it's only a 40 KB file) to take a look at the internals of a VM capable of running Ruby 1.9 opcodes. Community comments
https://www.infoq.com/news/2008/03/hotruby-ruby-yarv-in-javascript/
CC-MAIN-2021-31
refinedweb
803
60.55
Google Groups Re: [pedantic-web] The OWL Ontology URI Richard Cyganiak May 6, 2010 2:51 PM Posted in group: Pedantic Web Group Niklas, please go to and look at item #5. I removed semantic-web from the cc list. On 5 May 2010, at 19:54, Niklas Lindström wrote: > how come the OWL Ontology itself is defined as > < >, and not > < >? The latter (hash URI, ending in "#") > is linked to, via rdfs:isDefinedBy, for all classes and properties, > but the former is the resource described as the owl:Ontology. > > Is this really intentional? It seems to conflate the document and the > ontology. And ontologies aren't considered to be information > resources, are they? I think that ontologies are information resources. An ontology is a document, written down in OWL or another ontology language. > Neither classes and properties are, That may or may not be true, but it's besides the point. An ontology defines (and thereby describes) classes and properties. Information resources can describe any kind of thing, including classes and properties. > nor is the > thing linked to via isDefinedBy (itself not described further in the > document). I agree that this is a problem; the target of the isDefinedBy links should either be further described in the document, or it should be a document itself. (rdfs:isDefinedBy is a subproperty of rdfs:seeAlso, so it certainly makes sense that the target would be a document.) > Also notice that neither RDF nor RDFS are described like this -- they > both use the hash URI as identifier for the Ontology (also linked to > with rdfs:isDefinedBy from their classes/properties). This is an interesting question, so I did a little study using the collection of all namespace URIs from . The goal was to find out how owl:Ontology and rdfs:isDefinedBy are used in the wild. 346 total namespace URIs 238 are hash URIs 192 could be dereferenced and parsed as RDF (using any23) 105 contain rdfs:isDefinedBy or owl:Ontology Of the 102 unique resources typed as owl:Ontology, 80.4% are hash-less URIs, the other 19.6% end in a hash. Of the 97 unique resources that are targets of rdfs:isDefinedBy statements, 51.5% are hash URIs *with* fragment (most of them pointing to anchors within HTML documents), 26.8% are hash-less URIs, 21.6% are URIs ending in a hash. Finally I looked at those vocabularies where the namespace URI, owl:Ontology resource, and rdfs:isDefinedBy are either identical or differ only in the presence/absence of the trailing hash. There were 40 of those documents. Results: 47.5% - Ontology and isDefinedBy target do not end in hash 27.5% - Ontology does not end in hash, isDefinedBy ends in hash 22.5% - Neither ontology nor isDefinedBy target end in hash 2.5% - Ontology ends in hash, isDefinedBy target does not The detailed lists for each of these groups are attached at the end of this message. I conclude that pedants should not use trailing hashes -- neither for the owl:Ontology typed resource, nor for the target of rdfs:isDefinedBy triples. This is the popular thing anyway, it means that only 19.6% of owl:Ontology URIs and 21.6% of rdfs:isDefinedBy targets needs fixing, and it embraces the consistent view that classes and properties are defined by ontologies, which are documents. A number of high-profile vocabularies do not use this approach: OWL, RDF, RDFS, DOAP, SIOC. But all of these predate the W3C TAG's httpRange-14 decision, so they were designed at a time when the interactions between RDF and URIs and HTTP were not yet settled. So I'd say let's not copy their archaic style, but let's get them to fix it. Any disagreement? Bring it on ;-) Richard Ontology and isDefinedBy have no hash Ontology has no hash, isDefinedBy has hash Ontology and isDefinedBy have hash Ontology has hash, isDefinedBy has no hash > > Best regards, > Niklas Previous post Next post
https://groups.google.com/forum/?fromgroups=&_escaped_fragment_=msg/pedantic-web/RZ6kxlAVIy8/8r_JE4gVXFAJ
CC-MAIN-2016-40
refinedweb
660
55.44
Hi all, I can't get Quartz to work. At first, I've created a quartzservice.sar as described in. The Service starts and is bound to JNDI. But if I create a CronTrigger and it's time to start this trigger, it complaints about ClassNotFoundException. I think, the service runs in a different JVM than my EAR. If I unpack the quartz-service into my EAR, I get a NullPointerException while my service class is looking up the quartz service. Here is my application.xml (fragments) ... <module> <java>quartz-1.6.3.jar</java> </module> <module> <ejb>quartz-jboss-1.6.3.jar</ejb> </module> ... @Service (objectName = "jboss:custom=MyOwnService", name = "MyOwnService") public class MyOwnService implements MyOwnManagement { @Resource(mappedName="Quartz") private Scheduler sched;
https://developer.jboss.org/thread/65088
CC-MAIN-2018-39
refinedweb
124
52.76
Can you use a managed usercontrol in an Office document in the same way that you can use a native ActiveX control – all without using VSTO? Some time ago, I posted about how to use native ActiveX controls within a doc-level VSTO solution, by wrapping them in managed usercontrols. A reader (Casey) asked the question, “what about going the other way?” The answer is “maybe”. Or, to be more precise, “up to a point, but the technique is unsupported and probably won't work in most scenarios”. If you still want to play with this a bit, here’s how to start… First, create a managed usercontrol project – either a Windows Forms class library or control library project. Use the usercontrol designer to design your custom usercontrol the way you want it (using any standard controls you like). Second, in the project properties, on the Build tab, select the “Register for COM interop” option. This will register any COM-visible classes in the project when you build it, and will also build a COM typelib and register it. Third, attribute your usercontrol class to make it COM-visible. Also, specify a GUID, to avoid getting a fresh one on each build; and specify that you want the compiler to generate a dispatch class interface for the control. [ComVisible(true)] [Guid("0F2A2E9D-C79E-4b1b-9AF7-2D1487F29041")] [ClassInterface(ClassInterfaceType.AutoDispatch)] public partial class ManagedOcx : UserControl { Next, we need to provide some additional registry entries: Control, MiscStatus, TypeLib and Version. You can do this with a .REG script, but it’s generally better to write functions that will be called on registration/unregistration (attributed with ComRegisterFunction/ComUnregisterFunction). Control is an empty subkey. TypeLib is mapped to the GUID of the TypeLib (this is the assembly-level GUID in the assemblyinfo.cs). Version is the major and minor version numbers from the assembly version. The only mildly interesting subkey is MiscStatus. This needs to be set to a value composed of the (bitwise) values in the OLEMISC enumeration, documented here. To make this enum available, add a reference to Microsoft.VisualStudio.OLE.Interop (and a suitable ‘using’ statement for the namespace). [ComRegisterFunction] static void ComRegister(Type t) string keyName = @"CLSID\" + t.GUID.ToString("B"); using (RegistryKey key = Registry.ClassesRoot.OpenSubKey(keyName, true)) { key.CreateSubKey("Control").Close(); using (RegistryKey subkey = key.CreateSubKey("MiscStatus")) { // 131456 decimal == 0x20180. long val = (long) ( OLEMISC.OLEMISC_INSIDEOUT | OLEMISC.OLEMISC_ACTIVATEWHENVISIBLE | OLEMISC.OLEMISC_SETCLIENTSITEFIRST); subkey.SetValue("", val); }); subkey.SetValue("", version); } } [ComUnregisterFunction] static void ComUnregister(Type t) // Delete the entire CLSID\{clsid} subtree for this component. Registry.ClassesRoot.DeleteSubKeyTree(keyName); Build the project, then run Excel. From the Developer tab, go to the Controls group, and click the Insert button. This drops down a little gallery of available controls. The one in the bottom right-hand corner pops up the More Controls dialog, which offers a list of all suitably-registered ActiveX controls. You should find your custom control in this list. Note: this seems to work OK for Excel (with the very limited testing I've done), partly works with PowerPoint, but fails miserably with Word. Possibly, some more of the OLEMISC values might improve this; possibly there are some messages we need to hook; possibly there are some more interfaces we need to implement – I haven’t tried. Of course, the VSTO runtime has a nice set of hosting controls that enable this behavior for Excel and Word, but these are not usable outside the context of a VSTO solution. As I said, this is an entertaining avenue to explore, but it remains unsupported. The fact that I’ve only barely got it to work in a very limited way should tell you that this is probably not a technique you want to use in any serious way. PingBack from Hi andreww, I failed to found the customed control in the excel. Following is the steps I have done: 1. run VS 2008 in administrator permission on vista OS 2. copy your code in vs editor, and check the "register for com interop" 3. Follow your guid to find the control in the more control dialog. and nothing found. 4. search the regedit for the GUID, Nothing relative found I don't know what's the things goes wrong.Will you tell me what else I should set. any help is greate appreciated. jackson - what happens if you create a simple class library project, with one class, with the same ComVisible and Guid attributes, and with "Register for COM interop' checked? Do you see the ProgId and GUID in the registry after building the project? Hi Andrew, I tried this very long time ago and I stopped at one point: The Control crashed when the Resize event of the parent window was raised. The Control coordinates/structure seem to be different between native ActiveX controls and managed Controls. Is there a solution for this problem? Maybe to handle the resize event manualy. Greets - Helmut. Helmut - Thanks for your comment. As it happens, the simple control I built using this technique (with a MonthCalendar and a Button) doesn't crash when the parent's Resize is raised (for Excel and Word) - but I have no doubt that this might well be a problem in some scenarios. As I mentioned at the beginning of my post, this is really just an exercise for curiosity's sake - you'd have to do a huge amount of work to make this approach even half-way stable. Andrew, thanks for responding to my question! We looked into it and ultimately decided it would be easier for us to convert to VSTO. I'm glad your post helps validate our decision. Thanks again! Thank you for the GGGGreat tip! I have an additional question for this control. How to get the parent object(worksheet)? It's so difficult problem to me. please tell me how to do. thanks~ inhyuk - I don't know of any easy way to get to the worksheet from the control. I suspect you'd have to do something like passing the control's Handle to some Win32 function like GetParent. Perhaps use FindWindow to find the current host app process, and then iterate child windows to match up the one hosting the worksheet. As I said earlier, this post was really for entertainment value - I don't see this as a recommended technique for real use. Thanks for your answer. I've already tried that, but I can't get the worksheet. So I'm finding a solution, but I'm pessimistic. Thanks again! If you would like to receive an email when updates are made to this post, please register here RSS Trademarks | Privacy Statement
http://blogs.msdn.com/andreww/archive/2008/11/24/using-managed-controls-as-activex-controls.aspx
crawl-002
refinedweb
1,113
65.62
Patent application title: DYNAMIC COMPILING AND LOADING AT RUNTIME Inventors: James P. Schneider (Raleigh, NC, US) IPC8 Class: AG06F945FI USPC Class: 717148 Class name: Compiling code including intermediate code just-in-time compiling or dynamic compiling (e.g., compiling java bytecode on a virtual machine) Publication date: 2010-08-26 Patent application number: 20100218174 Abstract:. Claims: 1. A computer implemented instructions;loading the compiled code into memory;retrieving the new instructions; andexecuting the new instructions. 2. The method of claim 1, wherein the evaluation function loads the compiler during runtime of the program. 3. The method of claim 1, wherein the evaluation function has a scope that defines variables, and wherein the function is executed in the scope to provide the function with access to the variables. 4. The method of claim 1, wherein the evaluation function is a subroutine of a dynamically linked library. 5. The method of claim 1, wherein the compiler is a subroutine of a dynamically linked library, and wherein loading the compiler includes dynamically loading the compiler. 6. The method of claim 1, wherein the compiler is one of a C compiler, a C++ compiler or a Java compiler. 7. The method of claim 1, wherein the compiled code includes one of machine code, byte code, p-code, or threaded code. 8. A computer-readable storage medium that, when executed by a machine, causes the machine to perform a new instructions;loading the compiled code into memory;retrieving the new instructions; andexecuting the new instructions. 9. The computer-readable storage medium of claim 8, wherein the evaluation function loads the compiler during runtime of the program. 10. The computer-readable storage medium of claim 9, wherein the evaluation function has a scope that defines variables, and wherein the function is executed in the scope to provide the function with access to the variables. 11. The computer-readable storage medium of claim 8, wherein the evaluation function is a subroutine of a dynamically linked library. 12. The computer-readable storage medium of claim 8, wherein the compiler is a subroutine of a dynamically linked library, and wherein loading the compiler includes dynamically loading the compiler. 13. The computer-readable storage medium of claim 8, wherein the compiler is one of a C compiler, a C++ compiler or a Java compiler. 14. The computer-readable storage medium of claim 8, wherein the compiled code includes one of machine code, byte code, p-code, or threaded code. 15. A computing apparatus, comprising:a memory including instructions for an evaluation function; anda processor, connected with the memory, to execute the instructions for the evaluation function, wherein the evaluation function causes the processor to:load a compiler;cause the compiler to compile source code for a program that is in a runtime state, wherein the source code includes new instructions that are uncompiled, and wherein compiling the source code generates compiled code that includes the new instructions;load the compiled code into memory;retrieve the new instructions; andexecute the new instructions. 16. The computing apparatus of claim 15, wherein the memory includes a dynamically linked library, and wherein the evaluation function is a subroutine of the dynamically linked library. 17. The computing apparatus of claim 15, wherein the memory includes a dynamically linked library, wherein the compiler is a subroutine of the dynamically linked library, and wherein loading the compiler includes dynamically loading the compiler. 18. The computing apparatus of claim 15, wherein the compiler is one of a C compiler, a C++ compiler or a Java compiler. 19. The computing apparatus of claim 15, wherein the compiled code is one of machine code, byte code, p-code, or threaded code. 20. A computing apparatus, comprising:a memory including instructions for a compiler and a preprocessor; anda processor, connected with the memory, to execute the instructions for at least one of the compiler or the preprocessor;wherein the compiler is configured to:compile source code provided by an evaluation function to generate compiled code, the compiled code including new instructions that were included in the source code; andpass the compiled code to the evaluation function;wherein at least one of the compiler or the preprocessor is configured to:generate a frame descriptor that contains information that identifies a local variable in an enclosing scope of the evaluation function; andrecord information that identifies how to find at least one of a static variable or a global variable in the enclosing scope of the evaluation function;wherein the new instructions have access to the local variable via the frame descriptor and to the static variable and the global variable via the recorded information. 21. The computing apparatus of claim 20, wherein the evaluation function is a component of a program, the evaluation function to call the compiler and pass the source code to the compiler during runtime of the program. 22. The computing apparatus of claim 20, wherein the memory includes a dynamically linked library, and wherein at least one of the compiler or the preprocessor is a subroutine of the dynamically linked library. 23. The computing apparatus of claim 20, wherein the compiler and the preprocessor are designed for at least one of a C, C++ or Java programming language. 24. The computing apparatus of claim 20, wherein the compiled code is one of machine code, byte code, p-code, or threaded code. Description: TECHNICAL FIELD [0001]Embodiments of the present invention relate to dynamic loading, and more specifically to dynamically compiling and loading source code at run time. BACKGROUND [0002]A programming language is an artificial language designed to express computations that can be performed by a machine such as a computer. All programming languages translate code from human readable form to a non-human readable form. There are loosely two common classes of programming languages: interpreted languages and compiled languages. In a compiled language, source code is translated directly into a machine readable code (code that contains instructions that can be executed by a particular physical or virtual machine) once. Examples of compiled languages include C, C++ and Java. In an interpreted language, source code is translated into an intermediate form that is later further translated to machine readable code each time the code is to be run. Examples of interpreted languages include Perl and command shells. [0003]The distinction between compilers and interpreters is blurred by languages that translate human readable source code into machine readable form that's not actually machine code (e.g., Java). An important distinction between compiled languages such as Java and interpreted languages (e.g., Perl) is based on when translation occurs. If translation occurs once for a given piece of code (as in Java), then the language is a compiled language. If the translation occurs every time the code is run (e.g., every time a new process that executes the particular code starts), then the language is an interpreted language (e.g., as in Perl). [0004]Some interpreted languages include a function called the evaluate (`eval`) function, which is a mechanism for executing instructions that have not yet been transformed into an intermediate form. However, conventional compiled languages do not have an ability to convert programming statements into an executable form at runtime. BRIEF DESCRIPTION OF THE DRAWINGS [0005]The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which: [0006]FIG. 1 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system, in accordance with one embodiment of the present invention; [0007]FIG. 2A illustrates software including instructions that may be executed by the computer system of FIG. 1 to perform actions in accordance with embodiments of the present invention; [0008]FIG. 2B illustrates additional software including instructions that may be executed by the computer system of FIG. 1 to perform actions in accordance with embodiments of the present invention; [0009]FIG. 3 illustrates a programming tool, in accordance with one embodiment of the present invention; and [0010]FIG. 4 illustrates a flow diagram of one embodiment for a method of adding functionality to a compiled process. DETAILED DESCRIPTION [0011]Described herein is a method and apparatus for dynamically loading uncompiled code or files. In one embodiment,. The compiled code may be machine code, byte code, p-code, or subroutine-threaded code. [001213 "calling", "loading", "compiling", "executing", "generating", "performing",. [00. [001718]FIG. 1 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 10019]The exemplary computer system 100 includes a processor 102, a main memory 104 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 106 (e.g., flash memory, static random access memory (SRAM), etc.), and a secondary memory 118 (e.g., a data storage device), which communicate with each other via a bus 130. [0020]Processor 102 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processor 102 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processor 102 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. Processor 102 is configured to execute the processing logic 126 for performing the operations and steps discussed herein. [0021]The computer system 100 may further include a network interface device 108. The computer system 100 also may include a video display unit 110 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 112 (e.g., a keyboard), a cursor control device 114 (e.g., a mouse), and a signal generation device 116 (e.g., a speaker). [0022]The secondary memory 118 may include a machine-readable storage medium (or more specifically a computer-readable storage medium) 131 on which is stored one or more sets of instructions (e.g., software 122) embodying any one or more of the methodologies or functions described herein. The software 122 may also reside, completely or at least partially, within the main memory 104 and/or within the processing device 102 during execution thereof by the computer system 100, the main memory 104 and the processing device 102 also constituting machine-readable storage media. The software 122 may further be transmitted or received over a network 120 via the network interface device 108. [0023]The machine-readable storage medium 131 may also be used to store a process 132, an uncompiled file (not shown) and/or a compiler 125 and/or a software library (not shown) containing methods that call, for example, the compiler 125. Embodiments of the process 132 and compiler 125 are described below with reference to FIGS. 2A-2B. [0024]Returning to FIG. 1, while the machine-readable storage medium 13125]FIGS. 2A-2B illustrate software including instructions that may be executed by the computer system 100 of FIG. 1 to perform actions in accordance with embodiments of the present invention. FIG. 2A includes a process 210, source code 215 and a compiler 220. [0026]Source code 215 is instructions that, once compiled, can be executed by a machine. The source code 215 may be written in any compiled language such as C, C++, BASIC, Fortran, Pascal, Java and so on. In one embodiment, source code 215 is included in an uncompiled file. [0027]The source code 215, once compiled, provides functions and/or variables that may be accessed and used (e.g., dynamically loaded) by an application, service or other software such as process 210. In one embodiment, the source code 215 is an uncompiled plugin to process 210. A plugin is a computer program that interacts with a host application or other software to provide a specific function to the host application/software. The plugin may rely on a host application's user interface (e.g., a user interface of process 210). In another embodiment, the source code 215 is an uncompiled extension to process 210. An extension is a computer program designed to enhance the functionalities of a host application (e.g., of process 210). Alternatively, the source code 215 may simply be one or a few strings of code that provide, for example, instructions for a single function or variable to process 210. [0028]Compiler 220 is a computer program that transforms source code written in a particular computer language (e.g., source code 215) into compiled or machine readable code (e.g., machine code, byte code, etc.). The machine readable code is then executable by a particular hardware platform's processor. Typical compilers perform one or more of lexical analysis, code parsing, code generation, code optimization, preprocessing and semantic analysis. [0029]In one embodiment, compiler 220 is a compiler for a traditional compiled language (e.g., C or C++) that has been modified to include eval function support 226. Compiler 220 may also be a compiler for Java that has been modified to include the eval function support 226. Alternatively, no compiler support may be necessary to implement the eval function. In such an embodiment, compiler 220 may be a traditional compiler that has not been modified to include any eval function support 226. [0030]Process 210 may be any computer program, application or service that includes instructions for an evaluate (eval) function 222 that enables the process 210 to evaluate a string (or multiple strings) as though it were one or more statements (an individual executable unit of code). For example, the string "return 0;" in C is a statement that contains the "return" keyword (which tells the compiler to generate code to execute the correct function) and the expression "0." The eval function 222 can supply syntactic wrappings so that a piece of functionality itself can be compiled as an independent unit. For example, in C/C++ a function definition structure may be wrapped around the code to be evaluated, and in Java a class definition structure may be wrapped around the code to be evaluated. [0031]After the source code 215 is evaluated (e.g., compiled), the eval function 222 causes it to be dynamically loaded. Dynamic loading is a mechanism that enables a process (e.g., an application or program) to add executable code to its executable image and configure it to execute, at runtime (the time during which the process is running). Typically, dynamic loading includes loading a file into memory during runtime. In conventional dynamic loading, the file and/or instructions that are loaded into memory must be executable (e.g., machine code, byte code, etc.). However, embodiments of the present invention enable process 210 to dynamically load uncompiled files and/or source code 215 into memory during runtime via the eval function 222. [0032]To dynamically load source code 215, eval function 222 calls compiler 220 at runtime of process 210, passes to compiler 220 source code 215, receives from compiler compiled code (e.g., machine code), and executes the compiled code. The precise mechanics of the eval function 222 would depend on how an implementer intended that it be used. At its simplest, it would just wrap and execute a simple string. For example, doing this in Java could look like this: TABLE-US-00001 public class EvalExample { public static void main(String [ ] args) { eval("System.out.println(\"Hello, World!\");"); } } [0033]More specifically, in Java the eval function 222 (e.g., the eval( ) method coded above) would receive a string to evaluate. This string in one embodiment resides in source code 215. The eval function 222 can create a temporary class, with a single method (e.g., doEval) that performs the code that was passed as a string to evaluate. The eval( ) method may create a file that gets compiled, or may operate on InputStream and Output Stream objects, which can be tied to arrays in memory. In Java, the anonymous class would look something like this: TABLE-US-00002 public class EvalTmpClassYuHkSieEUfgwdlFrAl3b1w { public static void doEval( ) { System.out.println("Hello, World!"); } } [0034]The eval function 222 would call compiler 220 (in this example a Java compiler) and pass to compiler 220 the created temporary class, along with an instruction to compile source code 215 and to return a compiled class. Upon receipt of the compiled class, the eval function 222 causes the compiled class to be loaded (most likely, by passing the class file data to a ClassLoader subclass defineClass( ) method). The eval function 222 would then call the doEval( ) method on the freshly loaded class. Any temporary files that were created during the compiling of the source code 215 could then be deleted. [0035]In one embodiment, the eval function is implemented in process 210 without requiring any modification to a compiler (e.g., compiler 220) that was used to compile process 210. However, in order for the eval function 222 to be more tightly integrated with process 210 (e.g., have the evaluated code execute in the same execution scope as the caller, which would let evaluated code access and modify the variables defined in the enclosing scope), some compiler eval function support 226 may be required, which is described in greater detail below. [0036]The above example shows how the eval function 222 and eval function support 226 may operate for the Java programming language. The case for a C program would be analogous, as shown in the following example. Again, at its simplest the eval function would just wrap and execute a simple string, as follows: TABLE-US-00003 int main(void) { eval("printf(\"Hello, World!\\n\");"); } [0037]The eval function 222 would create a subroutine around the passed-in string: [0038]void evalfnYuHkSieEUfgwdlFrA13b1w( ) {printf("Hello, World!\n";} [0039]The eval function 222 would call the compiler 220 and pass the generated source code to the compiler 220. This can be accomplished "off the shelf", if it is acceptable for the instructions in the source code 215 to be only loosely integrated with process 210. The compiler 220 would generate a compiled version of source code 215, and pass it back to process 210. The eval function 222 would receive and call the function included in the compiled version of source code 215, after which any temporary files could be cleaned up. [0040]In one embodiment, in order for instructions generated based on the eval function 222 to be tightly integrated with process 210, compiler 220 includes eval function support 226. Principally, the eval function support 226 records which variables are accessible from the calling scope, and provide some mechanism for the eval function 222 to use this information. This can be accomplished by an eval function support 226 included in the compiler 220, or by eval function support 228 that is included in a preprocessor 224. [0041]A preprocessor 224 is program that runs before the compiler 220 to preprocess code. The amount of processing and type of processing performed by the preprocessor can vary. Some preprocessors can be used to extend the functionality of a programming language. In one embodiment, the preprocessor 224 includes eval function support 228 that records which variables are accessible from the calling scope of eval function 222, and provides some mechanism for the eval function 222 to use this information. [0042]In a more detailed example of an eval function 222 implemented in the C programming language, the eval function 222 may appear as follows: TABLE-US-00004 int main( ) { int i = 42; eval("printf(\" i == %d\\n\",i++);"); printf("now, i == %d\n", i); return 0; } [0043]In this example, the eval function would need access to information about where i is being stored (either on the stack, or in a particular register set). This information can be provided by eval function support 226 or 228, as described below. [0044]Upon execution of the above coded eval function 222, the eval function 222 may create a function definition that looks like this: TABLE-US-00005 void evalfnVYQHZMJklF3j_YdfJ0Jg1w(frame_descriptor *f) { int *iptr = (int *)lookup(f, "i"); printf("i == %d", (*iptr)++); } [0045]The frame_descriptor would be a structure that contains the information necessary to find a particular local variable in an enclosing scope by name. This frame descriptor structure could be created at the time that process 210 was compiled by the compiler 220. The compiler 220 would also need to record information on how to find static and global variables in the scope of the calling function. In one embodiment, the eval function support 226 or eval function support 228 provides these capabilities. [0046]In phases, one embodiment of the operations would be: [0047]1) The compiler, using the eval function support 226 or 228, processes the file that contains the definition of the function main, turning it into an object file, and creating a frame descriptor that can be found, based on the name of the enclosing function (for example, it could be stored under the global variable name "_frame_descriptors_main"). It may also record information on how to find out the name of a function from an execution address. [0048]2) When the main function executes, it passes the string `printf("i==%d\n", i++);` to the eval runtime function. In one embodiment, the eval runtime is either directly passed the appropriate frame descriptor (likely because the compiler or a preprocessor replaced eval(string) with evalruntime(&_frame_descriptors_main, string)), or it figures out the correct descriptor name from the return address of the calling function (in this case, main) and the information recorded by the compiler in operation 1. [0049]3) The eval runtime does a simple parse of the string it's given to replace variable references with virtual references, and determine which variables it needs to look up. Essentially, it tokenizes the string, and replaces identifiers with virtual references. [0050]4) The eval runtime generates the code shown in paragraph [0043] above, causes it to be compiled, loads the compiled result, executes it, and then cleans up. [0051]In one embodiment, the eval runtime does the lookups before it generates the code. In such an embodiment, the generated code may resemble the following: TABLE-US-00006 void evalfnVYQHZMJklF3j_YdfJ0Jg1w(int *iptr) { printf("i == %d", (*iptr)++); } [0052]If the compiler 220 is a Java compiler, the eval function support 226 and 228 may differ from that described above. Specifically, Java does not directly support pointer or reference types, so the eval function support 226, 228 would include more extensive functionality. [0053]In an example, the eval function may appear as follows: TABLE-US-00007 public class Example { public static void main(String [ ] args) { int i = 42; eval("System.out.println(\"i==\"+i++);"); System.out.println("now, i == "+i); } } [0054]This could be translated by eval function support 228 to something like this: TABLE-US-00008 public class Example { public class Examplemainvariables { public String [ ] args; public int i; } public static void main(String [ ] args) { int i = 42; Examplemainvariables e = new Examplemainvariables( ); e.args = args; e.i = i; EvalRuntime.eval(e, "System.out.println(\"i == \"+i++);"); args = e.args; i = e.i; System.out.println("now, i == "+i); } } [0055]The eval method of the EvalRuntime class would be able to use the information provided by the object to pass information back and forth to the evaluated code. Such an implementation is not as efficient as using pointers and references (since the entire scope of the method would need to be copied twice), but is both simple and robust. [0056]EvalRuntime.eval would generate code that looks like this: TABLE-US-00009 public class EvalClassVYQHZMJklF3j_YdfJ0Jg1w implements EvalInterface { public evalfn(Object e) { int i = EvalRuntime.lookupInt(e, "i"); System.out.println("i=="+i++); EvalRuntime.save(e, "i", i); } } [0057]FIG. 2B illustrates similar components to those shown in FIG. 2A. However, in FIG. 2A compiler 220 is an independent application. Therefore, in the embodiment shown in FIG. 2A, a separate and distinct compiler application having an independent allocation of memory and other system resources is used to perform the compiling. This separate compiler application can be initiated by process 210 through a command line tool. [0058]In FIG. 2B, on the other hand, compiler 245 is included in a library 240. Library 240 is a collection of subroutines, classes, functions, variables and other data that can be used by multiple different programs. In one embodiment, the library 240 is a shared library. Components in a shared library can be shared (accessed and loaded into memory) by unrelated applications. In some shared libraries, the components can also be shared in memory (the components only need to be loaded into memory once, and each application can map a different address space to the same physical memory page). [0059]In one embodiment, the library 240 is a dynamically linked library. In a dynamically linked library, subroutines included in the library can be loaded into an application at runtime. Therefore, the subroutines can remain as separate files in the secondary memory, and do not need to be linked to process 230 at compile time. This enables process 230 to load compiler 245 at runtime via dynamic linking. Therefore, process 230 can access compiler 245 only when necessary. [0060]Providing compiler 245 as a component in a dynamically linked library provides a number of advantages. Since the compiler 245 is in the library 240, process 230 does not have to rely on having compiler development tools installed. The compiler's performance (e.g., speed, response time, etc.) can also be increased because process 230 communicates directly with compiler 245 through memory. Moreover, calls can be made to specific portions (e.g., bits of code) in the compiler 245 during different phases of compilation, rather than loading the entire compiler 245 into memory. By providing compiler 245 as a subroutine/component in a dynamically linked library, compiler 245 can be implemented as an extension to applications that are configured to dynamically compile and load files. [0061]In one embodiment, the eval function 250 is also a component of library 240. Therefore, process 230 can load the eval function 250 at runtime and pass it variables, such as an identification of source code 235. This enables process 230 to use the eval function without requiring programmers to program eval functionality into process 230. The loaded eval function 250 can then cause the compiler 245 to also be loaded at runtime, pass compiler 245 source code 235 (e.g., from an uncompiled file), receive compiled instructions, and execute the compiled instructions. [0062]FIG. 3 illustrates a programming tool 300, in accordance with one embodiment of the present invention. The programming tool 300 may be, for example, an integrated development environment (IDE) or a component of an IDE. The programming tool 300 includes a collection of provided functions 310 for which code has already been written. In one embodiment, in which the programming tool 300 includes a graphical user interface, in order for a programmer to add a provided function (e.g., Function I) to a program that he/she is writing, he/she simply needs to drag an icon or text representing the provided function from a provided functions 310 area of the programming tool 300 to a source code 320 area of the programming tool 300. The code of the function then gets copied into the source code. In one embodiment, the provided functions 310 include an eval function. Therefore, a programmer need not understand how to code an evaluation function for a program written in a programming language that does not include a native eval function (e.g., in a compiled language). The provided eval function may correspond to the eval functions described with reference to FIGS. 2A-2B. [0063]FIG. 4 illustrates a flow diagram of one embodiment for a method 400 of adding functionality to a compiled process. Method 400 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (such as instructions run on a processing device), or a combination thereof. In one embodiment, method 400 is performed by computer system 100 of FIG. 1. [0064]Referring to FIG. 4, at block 405 processing logic calls new instructions that may include a function and/or variable included in source code. The processing logic may be a process that is in a runtime state (e.g., a process that is being executed by a processor). The source code may be a user readable string of high level language computer code (e.g., source code). In one embodiment, the source code is included in an uncompiled file. The uncompiled file may be, for example, a text file that includes instructions that can be read by a compiler such as a C/C++ compiler, a Java compiler, etc. [0065]At block 410, processing logic loads a compiler. The compiler may be an independent program (e.g., a separate executable file that is not part of a library) and/or may be a component of a library such as a dynamically linked library and/or a shared library. If the compiler is an independent program, it may be executed with a command line that is different from what it would have if it were being executed in the ordinary course of compiling a unit of code. For example in the GNU Compiler Collection (GCC), it may be passed "-fpipe" to tell it to read from its standard input instead of a file, or it may be passed "-fpic" or "-fPIC" to produce position independent code. For other compilers, other commands may be passed. If the compiler is a subroutine of a dynamically linked library, it may be loaded using standard application programming interfaces (APIs) provided by an operating system on which processing logic operates. [0066]At block 415, processing logic compiles the source code using the compiler to generate compiled machine code. Alternatively, the processing logic may compile the source code into byte code (e.g., Java byte code). The processing logic may also compile the source code into some other appropriate machine readable form. For example, in the Pascal language, the source code may be compiled into a variable length coding called p-code. P-code (also known as pseudo-code) is a form of compiled code that is designed to execute on a virtual machine called a pseudo-code machine. In another example, in the Forth programming language, the source code may be compiled into subroutine-threaded code that includes a sequence of machine code call instructions and/or address references. [0067]At block 420, processing logic unloads the compiler. [0068]At block 425, processing logic loads the compiled code into memory. [0069]At block 430, processing logic retrieves the function and/or variable from the compiled code. [0070]At block 435, processing logic executes the function and/or accesses the variable. Therefore, the functionality of the processing logic can be extended without a need to recompile, relink or change the code of the processing logic. New code can be repeatedly added on to the processing logic without a need for recompiling. This permits applications to be developed that call on functions that have not yet been completed when the applications are compiled, with the assumption that the functions will be created down the line. [0071]At block 440, processing logic determines whether the function and/or variable is still needed. If the function and/or variable are still needed, the method may end. If the function and/or variable are no longer needed, then it may be unloaded from memory. The method then ends. [0072 James P. Schneider, Raleigh, NC US:
http://www.faqs.org/patents/app/20100218174
CC-MAIN-2014-52
refinedweb
5,308
51.48
This is my first post on a problem thats driving me slowly mad so here goes. What im trying to do is to write a method that takes 2 char parameters called filenameIn and filenameOut, so that i can read from an input file and copy the contents into an output file. The code i have written so far compiles and links fine but doesnt do anything-i have programmed before but only in java so this problem is one that i want to try and solve as its holding me up finishing the program. This is the code i have so far: That is my main.cpp, then there's textfilecopy.h:That is my main.cpp, then there's textfilecopy.h:Code:#include <iostream> #include "textfilecopy.h" using namespace std; /* * Partially completed program * The program should copy a text file. * */ int main(int argc, char **argv) { if (argc !=3) { cerr << "Usage: " << argv[0] << " <input filename> <output filename>" << endl; int keypress; cin >> keypress; return -1; } textfilecopy tfc; int keypress; cin >> keypress; } and finally textfilecopy.cppand finally textfilecopy.cppCode:#pragma once #include <fstream> #include <iostream> #include <string> using namespace std; class textfilecopy { public: textfilecopy(); ~textfilecopy(); Copy ( char* filenameIn, char* filenameOut) { ifstream infile ("C:\\Documents and Settings\\Paul McKenzie\\Programming Work\\Parser\\Debug\\input.txt"); if (!infile) { cerr << "Can't open input file " << filenameIn << endl; exit(1); } ofstream outfile("C:\\Documents and Settings\\Paul McKenzie\\Programming Work\\Parser\\Debug\\output.txt"); if (!outfile) { cerr << "Can't create output file " << filenameOut << endl; exit(1); } string s; while (infile >> s) { outfile << s << endl; } infile.close(); outfile.close(); } }; If anyone could help me out id be so grateful, as i say im new to C++ so it wouldnt suprise me if i missed something basic, or put things in the wrong place lolIf anyone could help me out id be so grateful, as i say im new to C++ so it wouldnt suprise me if i missed something basic, or put things in the wrong place lolCode:#include ".\textfilecopy.h" textfilecopy::textfilecopy() { } textfilecopy::~textfilecopy() { } Thanks a lot, Paul
http://cboard.cprogramming.com/cplusplus-programming/57972-read-write-method.html
CC-MAIN-2016-07
refinedweb
344
61.87
Hi Evgeniy Kolmakov, Actually i need to read two different or dynamic xml namespaces from single structure (2 XSD are concatined into single data type) and populate into target, source & target structure contains attribute name as 'xmlns'. Regards, Samir. Hi, Please read what is the significance of XMLNS first. This is what you are trying to achieve "xmlns:xmlns". You are trying to wrap/designate your xml element within "xmlns" itself. This is not allowed in XML world. Its a reserved keyword. You can also relate it like, you are trying to use a primitive data type name itself in java (int, float etc.) for your variable name. int int=8; //Wrong, not allowed, your compiler is confused int a=8; //Correct, allowed, compiler is happy Hope you would have understood the issue here. Thanks, Ambuj It reflects. Click on the SRC button in test tab. You will get the picture.That was my point. you can not use xmlns for your custom attribute name, choose a different name. PI won't recognize it. Just like you can not define a variable name with int, float etc. Hi Samir, As I understand, your problem is related to the fact that your attribute: "xmlns" is a special one - it defines an XML namespace. Why do you want to set/change the namespace in your mapping? If it's really necessary then you can do it in a java mapping, for example. I'm curious how your test message looks like. Could you please press "Src" button and attach the screenshot? Regards, Andrzej Hi Samir, Thank you. As I wrote before, if you really need to put some value to this attribute you can do it using a java mapping for example. But to be quite honest, this issue smells like a bad design to me. Regards, Andrzej Add comment
https://answers.sap.com/questions/384389/attribute-name-with-xmlns-not-allowing-in-sap-pipo.html
CC-MAIN-2019-09
refinedweb
308
76.22
Ask Kathleen Learn to pass anonymous types outside the method in which they're created; take advantage of closures when working with lambda expressions; drilldown on overloading; initialize static fields properly; and see where KeyedCollections improve performance. Q I've heard the term "closure" mentioned in relation to lambda expressions. What is a closure? A Closures are fragments of code--special delegates --that capture variables in the scope where the closure is defined. This means a closure can contain a local variable and access that variable even where the variable is not otherwise in scope. In .NET, code fragments are delegates, and the in-line delegates--such as lambda expressions and C#'s anonymous delegates--are closures. A few examples can help you understand the difference. This snippet is a little contrived, but the lambda expression is passed out of the scope of the current methods and takes the variable i with it: Private Sub Test1() Dim i = 1 Dim lambda1 = _ Function(val As Int32) val + i i = UseAsDelegate( _ 10, lambda1) Console.WriteLine(i) End Sub Private Function UseAsDelegate( _ ByVal val As Int32, ByVal del _ As Func(Of Int32, Int32)) As Int32 Return del(val) End Function Lambda expressions become either delegates or expression trees depending on the type of the variable they're assigned to. Lambdas are inferred to be delegates by default in VB (C# doesn't allow lambda inference), so the variable lambda1 is a delegate. Func(Of Int32, Int32) is one of the new delegate types created in .NET 3.5 to make it more convenient to use lambdas. The last type parameter defines the return value, and all other type parameters define the types of the delegate's parameters. It's important to recognize that the variable is contained in the lambda; the value of two is not copied into the closure delegate. You can see this if you change the variable value: Private Sub Test2() Dim i = 2 Dim lambda1 = Function(val As Int32) val + i i = UseAsDelegate(10, lambda1) Console.WriteLine(i) i = UseAsDelegate(100, lambda1) Console.WriteLine(i) End Sub After the first call, the output value is 12. If the value two was passed, the second call would result in 102. However, the variable itself is captured in the closure, so the output after the second call to the lambda is 112. One way you can take advantage of closures is to have delegates interact in ways that aren't supported by the method using the delegate. For example, you can write a sort that cancels if a particular condition fails (see Listing 1) to give a new feature to the List.Sort method. Q Can you pass anonymous types outside the method in which they're created? A The scope of an anonymous type is the method in which it's created. Code outside the method doesn't understand the structure of the anonymous type. However, like all types in .NET, instances of anonymous types can be cast implicitly to System.Object, so you can pass them between methods as objects. This wouldn't be useful if you couldn't cast them back to a meaningful type. Recreating a meaningful type takes a few generic tricks, and the code where you recreate the type has to know the target structure. This sets up a dependency that can be hard to maintain, so I'd restrict using these conversion tricks to code that's already tightly coupled. Most of the time, it's better to create simple named types that can be passed around the system with normal semantics and without performing expensive casts. Anonymous types are reused throughout the assembly based on the signature made up of their property names and types. Thus, if you have a type with a string Name property and a date property named Date used in two different places in the assembly, the same anonymous type is used. You can take advantage of this to recreate a known anonymous type within the same assembly (see Listing 2). The DoSomething method receives an array of System.Object that contains instances of the anonymous type. The DoSomething method creates a dummy instance with matching FirstName and LastName properties to define the type within the Convert method. The Convert method uses type parameter inference to establish the type from the dummy parameter which matches the expected anonymous type signature. The type inference system determines the type you're working with, and after the instance is cast to this type it is fully functional as the original anonymous type. Checking for a successful conversion is important because the conversion will fail if the LINQ statement changes and the dummy instance in DoSomething isn't updated. Q I added a generic method overload to my project to replace an earlier method that took "Object" as a parameter. A different overload of the same method is no longer being called. What's going on here? A Overloading lets you call the same method with different parameters. .NET resolves to the correct implementation based on the concept of a close match. For example, if you have overloads for Int32, Int64, and System.Object, and you call the method passing an Int16, .NET executes the overload for Int32 because it's the closest match. For classes, the concept of closest match is the nearest base class. For example, if C derives from B and B derives from A, and a method has overloads for A and B, passing C causes the overload for B to be called because it's the closer ancestor. This has been basic overloads behavior since .NET 1.0. If you have overloads for A, B, and System.Object, and you pass C, .NET uses the overload for B. Creating a generic overload throws an extra spin into the mix whether or not you remove the Object overload: public static void MyMethod<T>(T param) { Console.WriteLine("In generic overload"); } .NET can construct a perfect match and calls the generic overload rather than the overload for B. .NET uses the generic overload unless a perfect match overload exists which changed the method called when you passed B to the method. Q I'm getting an FxCop error: "Initialize reference type static fields inline." Can you explain what the problem is and how to fix it? I'm using C#. A The same rules for initializing static fields apply to both Visual Basic and C#, so this answer covers both languages. The rule you encountered removes a performance hit due to wasteful static constructor behavior. You can almost always avoid using a static constructor. There are five different ways to supply initial values to static fields (see Listing 3). The const approach shown for the First and Sixth fields in Listing 3 has the least impact on performance because the literal value is included directly in the assembly as metadata. Assigning a literal as shown in the Second and Sixth fields is slightly slower and has no added benefit. If you're using FxCop 1.35, it will flag the literals and request you use a constant. You're currently calling a static constructor, similar to the Fourth and Ninth fields because you're setting values that you must calculate at runtime. This is the reason for FxCop's complaint. There are subtle differences between calling a function to set a static field and setting the field in a static constructor. Static constructors are generally expensive because code is added during JIT to ensure the static constructor is called before the class is used in any way. While generally preferable, in-line declarations such as the Third and Eighth fields have two issues: They might be called when they're not needed, and they might not be called before constructors on the class or static methods are called. In-line declarations can be called when no instance of the class is instantiated and no static method of the class is used. This is a problem if setting the initial field value is expensive. Inline assignments are guaranteed to occur before fields are used, but not guaranteed before instance constructors or static methods are called. This can be a problem in the rare situation where you need to set some other type of global state such as deleting a previously created file. It's also a bit inconsistent with good development practices to set state in a function whose apparent purpose is getting the value of a static field. So in rare situations, static constructors might be appropriate. In your case, you create a new list, and filling it might be expensive. So it makes sense to avoid that step unless you use the list. Assuming you're relatively disciplined and have a naming strategy to help you recognize backing fields that shouldn't be used directly, the approach used in the Fifth field is probably the best approach. This just-in-time creation of the list gives you precise control over when the list is created. You can either wait until you need the list, or retrieve it early if there is a better time to construct it. Q I used to have a column in the Exceptions dialog of Visual Studio titled "User Unhandled." But, it's disappeared and now I only have one that says "Thrown." Do you know how I can get the "User Unhandled" column back? I'm using Visual Studio 2005. A You're using the exception dialog that resides under the Debug menu. This dialog lets you control what happens when certain types of exceptions occur during debugging, and it's also one of the fastest references to the exception hierarchy. During debugging, the User Unhandled and Thrown columns of this dialog control when the debugger breaks. However, the User Unhandled column doesn't make sense when you turn off the "Just My Code" option so Visual Studio removes it. Turn on "Just My Code" in Tools/Options/Debugging/General, and the column will reappear. Q I'm trying to figure out whether to use a generic List, generic Dictionary, or implement my own class based on a generic IList. I have a large number of collections that will be used frequently at runtime. Each item in the collections contains a GUID, which I'll be searching on. The majority of these collections have only one or two items. Perhaps 10 percent to 15 percent of them contain more than 100 to 500 items. I'm worried about performance, but I'm also worried that I won't be able to change from List to Dictionary later if I make the wrong decision because a lot of code will call these methods. Do you have any suggestions? A I'd suggest creating your own collections derived from .NET Framework classes. System.ObjectModel.KeyedCollection is probably your best base class. It provides both sequential and lookup access and holds list items with each item containing its own key. You derive your class from KeyedCollection and retrieve the key by overriding the GetKeyForItem method: public class CustomerCollection : KeyedCollection<Guid, Customer> { protected override Guid GetKeyForItem( Customer item) { return item.CustomerId; } } For your scenario, a major advantage of the KeyedCollection is the ability to control whether it creates an internal dictionary. You can set a threshold value, probably around 10 to 20, and collections with fewer items will not use a dictionary. Instead, the collection iterates to find specific keys. For small collections, this is faster than maintaining the dictionary. There are a couple of caveats to keep in mind. Internally, KeyedCollection is a collection and a dictionary. This means that items exist in two places. For reference types, this means the reference is duplicated. The reference is an integer, which is small and of no concern from a memory perspective, unless you're using enormous collections. However, value types can use much more space, so this isn't a good approach for very large value types. KeyedCollection has one ugly wart: It inherits Con-tains<itemType> from Collection, but it also overloads it with Contains<keyType> for keyed access. That's problematic because your fellow programmers will probably expect a Con-tainsKey method that more closely parallels the Dictionary class. If the key and item are the same type, which can occur with string items, the keyed access wins unless you cast to Collection. Whew! And on top of that, help documents the Contains method incorrectly. You can't remove a method defined in a base class, but you can hide it partially. All you need to do is add a ContainsKey method and mark the overloaded Contains method as obsolete: public bool ContainsKey(Guid id) { return base.Contains(id); } [Obsolete("Use ContainsKey instead")] [EditorBrowsable(EditorBrowsableState.Never)] public new bool Contains(Guid id) { throw new NotImplementedException( "Use ContainsKey instead"); } The new modifier is similar to the Overloads modifier in VB and generates the hidebysig modifier in IL. It indicates that the Contains method in your class replaces the base class method that searched through the key. The EditorBrowsable attribute should hide the method from IntelliSense, but C# has more trouble picking up this attribute than VB, so you're likely to still see the method. The compiler will generate a warning if the programmer inadvertently uses the Contains method because of the Obsolete attribute. You don't currently have any code written against your class, so you can also throw an exception. This approach discourages the use of the base class method, although a determined programmer can still cast to the base class and access its method. Thanks to Alex James for reporting on the anonymous type trick he learned from Wes Dyer , which I was happy to turn into VB code and explain a bit further. Printable Format > More TechLibrary I agree to this site's Privacy Policy. > More Webcasts
https://visualstudiomagazine.com/articles/2008/02/01/capture-variables-with-closures.aspx
CC-MAIN-2018-26
refinedweb
2,318
62.78
Manually counting of books in bookshelves may take you hours before you count it all. Worst, you might be forgetful and you need to start counting again all the books from the start. But, now you have nothing to worry about because this simple automated method will help you out. This method will be the one to calculate the quantity of books that are available. All you have to do is fill out the book names and author in the database and once you clicked the “calculate” button the total number of books will be calculated and displayed in the dataGridview. Let’s Begin: 1. Create a database in the SQL server 2005 Express Edition and name it “dbbooks“. 2. Create a table in the database that you have created. 3. Open Microsoft Visual Studio and create a new Windows Form Application for C#. Then, do the form just like this. 4. Go to the solution explorer, click the “code view” to fire the code editor. 5. Initialize the classes and declare the variables that you’re going to use. Note: Put “using System.Data.SqlClient;” above the namespace to access sql server library. 6. Create a method to display the data in the dataGridview from SQL Server database. 7. Set up the connection between SQL server and C#.net. Then, call a method that you have created in the fist load of the form. 8. Go back to the design view, double-click the “Calculate” button and do the following codes for calculating the total quantity of books. Output: For all students who need a programmer for your thesis system or anyone who needs a source code in any a programming languages. You can contact me @ : Mobile No. – 09305235027 – TNT 1 thought on “C#: Calculate the Total Quantity of Books Based on DataGridview with SQL Server”
https://itsourcecode.com/free-projects/csharp/c-calculate-total-quantity-books-based-datagridview-sql-server/
CC-MAIN-2021-43
refinedweb
307
73.88
Since Groovy 1.8 we can check if a Map is equal to another Map if the keys and values are the same. Very convenient in tests for example. def someMap = [age: 34, name: "Ted"] assert someMap == [name: "Ted", age: 34] Today I kept staring at a failure, while testing some x and y graph data points returned by a Grails controller, where two Maps were somehow not equal according to Spock, while even the assertion’s output looked ‘equal’. when: controller.milkYield() then: response.json and: "series are present" def series = response.json series.size() == 2 and: "realized series is correct" ... and: "predicted series is correct" def predictedSeries = series[1] predictedSeries.values.size() == 2 predictedSeries.values[0] == [y:null, x:'Mar'] predictedSeries.values[1] == [y:121, x:'Apr'] resulted in: Condition not satisfied: predictedSeries.values[0] == [y:null, x:'Mar'] | | | | | | | false | | [y:null, x:Mar] | [[y:null, x:Mar], [y:121, x:Apr]] [values:[[y:null, x:Mar], [y:121, x:Apr]]] Why isn’t [y:null, x:Mar] equal to [y:null, x:Mar]? After having checked explicitly with predictedSeries.values[0].x == 'Mar' predictedSeries.values[0].y == null // <- better be null! I remembered again I was dealing with JSON data in Grails, which uses the Null Object pattern. Keeps biting me every now and then 🙂 Condition not satisfied: predictedSeries.values[0].y == null | | | | | | | | | false | | | null (org.codehaus.groovy.grails.web.json.JSONObject$Null) | | [y:null, x:Mar] | [[y:null, x:Mar], [y:121, x:Apr]] [values:[[y:null, x:Mar], [y:121, x:Apr]]] It’s a JSONObject$Null instance. It exists because it’s equivalent to the value that JavaScript calls null, whilst Java’s null is equivalent to the value that JavaScript calls undefined. There are some long time posts already describing this behaviour that JSONObject.NULL was not equal to null… JSONObject.NULL.equals(null) // true JSONObject.NULL == null // false!! and some meta-class changes you could do were suggested at the time. Seems reported GRAILS-7739 (Wrong == and asBoolean behavior for JSONObject.Null) says it’s been fixed a few years back in Grails 2.2. Atleast something has been fixed. Although you still can not do JSONObject.NULL == null, you can change the assertion to using Groovy Truth since !JSONObject.NULL does work. !predictedSeries.values[0].y // or predictedSeries.values[0] == [y: JSONObject.NULL, x:'Mar'] if you like And of course I should remember null in (console) output is always a String representation leading to “null” – such as JSONObject.NULL‘s toString() returns. One thought on “Grails’ JSONObject.NULL More or Less Equal To Null”
https://tedvinke.wordpress.com/2015/05/22/grails-jsonobject-null-more-or-less-equal-to-null/
CC-MAIN-2018-05
refinedweb
434
62.04
Created on 2008-01-28 14:39 by agoucher, last changed 2008-01-28 15:57 by gvanrossum. This issue is now closed. There are a couple places in unittest where 'issubclass(something, TestCase)' is used. This prevents you from organizing your test code via class hierarchies. To solve this problem, issubclass should be looking whether the object is a subclass of unittest.TestCase to walk the inheritance tree all the way up and not just a single level. Currently, this will not work. module A.. class A(unittest.TestCase): pass module B... import A class B(A.A) def testFoo(self): print "blah blah blah I have attached a patch which will address all locations where this could happen. I don't really understand what problem you are trying to solve. Can you attach a sample script to show it more clearly? Also, the only thing your patch does is rename Test(Case|Suite) references to unittest.Test(Case|Suite)... I doubt it would have any effect unless you were monkeypatching the unittest module to replace those classes with other ones (which should certainly be considered very dirty ;-)). This patch seems to be based upon a misunderstanding of how Python namespaces work.
https://bugs.python.org/issue1955
CC-MAIN-2020-45
refinedweb
203
75.91
15 March 2012 06:39 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The source did not give details of the upstream plant. The producer’s downstream 200,000 tonne/year acrylonitrile-butadiene-styrene (ABS) unit and 120,000 tonne/year expandable polystyrene (EPS) unit will also be shut at the same time, the source added. In addition, Dagu Chemical has decided to cancel its contract supply and suspend spot sales in April, according to the source. The company will reduce its traditional SM supply to eastern and southern As a result of the shutdown, some downstream producers will be actively stocking up on SM cargoes, said a market player. North China SM prices are expected to be boosted by the reduced supply and recovered demand from the downstream EPS sector, the player
http://www.icis.com/Articles/2012/03/15/9541754/chinas-tianjin-dagu-to-shut-sm-abs-eps-units-for-maintenance.html
CC-MAIN-2013-48
refinedweb
132
59.33
Your The japanese car owners club and forum free, forum, japan, owners, japanese, club, cars, #honda, mitsubishi, toyota, subaru, google Free forum : Discovering Alabama off road ride, alabama, discovering, road, dirt, bike, yamaha, #honda, suzuki, kawasaki Free forum : Honda Civic Type R community forum. A place for us to share and exchange knowledges. free, ..::, s'pore, #honda, civic, type-r, club, ::.. Welcome to Wales biggest trail & green laning forum where you can chat and join rides with people that share the same common interest in motorcycles. green, laning, trail, riding, wales, lanes, #honda, roading, dirt, cadets Forum for Honda Accord Owners #honda, accord, forum, A forum dedicated to japenese performance cars. TOA - The Orient Academy t.o.a, orient, academy, #honda, mazda, civic, celica, mugen, spoon, toms, st205, cars, turbo Website dedicated to all Honda/Acura enthusiasts Our first day live was January 2nd 2008! #honda, acura, hondaforum import car website. import-zone. com. import cars hondas acuras, import, cars, hondas, acuras
https://www.womanboard.com/tag/honda
CC-MAIN-2019-39
refinedweb
162
53.92
Precision. Float x; printf("%.6f", x); The above will display 6 digits after the decimal point. Here, the field width is not specified. Consider the following example where both are specified: printf("%6.2f", x); The above code will display the number x in width 6 with two digits after the decimal point. Both are placed between the % symbol and the conversion character. It is also illustrated in the following program. The bars have been included with the purpose of showing the placement of output between the specified width. The space between bars represents the width setting. The following program illustrates the application of precision setting. Illustrates width and precision setting for a decimal point number. #include <stdio.h> main() { double Pi = 3.141592653589793238; printf("|%-lf|\n", Pi); printf("|%lf|\n", Pi); // default width setting in computer printf("|%30lf|\n", Pi); // width 30 right justification printf("|l%-30lf|\n", Pi); //width 30 left justification printf("|%030.10f|\n", Pi); /*width 30 precision 10 right justification. Filling blank space with '0' */ printf("|%-6.3f|\n", Pi); printf("|%-0.15f|\n", Pi); printf("|%-030.15f|\n", Pi); } The expected output is as given below The first two lines of the output are due to default setting in the computer. The width is as much as the number of digits, therefore, right and left justification are same. The third and forth lines of the output are due to specified width of 30, but no precision is specified. So, we get 6 digits after the decimal point (default setting in computer) with right and left justification, respectively. The fifth line of the output is due to the following code: printf("|%030.10f|\n", Pi); which specifies a total width of 30 with 10 digits after the decimal, right justification and the empty spaces are to be filled by '0'. For the sixth line of the output, the specifications are (i) width 6, (ii) precision 3, and (iii) left justification. In the output there is only one empty space. For the last but one line of the output, the specifications are (i) no width specified and (ii) precision 15. The width is by default. So, there are no empty spaces. For such a case, the left and right specification is immaterial. The last line of the output corresponds to (i) width= 30, (ii) precision= 15 and left justification and (iii) character '0' to fill empty spaces. But since 0s are added only with right justification, there are no 0s in
http://ecomputernotes.com/what-is-c/types-and-variables/precision-setting
CC-MAIN-2018-17
refinedweb
415
67.25
Hi, I have one problem after the installation of Intel PROSet/Wireless Software (14.2.0.010) for Windows 7 x64. I have changed the software from 13.5 to 14.2 in my laptop after the replacement of wifi adapter from 3945abg to 4965agn. And after the installation of the last version of PROSet/Wireless Software, in the control panel of Windows 7 SP1 x64 there is a blank icon. How to remove it? When I remove the software the icon disappears. Best regards, Mark I had the same problem (though I think that, for me, it appeared after installing revision 14.3.x.x of PROSet/Wireless, but I could be wrong and I might also have skipped revisions). Looking in the registry, I found a seemingly unused control panel namespace labelled "Intel® WiFi Connection Utility", with key "{B9F96805-88EC-4952-929F-397985D7B2D7}" (in "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\ControlPanel\NameSpace\"). That key did not appear anywhere else in my registry, so I surmised that it was likely to be unused. I first renamed the key to something invalid (prefixing it with "aaa") and the blank icon disappeared. The "Intel PROSet/Wireless tools" control panel was not affected (still present and working). I then searched the web for "B9F96805-88EC-4952-929F-397985D7B2D7" and found a post on Lenovo's forum where another user found the same thing. So I backed up ("export") the key to be safe and then deleted it. That seems to have resolved the problem.
https://communities.intel.com/message/158004
CC-MAIN-2016-40
refinedweb
252
66.54
#include <AsympEdgesTest.hh> Definition at line 19 of file AsympEdgesTest.hh. Definition at line 22 of file AsympEdgesTest.hh. Explicitly succeds a test. Definition at line 109 of file testcase.hh. Create the sheet object.. Loads a mesh. Load thickness information. Shows the information for one cell. Prints a report on the number of passed and failed tests to the output stream. Resets the counters for the failed and passed tests. Definition at line 116 of file testcase.hh. Runs the tests. Must be overwritten by the specialization. Implements test::TestCase. Scans over all cells and show if an edge is at one sheet. Sets the output stream. Definition at line 106 of file testcase.hh. Tests different spd matrices with CG. Tests different spd matrices with CG. Thickness. Definition at line 38 of file AsympEdgesTest.hh. Mesh. Definition at line 36 of file AsympEdgesTest.hh. Directory with the meshes. Definition at line 34 of file AsympEdgesTest.hh. Sheet object. Definition at line 40 of file AsympEdgesTest.hh.
http://www.math.ethz.ch/~concepts/doxygen/html/classtest_1_1AsympEdgesTest.html
crawl-003
refinedweb
168
73.03
Please login Prime Prepinsta Prime Video courses for company/skill based Preparation (Check all courses)Get Prime Video Prime Prepinsta Prime Purchase mock tests for company/skill building (Check all mocks)Get Prime mock Replace each element by its rank given in array using C++ Replace each element by its rank given in array in C++ In this article, we will brief in on how to replace each element by its rank in the given array. There is an array of n elements, replace each element of the array by its corresponding rank.The minimum value element will have the highest rank. /> Algorithm - Use Map to replace each element by it’s rank. - The array element and its index are stored in two columns in the map - Store the element in sorted order . - Start iterating it, the elements will be in an increasing order sequence. - Assign values for each element in increasing order - Starting from 1 to k incrementing by 1 for each element. - Later the index is replaced by the element’s rank C++ Code Based on above algorithm #include <bits/stdc++.h> using namespace std; void change (int array[], int k) { map < int, int >map; for (int x = 0; x < k; x++) map[array[x]] = x; int rank = 1; for (auto x:map) array[x.second] = rank++; } int main () { int k; cout << "Enter the size :"; cin >> k; int array[k]; cout << "Array elements : "; for (int x = 0; x < k; x++) cin >> array[x]; change (array, k); cout << "Elements afer replacement : "; for (int x = 0; x < k; x++) cout << array[x] << " "; return 0; } Input: Enter the size: 6 Array elements: 9 8 7 6 5 4 Output: Elements after relacement: 6 5 4 3 2 1 Login/Signup to comment
https://prepinsta.com/cpp-program/replace-each-element-by-its-rank-given-in-array/
CC-MAIN-2021-43
refinedweb
291
52.23
Python binding to gorilla-audio library pyrilla pyrilla is a self-contained statically linked binding to gorilla-audio library - “an attempt to make a free, straightforward, cross-platform, high-level software audio mixer that supports playback of both static and streaming sounds”. Like the original it is intended for video game development. pyrilla’s goal is to provide a python audio package that can be installed without any external dependencies with single pip install pyrilla command. It is built with cython and its API is inspired by part of great but unmantained bacon game engine. It works without any problems on OS X, Windows, and Linux. Officially supported Python versions are py27, py33, py34, py35. pyrilla on PyPI and supported systems Pyrilla is wrapper on Gorilla Audio C library that is statically linked during installation. For developers convenience it is distributed on PyPI as binary wheels for Windows, OS X, and Linux. Its extensive built pipeline targets different system flavours (32/64bit) and different Python versions. On supported systems it can be easily instaled with pip: pip install pyrilla Most up-to-date list of provided distributions is available on pyrilla’s project page on PyPI. Depending on target platform the underlying Gorilla Audio library is compiled with slightly different settings: If you really need support for other platform or more Python versions then fill issue on GitHub repository for this project so I can prioritize my work. I don’t want to spend my time on providing more distributions not knowing if anyone really needs them. Note: Linux wheels for pyrilla on PyPI are portable Linux build distributions (i.e. manylinux1) as described by PEP 513. Source distribution (sdist) for pyrilla available on PyPI is still a bit broken. Generally it is not supposed to compile on Linux. This is going to change in future. If you want to use pyrilla on Linux you need to build it by yourself on your platform. The process is preety straightforward and described in building section of this README. Last but not least, there is also some support for cygwin. Unfortunately there is no binary wheel on PyPI for this environment yet. If you want to use pyrilla under cygwin then you need to compile it manually. usage The easiest way to play single sound is to use Sound class: from pyrilla import core def finished(sound): print("sound %s finished playing" % sound) quit() # note: sound file extension must be explicitely provided sound = core.Sound("soundfile.ogg", "ogg") # optional callback will be called when sound finishes to play sound.play(finished) while True: # update internal state of default audio manager and mixer # this mixes all currently played sounds, pushes buffers etc. core.update() To play looped audio you need to use Voice instance that can be created from existing sound. from pyrilla import core sound = core.Sound("soundfile.ogg", "ogg") voice = core.Voice(sound, loop=True) voice.play() while True: core.update() For more features like custom managers/mixers, voice control (pitch, gain, pan) or stop/play see code samples in examples directory of this repo. building Building pyrilla prerequisites: - cython - cmake - make If you are going to build this package then remeber that Gorilla Audio is bundled with this repository as Git submodule from my unofficial fork on GitHub (under gorilla-audio directory). You need to eaither clone this repository with --recursive Git flag or init submodules manually: git submodule update --init --recursive Use cmake to build build gorilla-audio cmake gorilla-audio/build cmake --build . --config Release python setup.py build For Windows (also on cygwin): cmake -DENABLE_OPENAL:STRING=0 -DENABLE_XAUDIO2:STRING=1 -DENABLE_DIRECTSOUND:STRING=0 . cmake --config Release --build . Then build and install the python extension: python setup.py build python setup.py install Note that building for Windows may be bit trickier. If your personal environment is broken and compilation step for Gorilla Audio does not find the correct path for DirectX SDK and/or XAudio2 lib file. If you have same problems as I have then you probably need to provide this path manually to first cmake call: -DDIRECTX_XAUDIO2_LIBRARY=path/to/the/DirectXSdk/Lib/x86/xapobase.lib Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pyrilla/
CC-MAIN-2017-51
refinedweb
711
55.95
I came up with this challenge when I had to write a function to divide a sequence to percentiles. I needed this to calculate some statistics over trips for plnnr.com. “This sounds trivial” I thought, and reached for my simple blocks function: def blocks(seq, block_len): """blocks(range(5),2) -> [[0, 1], [2, 3], [4]]""" seq_len = len(seq) if seq_len%block_len == 0: num_blocks = seq_len/block_len else: num_blocks = 1 + (seq_len/block_len) result = [[] for i in xrange(num_blocks)] for idx, obj in enumerate(seq): result[idx/block_len].append(obj) return result. First stab at it: Then I did a recursive solution to see if I could make it more compact: How’s this? A quickie, I’m not sure if it’s elegant enough… Thank you all for your solutions! Jeremy: I think I didn’t explain myself well enough: I wasn’t aiming for a better “blocks” function. I was aiming for a “nblocks” function, whose signature is nblocks(seq, num_blocks). Lucas: your solution is problematic, as for nblocks(range(10), 3), the last element was dropped. Yuv: try to write it as nblocks(seq, num_blocks) :) I will add an edit to the blog post to make the requried signature more clear :) Ok, let’s try… Hmm, not really elegant. Having to know the length of the sequence seems unavoidable. And izip(*[seq] * block_len) looks promising, but i dunno how to make it work :). Ephes: Thanks for the solution! However, that’s not quite what I was looking for :) See again my update and my comment above: the signature should be nblocks(seq, num_blocks). So for the “usual” percentiles, it’d be nblocks(seq, 10). Regarding the length of the sequence: you are right, it’s unavoidable to know it but that’s ok. (It’s avoidable only if you’re willing to do some slightly ugly hacks :) Oh, I see I misread what you were trying to do. Will work on a better solution. Is that the function you’re looking for? I’m not sure what you mean by “As a general rule, the extra items should be spread as evenly as possible.” Here’s my solution with my own interpretation of that statement: def blocks(seq, block_len): rest = seq result = [] while len(rest) > block_len: result.append(rest[:block_len]) rest = rest[block_len:] #now distribute for i, v in enumerate(rest): result[-(i+1)].append(v) return result Oops. Let me try it again with the pre tag. Breshenham to the rescue! def nblocks( seq, block_len ): assert seq l = [] error = 0.0 deltaerr = float(block_len) / len(seq) for item in seq: l.append( item ) error += deltaerr if error >= 1.0: yield l l = [] error -= 1.0 if l: yield l ( Provides better distribution ) Shouldn’t have used the code tag… Trying again: Breshenham to the rescue! >>> list( try2.nblocks( range(10), 3) ) [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]] >>> list( try2.nblocks( range(9), 3) ) [[0, 1, 2], [3, 4, 5], [6, 7, 8]] >>> map(len, try2.nblocks(range(17*9), 10)) [16, 15, 15, 16, 15, 15, 16, 15, 15, 15] ( Provides better distribution ) Everyone: It seems I wasn’t clear in my explanation of the challenge – it is to write a function that receives as parameter the *number of blocks*, and not the length of the block. I tried to improve the situation in the blog post, by emphasizing the required signature, let me know if you think it’s still unclear. Yeah, my bad. Rename block_len to block_count. It does what you asked, I just didn’t pay enough attention to the naming. For instance: here are my two cents : the idea is to work on “lengths” first: average is length/nb_blocks, and extra, is to be dispatched. Then I cumulate the lengths to turn them in indices that’s the lblock function. the nblock is then straightforward. little improvement. I didn’t get the “even” part ! it uses rounding error to dispatch evenly the extras, this time. oups, sorry, needed a little renaming to keep the code readable Maybe I’m late to the party, but I wrote this last night: Ah ok, hope to understand the problem better, now. A straightforward approach (thanks Thilo): A generator of blocks: So that seq doesn’t have to be a list: Here is my solution: I do believe that there were other solutions prettier than this. What I really liked about this challenge is that it sounds really easy, and even once you define the problem, it takes more time than you expected. I took a small liberty here and relaxed the requirements by saying that nblocksmust return an iterator (which is not necessarily a list). This is the best I came up with:
http://www.algorithm.co.il/blogs/challenges/small-programming-challenge-no-6-nblocks/
CC-MAIN-2016-26
refinedweb
790
72.87
Creating a list using content of active buttons Hello! Currently I am working on a form with many buttons on it (see below). The first 15 buttons become active when they are clicked on with the mouse. In the end, all active buttons will add one point to the total score. However, I do not know afterwards which exact buttons were clicked (I just know how many have been clicked/activated). What I would like to do, is add an function which makes it possible to see the words that have been clicked on. For example, if you click on "Gordijn", "Bril" and "Winkel", that you create a list which contains these words (e.g. wordlist = [Gordijn, Bril, Winkel]). Does anyone know a way how to do this? Thank you in advance! Jenna My inline script so far: from libopensesame import widgets class touch_button(widgets.button): """A form button that needs to be clicked to be triggered.""" def __init__(self, *arglist, **kwdict): """Start inactive.""" widgets.button.__init__(self, *arglist, **kwdict) self._active = False def on_mouse_click(self, pos): # When the touch_button is active already, deactivate it if self._active == True: self._active = False var.correct -= 1 # Otherwise, activate touch_button else: self._active = True var.correct +=1 def draw_frame(self, rect=None, style='normal'): """Draw a normal frame when active, and a plain frame otherwise.""" if self._active: widgets.button.draw_frame(self, rect, style='normal') else: widgets.button.draw_frame(self, rect, style='plain') class incorrect_button(widgets.button): def on_mouse_click(self,pos): var.incorrect +=1 def draw_frame(self, rect=None, style='normal'): widgets.button.draw_frame(self, rect, style='plain') class double_button(widgets.button): def on_mouse_click(self,pos): var.double +=1 def draw_frame(self, rect=None, style='normal'): widgets.button.draw_frame(self, rect, style='plain') class custom_label(widgets.label): def render(self): self.text = 'Incorrect: %d<br />Double: %d' % (var.incorrect, var.double) widgets.label.render(self) class title_label(widgets.label): def render(self): self.text = "Direct Recall %d / 5" % (var.herhaling) widgets.label.render(self) # Create a form form = widgets.form(exp, cols=[1,1,1,1], rows=[1,1,1,1,1,1], margins=(50,50,50,50), spacing=10) woorden = ["Gordijn", "Vogel", "Potlood", "Bril", "Winkel", "Spons", "Rivier", "Kleur", "Fruit", "Plant", "Koffie", "Stoel", "Trommel", "Schoen", "Lucht"] rij = 1 col = 0 counter = 0 for woord in woorden: label = touch_button(form, text = woord) form.set_widget(label, (col,rij)) if str(exp.get("aangeklikt")) == 1: var.wordlist.append(woord) #adds whole string to list var.aangeklikt = 0 if col == 3: rij +=1 col = -1 col +=1 counter +=1 titel = title_label(form) form.set_widget(titel, (0,0), colspan=4) tekst = custom_label(form) incorrect = incorrect_button(form, text='Incorrect') dubbel = double_button(form, text='Dubbel') nextButton = widgets.button(form, text='Volgende') form.set_widget(tekst, (0,5)) form.set_widget(incorrect, (1,5)) form.set_widget(dubbel, (2,5)) form.set_widget(nextButton, (3,5)) form._exec() Hi Jenna, I was trying to run your code, but it doesn't work because some variables that are used are not defined. Would you mind uploading your entire experiment, so that I can have a look? But basically, the idea would be to initialize an empty list in the beginning, and use something like list.append(word), in your on_clickfunction. Does that make sense? Eduard Hi Eduard, Thank you for your response! I hereby send you the entire experiment, hope it works now! I've tried to build a list.append(word) in the on_click function, but it did not work so far.. Hi Jenna, Here you go, I changed the init of your touch_button to set a label of your buttons. This label I include to or remove from a list when it is clicked. Let me know if you need more help. Eudard Thank you, it totally works now!! Hi Eduard, Can I ask you one more question? I have this other form, in which I ask participants if they have heard a certain word before or not (by pressing 'Yes' or 'No'). Sometimes 'Yes' is the correct answer, sometimes 'No' is the correct answer. Is there also a way to include the words (labels) before the buttons when a participant presses 'Yes'? I tried to include these words in a list, but since the word is not in the button itself but before the button this time, I could not figure out how to do this. Thank you in advance!! Hi Jenna, sorry for my late reply. I'm kinda busy these weeks. I haven't implemented what you need, but you should be able to do it yourself. What you need is a common index of Ja/nee button and the label, because you can access the label of the text by label.text. So whenever, you click on ja/nee, you can use the index of the current ja/nee, to access the label with the same index. Maybe you have to make them a feature of each object, but in general, it should work. Do you see what I mean? Eduard Hi Eduard, Thank you for your response! I now figured out how I can access the label of the text by using label.text. However, I can't figure out how to implement the index of the current ja/nee into the button definition. Therefore, I'm not sure how to select only the labels that have received the answer 'Ja'. Could you specify where in the code you would implement the code? Thank you in advance! Jenna HI Jenna, Every ja/nee button has a column and a row index. These indices are unique, so if you match up these indices with the words ( by means of a dictionary for example: dict('boom' = (1,1),'fluit'=(1,2), ...)). You can always find the corresponding word to the index, or index to the word. So every time, a button is active, you add its column and row value to a list and once you are done. You loop through the list and find the corresponding label to these coordinates. Does this makes sense? Eduard Hi Eduard, Thanks again for your help! I now created a dictionary containing the words and positions of the words. However, I can't figured out how to add the column and row values of the buttons to a list. I can only get the coordinates from the buttons (e.g. (-304, -75)). Do you have any idea how to get these values? Thank you in advance! Jenna Hi Jenna, Easy solution: Make a dictionary that maps, the the positions to the coordinates dict((1,)=(-304,-75),(1,2)=(-260,-75), ...). It is a little dirty to hardcode iit, and a lot of typing, but I think it will get the job done! Proper (ugly) solution (see first attachment): I define a sort of global variable (it is not declared as global though), but when Python can't find the variable textsin the class definition it will look for it outside and find it. As this is messing with namespace, it is considered bad style, but as far as I can see, it works for you ( you have to test it though, I wasn't sure exactly what the intended behaviour is). Proper (nice) solution (see 2nd attachment das.osexp): I changed the classes you define a little and pass the indices as arguments to the functions, and basically do the exact same as you do with the labels. Btw. I wasn't sure whether you need the coordiantes of the labels or of the 'yes's and 'no's. Currently, I add the latter, but I'm sure you can figure it out how to change it if necessary. Good luck, Eduard Hi Eduard, Thank you for all your help! All functions work now!! Jenna
http://forum.cogsci.nl/index.php?p=/discussion/3055/creating-a-list-using-content-of-active-buttons
CC-MAIN-2017-47
refinedweb
1,292
68.36
There is an open source program called MSWordView that is able to convert a MS Word 97 table into HTML. I tried version 0.7.4 of abiword and it does not handle tables well (if at all). I have a friend who could end his dual boot days if he had a piece of software on Linux that could import and export tables in MS Word 97 format. Is there a plan to work on tables in the future, which has just not been begun? Didn't there used to be a roadmap of features on the abisource web site? I can't find it. Also, there was a great article at the inception of Abiword about a certain feature that was in Word that would never be included in Abiword. It was a feature that was incredibly obscure and of little use to most people (if anyone). I can't find that article, if anyone knows what I am babbling about. Anyway, back to my original point. MSWordView's home page is at and it is GPL'd so the Abiword developers should be able to use the code to improve the Word 97 import features. Maybe they already know about this code and it is just incompatible with the abiword architecture. I don't know. I'd like to know. Keith Wear
http://www.abisource.com/mailinglists/abiword-user/99/August/0013.html
CC-MAIN-2016-30
refinedweb
225
82.24
*autocmd.txt* Nvim VIM REFERENCE MANUAL by Bram Moolenaar Automatic commands *autocommand* For a basic explanation, see section |40.3| in the user manual. Type <M-]> to see the table of contents. ============================================================================== 1. Introduction *autocmd-intro* You can specify commands to be executed automatically when reading or writing a file, when entering or leaving a buffer or window, and when exiting Vim. For example, you can create an autocommand to set the 'cindent' option for files matching *.c. You can also use autocommands to implement advanced features, such as editing compressed files (see |gzip-example|). The usual place to put autocommands is in your vimrc file. *E203* *E204* *E143* *E855* *E937* WARNING: Using autocommands is very powerful, and may lead to unexpected side effects. Be careful not to destroy your text. - It's a good idea to do some testing on an expendable copy of a file first. For example: If you use autocommands to decompress a file when starting to edit it, make sure that the autocommands for compressing when writing work correctly. - Be prepared for an error halfway through (e.g., disk full). Vim will mostly be able to undo the changes to the buffer, but you may have to clean up the changes to other files by hand (e.g., compress a file that has been decompressed). - If the BufRead* events allow you to edit a compressed file, the FileRead* events should do the same (this makes recovery possible in some rare cases). It's a good idea to use the same autocommands for the File* and Buf* events when possible. ==============================================================================]. Note that special characters (e.g., "%", "<cword>") in the ":autocmd" arguments are not expanded when the autocommand is defined. These will be expanded when the Event is recognized, and the {cmd} is executed. The only exception is that "<sfile>" is expanded when the autocmd is defined. Example: :au BufNewFile,BufRead *.html so <sfile>:h/html.vim Here Vim expands <sfile> to the name of the file containing this line. `:autocmd` adds to the list of autocommands regardless of whether they are already present. When your .vimrc file is sourced twice, the autocommands will appear twice. To avoid this, define your autocommands in a group, so that you can easily clear them: augroup vimrc autocmd! " Remove all vimrc autocommands au BufNewFile,BufRead *.html so <sfile>:h/html.vim augroup END If you don't want to remove all autocommands, you can instead use a variable to ensure that Vim includes the autocommands only once: :if !exists("autocommands_loaded") : let autocommands_loaded = 1 : au ... :endif When the [group] argument is not given, Vim uses the current group (as defined with ":augroup"); otherwise, Vim uses the group defined with [group]. Note that [group] must have been defined before. You cannot define a new group with ":au group ..."; use ":augroup" for that. While testing autocommands, you might find the 'verbose' option to be useful: :set verbose=9 This setting makes Vim echo the autocommands as it executes them. When defining an autocommand in a script, it will be able to call functions local to the script and use mappings local to the script. When the event is triggered and the command executed, it will run in the context of the script it was defined in. This matters if |<SID>| is used in a command. When executing the commands, the message from one command overwrites a previous message. This is different from when executing the commands manually. Mostly the screen will not scroll up, thus there is no hit-enter prompt. When one command outputs two messages this can happen anyway. ============================================================================== 3. Removing autocommands *autocmd-remove* :au[tocmd]! [group] {event} {pat} [nested] {cmd} Remove all autocommands associated with {event} and {pat}, and add the command {cmd}. See |autocmd-nested| for [nested]. :au[tocmd]! [group] {event} {pat} Remove all autocommands associated with {event} and {pat}. :au[tocmd]! [group] * {pat} Remove all autocommands associated with {pat} for all events. :au[tocmd]! [group] {event} Remove ALL autocommands for {event}. Warning: You should not do this without a group for |BufRead| and other common events, it can break plugins, syntax highlighting, etc. :au[tocmd]! [group] Remove ALL autocommands. Warning: You should normally not do this without a group, it breaks plugins, syntax highlighting, etc. When the [group] argument is not given, Vim uses the current group (as defined with ":augroup"); otherwise, Vim uses the group defined with [group]. ============================================================================== 4. Listing autocommands *autocmd-list* :au[tocmd] [group] {event} {pat} Show the autocommands associated with {event} and {pat}. :au[tocmd] [group] * {pat} Show the autocommands associated with {pat} for all events. :au[tocmd] [group] {event} Show all autocommands for {event}. :au[tocmd] [group] Show all autocommands. If you provide the [group] argument, Vim lists only the autocommands for [group]; otherwise, Vim lists the autocommands for ALL groups. Note that this argument behavior differs from that for defining and removing autocommands.* You can specify a comma-separated list of event names. No white space can be used in this list. The command applies to all the events in the list. For READING FILES there are four kinds of events possible: BufNewFile starting to edit a non-existent file BufReadPre BufReadPost starting to edit an existing file FilterReadPre FilterReadPost read the temp file with filter output FileReadPre FileReadPost any other file read Vim uses only one of these four kinds when reading a file. The "Pre" and "Post" events are both triggered, before and after reading the file. Note that the autocommands for the *ReadPre events and all the Filter events are not allowed to change the current buffer (you will get an error message if this happens). This is to prevent the file to be read into the wrong buffer. Note that the 'modified' flag is reset AFTER executing the BufReadPost and BufNewFile autocommands. But when the 'modified' option was set by the autocommands, this doesn't happen. You can use the 'eventignore' option to ignore a number of events or all events. |TermOpen| when a terminal buffer is starting |TermClose| when a terminal buffer ends Options |FileType| when the 'filetype' option has been set |Syntax| when the 'syntax' option has been set shada file |VimLeave| before exiting Vim, after writing the shada file Various |DirChanged| after the |current-directory| was changed |WinEnter| after entering another window |WinLeave| before leaving a window |TabEnter| after entering another tab page |TabLeave| before leaving a tab page |TabNew| when creating a new tab page |TabNewEntered| after entering a new tab page |TabClosed| after closingYankPost| when some text is yanked or deleted * *BufCreate* *BufAdd* BufAdd or BufCreate Just after creating a new buffer which is added to the buffer list, or adding a buffer to the buffer list. Also used just after a buffer in the buffer list has been renamed. The BufCreate event is for historic reasons. NOTE: When this autocommand is executed, the current buffer "%" may be different from the buffer being created "<afile>". *BufDelete* BufDelete Before deleting a buffer from the buffer list. The BufUnload may be called first (if the buffer was loaded). Also used just before a buffer in the buffer list is renamed. NOTE: When this autocommand is executed, the current buffer "%" may be different from the buffer being deleted "<afile>". *BufHidden* BufHidden Just after a buffer has become hidden. That is, when there are no longer windows that show the buffer, but the buffer is not unloaded or deleted. Not used for ":qa" or ":q" when exiting Vim. NOTE: When this autocommand is executed, the current buffer "%" may be different from the buffer being unloaded "<afile>". *BufLeave* BufLeave Before leaving to another buffer. Also when leaving or closing the current window and the new current window is not for the same buffer. Not used for ":qa" or ":q" when exiting Vim. *BufNew* BufNew Just after creating a new buffer. Also used just after a buffer has been renamed. When the buffer is added to the buffer list BufAdd will be triggered too. NOTE: When this autocommand is executed, the current buffer "%" may be different from the buffer being created "<afile>". *BufNewFile* BufNewFile When starting to edit a file that doesn't exist. Can be used to read in a skeleton file. *BufRead* *BufReadPost* BufRead or BufReadPost When starting to edit a new buffer, after reading the file into the buffer, before executing the modelines. See |BufWinEnter| for when you need to do something after processing the modelines. This does NOT work for ":r file". Not used when the file doesn't exist. Also used after successfully recovering a. *BufUnload* BufUnload Before unloading a buffer. This is when the text in the buffer is going to be freed. This may be after a BufWritePost and before a BufDelete. Also used for all buffers that are loaded when Vim is going to exit. NOTE: When this autocommand is executed, the current buffer "%" may be different from the buffer being unloaded "<afile>".. *BufWinLeave* BufWinLeave Before a buffer is removed from a window. Not when it's still visible in another window. Also triggered when exiting. It's triggered before BufUnload or BufHidden. NOTE: When this autocommand is executed, the current buffer "%" may be different from the buffer being unloaded "<afile>". When exiting and v:dying is 2 or more this event is not triggered. *BufWipeout* BufWipeout Before completely deleting a buffer. The BufUnload and BufDelete events may be called first (if the buffer was loaded and was in the buffer list). Also used just before a buffer is renamed (also when it's not in the buffer list). NOTE: When this autocommand is executed, the current buffer "%" may be different from the buffer being deleted "<afile>".|. *CmdwinEnter* CmdwinEnter After entering the command-line window. Useful for setting options specifically for this special type of window. This is triggered _instead_ of BufEnter and WinEnter. <afile> is set to a single character, indicating the type of command-line. |cmdwin-char| *CmdwinLeave* CmdwinLeave Before leaving the command-line window. Useful to clean up any global setting done with CmdwinEnter. This is triggered _instead_ of BufLeave and WinLeave. <afile> is set to a single character, indicating the type of command-line. |cmdwin the completed item. *CursorHold* CursorHold When the user doesn't press a key for the time specified with 'updatetime'. Not re-triggered until the user has pressed a key (i.e. doesn't fire every 'updatetime' ms if you leave Vim to make some coffee. :) See |CursorHold-example| for previewing tags. This event is only triggered in Normal mode.. Note: Interactive commands cannot be used for this event. There is no hit-enter prompt, the screen is updated directly (when needed). Note: In the future there will probably be another option to set the time. Hint: to force an update of the status lines use: :let &ro = &ro . *DirChanged* DirChanged After the |current-directory| was changed. Sets these |v:event| keys: cwd: current working directory scope: "global", "tab", "window" Recursion is ignored. regains input focus. This autocommand is triggered for each changed file. It is not used when 'autoread' is set and the buffer was not changed. If a FileChangedShell autocommand is present the warning message and prompt is not given.. *FocusGained* FocusGained When Vim got input focus. Only for the GUI version and a few console versions where this can be detected. *FocusLost* FocusLost When Vim lost input focus. Only for the GUI version and a few console versions where this can be detected.IEnter* GUIEnter After starting the GUI successfully, and after opening the window. It is triggered before VimEnter when using gvim. Can be used to position the window from a gvimrc file: :autocmd GUIEnter * winpos 100 50 . *TextYankPost* TextYankPost Just after a |yank| or |deleting| command, but not if the black hole register |quote_| is used nor for |setreg()|. Pattern must be *. Sets these |v:event| keys: operator regcontents regname regtype Recursion is ignored. It is not allowed to change the text |textlock|. |, |:make| and |:grep|. Can be used to check for any changed files. For non-blocking shell commands, see |job-control|. Enter* TabEnter Just after entering a tab page. |tab-page| After triggering the WinEnter and before triggering the BufEnter event. *TabLeave* TabLeave Just before leaving a tab page. |tab-page| A WinLeave event will have been triggered first. {Nvim} *TabNew* TabNew When creating a new tab page. |tab-page| After WinEnter and before TabEnter. {Nvim} *TabNewEntered* TabNewEntered After entering a new tab page. |tab-page| After BufEnter. {Nvim} *TabClosed* TabClosed After closing a tab page. <afile> can be used for the tab page number. *TermChanged* TermChanged After the value of 'term' has changed. Useful for re-loading the syntax file to update the colors, fonts and other terminal-dependent settings. Executed for all loaded buffers. {Nvim} *TermClose* TermClose When a terminal buffer ends. {Nvim} *TermOpen* TermOpen When a terminal buffer is starting. This can be used to configure the terminal emulator by setting buffer variables. |terminal| *TermResponse* TermResponse After the response to |t_RV| is received from the terminal. The value of |v:termresponse| can be used to do things depending on the terminal version. Note that this event may be triggered halfway through another event (especially if file I/O, a shell command, or anything else that takes time is involved). .shada file. Executed only once, like VimLeavePre. < Use |v:dying| to detect an abnormal exit. Use |v:exiting| to get the exit code. Not triggered if |v:dying| is 2 or more. *VimLeavePre* VimLeavePre Before exiting Vim, just before writing the .shada file. This is executed only once, if there is a match with the name of what happens to be the current buffer when exiting. Mostly useful with a "*" pattern. :autocmd VimLeavePre * call CleanupStuff() Use |v:dying| to detect an abnormal exit. Use |v:exiting| to get the exit code. Not triggered if |v:dying| is 2 or more. *VimResized* VimResized After the Vim window was resized, thus 'lines' and/or 'columns' changed. Not when starting up though. *WinEnter* WinEnter After entering another window. Not done for the first window, when Vim has just started. Useful for setting the window height. If the window is for another buffer, Vim executes the BufEnter autocommands after the WinEnter autocommands. Note: When using ":split fname" the WinEnter event is triggered after the split but before the file "fname" is loaded. *WinLeave* WinLeave Before leaving a window. If the window to be entered next is for a different buffer, Vim executes the BufLeave autocommands before the WinLeave autocommands (but not for ":new"). Not used for ":qa" or ":q" when exiting Vim. *WinNew* WinNew When a new window was created. Not done for the first The file pattern {pat} is tested for a match against the file name in one of two ways: 1. When there is no '/' in the pattern, Vim checks for a match against only the tail part of the file name (without its leading directory path). 2. When there is a '/' in the pattern, Vim checks for a match against. Examples: :autocmd BufRead *.txt set et Set the 'et' option for all text files. :autocmd BufRead /vim/src/*.c set cindent Set the 'cindent' option for C files in the /vim/src directory. :autocmd BufRead /tmp/*.c set ts=5 If you have a link from "/tmp/test.c" to "/home/nobody/vim/src/test.c", and you start editing "/tmp/test.c", this autocommand will match. Note: To match part of a path, but not from the root directory, use a '*' as the first character. Example: :autocmd BufRead */doc/*.txt set tw=78 This autocommand will for example be executed for "/tmp/doc/xx.txt" and "/usr/home/piet/doc/yy.txt". The number of directories does not matter here. The file name that the pattern is matched against is after expanding wildcards. Thus if you issue this command: :e $ROOTDIR/main.$EXT The argument is first expanded to: /usr/root/main.py Before it's matched with the pattern of the autocommand. Careful with this when using events like FileReadCmd, the value of <amatch> may not be what you expect. Environment variables can be used in a pattern: :autocmd BufRead $VIMRUNTIME/doc/*.txt set expandtab And ~ can be used for the home directory (if $HOME is defined): :autocmd BufWritePost ~/.config/nvim/init.vim so <afile> :autocmd BufRead ~archive/* set readonly The environment variable is expanded when the autocommand is defined, not when the autocommand is executed. This is different from the command! *file-pattern* The pattern is interpreted like mostly used in file names: * matches any sequence of characters; Unusual: includes path separators ? matches any single character \? matches a '?' . matches a '.' ~ matches a '~' , separates patterns \, matches a ',' { } like \( \) in a |pattern| , inside { }: like \| in a |pattern|||| \} literal } \{ literal { \\\{n,m\} like \{n,m} in a |pattern| \ special meaning like in a |pattern| [ch] matches 'c' or 'h' [^ch] match any character but 'c' and 'h' Note that for all systems the '/' character is used for path separator (even Windows). This was done because the backslash is difficult to use in a pattern and to make the autocommands portable across different systems. It is possible to use |pattern| items, but they may not work as expected, because of the translation done for the above. *autocmd-changes* Matching with the pattern is done when an event is triggered. Changing the buffer name in one of the autocommands, or even deleting the buffer, does not change which autocommands will be executed. Example: au BufEnter *.foo bdel au BufEnter *.foo set modified This will delete the current buffer and then set 'modified' in what has become the current buffer instead. Vim doesn't take into account that "*.foo" doesn't match with that buffer name. It matches "*.foo" with the name of the buffer at the moment the event was triggered.. ============================================================================== 8. Groups *autocmd-groups* Autocommands can be put together in a group. This is useful for removing or executing a group of autocommands. For example, all the autocommands for syntax highlighting are put in the "highlight" group, to be able to execute ":doautoall highlight BufRead" when the GUI starts. When no specific group is selected, Vim uses the default group. The default group does not have a name. You cannot execute the autocommands from the default group separately; you can execute them only by executing autocommands for all groups. Normally, when executing autocommands automatically, Vim uses the autocommands for all groups. The group only matters when executing autocommands with ":doautocmd" or ":doautoall", or when defining or deleting autocommands. The group name can contain any characters except white space. The group name "end" is reserved (also in uppercase). The group name is case sensitive. Note that this is different from the event name! *:aug* *:augroup* :aug[roup] {name} Define the autocmd group name for the following ":autocmd" commands. The name "end" or "END" selects the default group.. To enter autocommands for a specific group, use this method: 1. Select the group with ":augroup {name}". 2. Delete any old autocommands with ":au!". 3. Define the autocommands. 4. Go back to the default group with "augroup END". Example: :augroup uncompress : au! : au BufEnter *.gz %!gunzip :augroup END This prevents having the autocommands defined twice (e.g., after sourcing the vimrc file again). ============================================================================== 9. Executing autocommands *autocmd-execute* Vim can also execute Autocommands non-automatically. This is useful if you have changed autocommands, or when Vim has executed the wrong autocommands (e.g., the file pattern match was wrong). Note that the 'eventignore' option applies here too. Events listed in this option will not cause any commands to be executed. *:do* *:doau* *:doautocmd* *E217* :do[autocmd] [<nomodeline>] [group] {event} [fname] Apply the autocommands matching [fname] (default: current file name) for {event} to the current buffer. You can use this when the current file name does not match the right pattern, after changing settings, or to execute autocommands for a certain event. It's possible to use this inside an autocommand too, so you can base the autocommands for one extension on another extension. Example: :au BufEnter *.cpp so ~/.config/nvim/init_cpp.vim :au BufEnter *.cpp doau BufEnter x.c Be careful to avoid endless loops. See |autocmd-nested|. When the [group] argument is not given, Vim executes the autocommands for all groups. When the [group] argument is included, Vim executes only the matching autocommands for that group. Note: if you use an undefined group name, Vim gives you an error message. *>] [group] {event} [fname] Like ":doautocmd", but apply the autocommands to each loaded buffer. Note that [fname] is used to select the autocommands, not the buffers to which they are applied. Careful: Don't use this for autocommands that delete a buffer, change to another buffer or change the contents of a buffer; the result is unpredictable. This command is intended for autocommands that set options, change highlighting, and things like that. ============================================================================== 10. Using autocommands *autocmd-use* For WRITING FILES there are four possible sets of events. Vim uses only one of these sets for a write command: BufWriteCmd BufWritePre BufWritePost writing the whole buffer FilterWritePre FilterWritePost writing to filter temp file FileAppendCmd FileAppendPre FileAppendPost appending to a file FileWriteCmd FileWritePre FileWritePost any other file write When there is a matching "*Cmd" autocommand, it is assumed it will do the writing. No further writing is done and the other events are not triggered. |Cmd-event| Note that the *WritePost commands should undo any changes to the buffer that were caused by the *WritePre commands; otherwise, writing the file will have the side effect of changing the buffer. Before executing the autocommands, the buffer from which the lines are to be written temporarily becomes the current buffer. Unless the autocommands change the current buffer or delete the previously current buffer, the previously current buffer is made the current buffer again. The *WritePre and *AppendPre autocommands must not delete the buffer from which the lines are to be written. The '[ and '] marks have a special position: - Before the *ReadPre event the '[ mark is set to the line just above where the new lines will be inserted. - Before the *ReadPost event the '[ mark is set to the first line that was just read, the '] mark to the last line. - Before executing the *WriteCmd, *WritePre and *AppendPre autocommands the '[ mark is set to the first line that will be written, the '] mark to the last line. Careful: '[ and '] change when using commands that change the buffer. In commands which expect a file name, you can use "<afile>" for the file name that is being read |:<afile>| (you can also use "%" for the current file name). "<abuf>" can be used for the buffer number of the currently effective buffer. This also works for buffers that doesn't have a name. But it doesn't work for files without a buffer (e.g., with ":r file"). *gzip-example* Examples for reading and writing compressed files: :augroup gzip : autocmd! : autocmd BufReadPre,FileReadPre *.gz set bin : autocmd BufReadPost,FileReadPost *.gz '[,']!gunzip : autocmd BufReadPost,FileReadPost *.gz set nobin : autocmd BufReadPost,FileReadPost *.gz execute ":doautocmd BufReadPost " . expand("%:r") : :augroup END The "gzip" group is used to be able to delete any existing autocommands with ":autocmd!", for when the file is sourced twice. ("<afile>:r" is the file name without the extension, see |:_%:|) The commands executed for the BufNewFile, BufRead/BufReadPost, BufWritePost, FileAppendPost and VimLeave events do not set or reset the changed flag of the buffer. When you decompress the buffer with the BufReadPost autocommands, you can still exit with ":q". When you use ":undo" in BufWritePost to undo the changes made by BufWritePre commands, you can still do ":q" (this also makes "ZZ" work). If you do want the buffer to be marked as modified, set the 'modified' option. To execute Normal mode commands from an autocommand, use the ":normal" command. Use with care! If the Normal mode command is not finished, the user needs to type characters (e.g., after ":normal m" you need to type a mark name). If you want the buffer to be unmodified after changing it, reset the 'modified' option. This makes it possible to exit the buffer with ":q" instead of ":q!". *autocmd-nested* *E218* By default, autocommands do not nest. If you use ":e" or ":w" in an autocommand, Vim does not execute the BufRead and BufWrite autocommands for those commands. If you do want this, use the "nested" flag for those commands in which you want nesting. For example: :autocmd FileChangedShell *.c nested e! The nesting is limited to 10 levels to get out of recursive loops. It's possible to use the ":au" command in an autocommand. This can be a self-modifying command! This can be useful for an autocommand that should execute only once. If you want to skip autocommands for one command, use the |:noautocmd| command modifier or the 'eventignore' option. Note: When reading a file (with ":read file" or with a filter command) and the last line in the file does not have an <EOL>, Vim remembers this. At the next write (with ":write file" or with a filter command), if the same line is written again as the last line in a file AND 'binary' is set, Vim does not supply an <EOL>. This makes a filter command on the just read lines write the same file as was read, and makes a write command on just filtered lines write the same file as was read from the filter. For example, another way to write a compressed file: :autocmd FileWritePre *.gz set bin|'[,']!gzip :autocmd FileWritePost *.gz undo|set nobin *autocommand-pattern* You can specify multiple patterns, separated by commas. Here are some examples: :autocmd BufRead * set tw=79 nocin ic infercase fo=2croq :autocmd BufRead .letter set tw=72 fo=2tcrq :autocmd BufEnter .letter set dict=/usr/lib/dict/words :autocmd BufLeave .letter set dict= :autocmd BufRead,BufNewFile *.c,*.h set tw=0 cin noic :autocmd BufEnter *.c,*.h abbr FOR for (i = 0; i < 3; ++i)<CR>{<CR>}<Esc>O :autocmd BufLeave *.c,*.h unabbr FOR For makefiles (makefile, Makefile, imakefile, makefile.unix, etc.): :autocmd BufEnter ?akefile* set include=^s\=include :autocmd BufLeave ?akefile* set include& To always start editing C files at the first function: :autocmd BufRead *.c,*.h 1;/^{ Without the "1;" above, the search would start from wherever the file was entered, rather than from the start of the file. *skeleton* *template* To read a skeleton (template) file when opening a new file: :autocmd BufNewFile *.c 0r ~/vim/skeleton.c :autocmd BufNewFile *.h 0r ~/vim/skeleton.h :autocmd BufNewFile *.java 0r ~/vim/skeleton.java To insert the current date and time in a *.html file when writing it: :autocmd BufWritePre,FileWritePre *.html ks|call LastMod()|'s :fun LastMod() : if line("$") > 20 : let l = 20 : else : let l = line("$") : endif : exe "1," . l . "g/Last modified: /s/Last modified: .*/Last modified: " . : \ strftime("%Y %b %d") :endfun You need to have a line "Last modified: <date time>" in the first 20 lines of the file for this to work. Vim replaces <date time> (and anything in the same line after it) with the current date and time. Explanation: ks mark current position with mark 's' call LastMod() call the LastMod() function to do the work 's return the cursor to the old position The LastMod() function checks if the file is shorter than 20 lines, and then uses the ":g" command to find lines that contain "Last modified: ". For those lines the ":s" command is executed to replace the existing date with the current one. The ":execute" command is used to be able to use an expression for the ":g" and ":s" commands. The date is obtained with the strftime() function. You can change its argument to get another date string. When entering :autocmd on the command-line, completion of events and command names may be done (with <Tab>, CTRL-D, etc.) where appropriate. Vim executes all matching autocommands in the order that you specify them. It is recommended that your first autocommand be used for all files by using "*" as the file pattern. This means that you can define defaults you like here for any settings, and if there is another matching autocommand it will override these. But if there is no other matching autocommand, then at least your default settings are recovered (if entering this file from another for which autocommands did match). Note that "*" will also match files starting with ".", unlike Unix shells. *autocmd-searchpat* Autocommands do not change the current search patterns. Vim saves the current search patterns before executing autocommands then restores them after the autocommands finish. This means that autocommands do not affect the strings highlighted with the 'hlsearch' option. Within autocommands, you can still use search patterns normally, e.g., with the "n" command. If you want an autocommand to set the search pattern, such that it is used after the autocommand finishes, use the ":let @/ =" command. The search-highlighting cannot be switched off with ":nohlsearch" in an autocommand. Use the 'h' flag in the 'shada' option to disable search- highlighting when starting Vim. *Cmd-event* When using one of the "*Cmd" events, the matching autocommands are expected to do the file reading, writing or sourcing. This can be used when working with a special kind of file, for example on a remote system. CAREFUL: If you use these events in a wrong way, it may have the effect of making it impossible to read or write the matching files! Make sure you test your autocommands properly. Best is to use a pattern that will never match a normal file name, for example "ftp://*". When defining a BufReadCmd it will be difficult for Vim to recover a crashed editing session. When recovering from the original file, Vim reads only those parts of a file that are not found in the swap file. Since that is not possible with a BufReadCmd, use the |:preserve| command to make sure the original file isn't needed for recovery. You might want to do this only when you expect the file to be modified.
https://neovim.io/doc/user/autocmd.html
CC-MAIN-2017-43
refinedweb
5,007
66.64
text or texture label on screen. Labels have no user interaction, do not catch mouse clicks and are always rendered in normal style. If you want to make a control that responds visually to user input, use a Box control. Example: Draw the classic Hello World! string: Text label on the Game View. using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { void OnGUI() { GUI.Label(new Rect(10, 10, 100, 20), "Hello World!"); } } Example: Draw a texture on-screen. Labels are also used to display textures, instead of a string, simply pass in a texture: Texture Label. using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { public Texture2D textureToDisplay; void OnGUI() { GUI.Label(new Rect(10, 40, textureToDisplay.width, textureToDisplay.height), textureToDisplay); } }
https://docs.unity3d.com/2019.4/Documentation/ScriptReference/GUI.Label.html
CC-MAIN-2021-04
refinedweb
123
54.29
To the software development, naming convention is very important for maintaining code easily, reducing time to read code, … So, when we work on web development, naming convention in css is needed to know. In this article, we will find out about BEM methodology for CSS. All this information is referenced from. Table of contents Introduction to BEM BEM methodology, an abbreviation of Block, Element, Modifier, is a popular naming convention for classes in html and css. It developed by the team at Yandex, its goal is to help developers better understand the relationship between the html and css in a given project. Block A block represents an object in our website. Most common blocks on website are header, body, content, sidebar, footer, column in the grid, and search. It means that a block must contain at least one element inside it. Blocks in BEM are always a starting point of chaning our CSS classes. For example: .header { ... } .container { ... } .footer { ... } Element An element is a component within the block that performs a particular function. For example: .header__logo { ... } .header__search { ... } Modifier A modifier is how we represent the variations of a block. For example: .header__logo--red { ... } .header__search__input--icon { ... } Naming conventions Blockname is usually a single word like .header, but if we have longer block definition then it is divided with a single hyphen -. .lang-switcher { ... } Elementname starts with double underscore __. .lang-switcher__flag { ... } Modifier name starts with double hyphen --. .lang-switcher__flag--basic { ... } Rule for modifier: a modifier can not used outside of the context of its owner. Some examples about modifier can be saw in Bootstrap framework. Example about BEM Use BEM with CSS preprocessor .person { &__hand {/* Styles */} &__leg {/* Styles */} &--male { /* Styles */ &__hand { /* Styles */ &--left {/* Styles */} &--right {/* Styles */} } &__leg { /* Styles */ &--left {/* Styles */} &--right {/* Styles */} } } &--female { /* Styles */ &__hand { /* Styles */ &--left {/* Styles */} &--right {/* Styles */} } &__leg { /* Styles */ &--left {/* Styles */} &--right {/* Styles */} } } } If we use mixinfor BEM, we have: /// Block Element /// @param {String} $element - Element's name @mixin element($element) { &__#{$element} { @content; } } /// Block Modifier /// @param {String} $modifier - Modifier's name @mixin modifier($modifier) { &--#{$modifier} { @content; } } Then, we have a result for this mixin: .person { @include element('hand') {/* Person hand */} @include element('leg') {/* Person leg */} @include modifier('male') { /* Person male */ @include element('hand') { /* Person male hand */ @include modifier('left') { /* Person male left hand */ } @include modifier('right') { /* Person male right hand */ } } } } Use namespace techniques in BEM From this link, we will have some namespace techniques for BEM. l-: signify that something is a Layout module. These modules have no cosmetics and are purely used to position c-components and structure an application’s layout. .l-grid { ... } .l-container { ... }. .c-card { ... } .c-checklist { ... } u-: signify that this class is a Utility class. It has a very specific role (often providing only one declaration) and should not be bound onto or changed. It can be reused and. It also indicates different states that a c-component can have. .is-visible { ... } .has-loaded { ... } _:. js-tab-switcher qa-: signify that a QA or Test Engineering team is running an automated UI test which needs to find or bind onto these parts of the DOM. Like the JavaScript namespace, this basically just reserves hooks in the DOM for non-CSS purposes. To go the details into namespace techniques, visit webiste. For example: <ul class="l-grid"> <li class="l-grid__item"> > </li> ... </ul> Wrapping up - The best practice is to use BEM only with classes, and not IDs because classes allow us to repeat names if necessary and create more consistent coding structure. Refer:
https://ducmanhphan.github.io/2019-02-23-Use-BEM-with-CSS/
CC-MAIN-2021-25
refinedweb
581
53.51
Last time around, I suggested, “Let’s see if I can talk about something other than rendering text next time around…” It looks like the answer to that one is no. Windows Vista has reached Beta 2, and we are already beginning to see documentation around lighting up your applications on Windows Vista. One of the ways to make your application feel more like Vista is to use fonts consistent with the UI: Segoe UI. The fonts section of the Windows Vista User Experience Guideliness discusses using Segoe UI – the system font. This documentation specifies, “From code, you can determine the system font properties (including its size) using the GetThemeFont API function.” Unfortunately, this is all of the direction that it provides. Anybody who has played around with the GetThemeFont API realizes that this is easier said than done. There are a respectably large number of permutations of classes, parts, and states, and anybody who is working in managed code will need to rifle through include files from the Platform SDK to locate all of the values to pass in. Once you have that working, you will then discover that the vast majority of permutations do not contain a font value at all, and the documentation is pretty much silent on the topic of which values to use to improve your chances of success, making this all the more challenging to the developer. You can avoid a significant amount of p/invoke code by using the Visual Styles API – the VisualStylesRenderer class provides a GetFont method that wraps the unmanaged GetThemeFont API for you, but you are still left with the problem of locating a permutation of class, part, and state that return a Font at all, let alone the one you are hoping to use! Personally, I find it much easier to use the GetThemeSysFont API to retrieve the system font. With this API, you specify whether you want to retreive the caption, small caption, menu, status, message box, or icon title font. Personally, I use the message box font, given that this font is also designed to display readable, sentence-formatted text. Lets’ take a look at how this looks: GetThemeSysFont on Windows XP SP-2 GetThemeSysFont on Windows Vista Beta 2 The source code to construct this simple example follows: namespace GetThemeSysFont { using System; using System.Drawing; using System.Runtime.InteropServices; using System.Windows.Forms; using System.Windows.Forms.VisualStyles; public partial class Form1 : Form { #region p/invoke declarations [DllImport("uxtheme.dll", ExactSpelling = true, CharSet = CharSet.Unicode)] private static extern IntPtr OpenThemeData(IntPtr hWnd, String classList); [DllImport("uxtheme", ExactSpelling = true, CharSet = CharSet.Unicode)] private extern static Int32 GetThemeSysFont(IntPtr hTheme, int iFontId, out LOGFONT plf); ; } private const int TMT_CAPTIONFONT = 801; private const int TMT_SMALLCAPTIONFONT = 802; private const int TMT_MENUFONT = 803; private const int TMT_STATUSFONT = 804; private const int TMT_MSGBOXFONT = 805; private const int TMT_ICONTITLEFONT = 806; #endregion public Form1() { InitializeComponent(); Font themeFont = messageBoxFont(); platformLabel.Font = themeFont; platformLabel.Text = Environment.OSVersion.VersionString; themeFontLabel.Font = themeFont; themeFontLabel.Text = "Message Box Font: " + themeFont.Name + " " + themeFont.SizeInPoints + "pt"; } private Font messageBoxFont() { // You can avoid p/invoke altogether in managed code // using the following: // return SystemFonts.MessageBoxFont; LOGFONT pFont; IntPtr hTheme = OpenThemeData(this.Handle, "WINDOW"); GetThemeSysFont(hTheme, TMT_MSGBOXFONT, out pFont); return Font.FromLogFont(pFont); } } } This approach will work well in both managed code and unmanaged code. Unfortunately, I am not aware of a wrapper in the managed Visual Styles APIs for GetThemeSysFont, so I went the p/invoke route. Are there other alternatives to getting the system font? Yes there are! (If you read the commented out code, you will even see one – we’ll get to that one in a bit.) Well, we are getting the message box font. If you poke around the NONCLIENTMETRICS structure you can retrieve from a call to SystemParametersInfo, you will notice that there is a lfMessageFont LOGFONT that returns the message box font. This works in both managed and unmanaged code, and has the distinct advantage of not taking a dependency on uxtheme.dll – which you will only find on Windows XP and above. What about nice shortcuts in the managed world? The SystemFonts class, by its very name, suggests that it will return the correct system font. Unfortunately, if you go with the most obvious choice, the DefaultFont property, you will be handed a font that is very clearly not Segoe UI. What has happened here? Well, internally, the DefaultFont property has a very strong tendency to return the hard-coded value of Tahoma (depending on your locale). Otherwise, you get the outcome GetStockObject(DEFAULT_GUI_FONT). Wait a minute, the DEFAULT_GUI_FONT? The one Raymond Chen describes as, .”? Yes, that’s the one. OK, so DefaultFont isn’t a particularly good choice. That’s unfortunate, because it just “feels” like it should be the right choice. However, the MessageBoxFont property does, indeed, wrap the SystemParametersInfo function with an argument of NONCLIENTMETRICS, giving you a straightforward way to go after this font without having to use p/invoke. So, in the end, MessageBoxFont is probably the easiest option to go after Segoe UI on Vista using managed code, while providing acceptable results on existing platforms. Very usefull article. It help me to create LOGFONT structure correctly in my programm. Use "MS Shell Dlg" as font name, then all Windows versions will substitute this font name with the right font name (also in Japan, Russia,…). See
https://blogs.msdn.microsoft.com/cjacks/2006/06/02/light-up-your-fonts-on-vista-selecting-segoe-ui-using-getthemesysfont-nonclientmetrics-or-systemfonts-messageboxfont/
CC-MAIN-2016-40
refinedweb
897
54.83
Each Answer to this Q is separated by one/two green lines. I’d like to do something like this: class X: @classmethod def id(cls): return cls.__name__ def id(self): return self.__class__.__name__ And now call id() for either the class or an instance of it: >>> X.id() 'X' >>> X().id() 'X' Obviously, this exact code doesn’t work, but is there a similar way to make it work? Or any other workarounds to get such behavior without too much “hacky” stuff? Class and instance methods live in the same namespace and you cannot reuse names like that; the last definition of id will win in that case. The class method will continue to work on instances however, there is no need to create a separate instance method; just use: class X: @classmethod def id(cls): return cls.__name__ because the method continues to be bound to the class: >>> class X: ... @classmethod ... def id(cls): ... return cls.__name__ ... >>> X.id() 'X' >>> X().id() 'X' This is explicitly documented: It can be called either on the class (such as C.f()) or on an instance (such as C().f()). The instance is ignored except for its class. If you do need distinguish between binding to the class and an instance If you need a method to work differently based on where it is being used on; bound to a class when accessed on the class, bound to the instance when accessed on the instance, you’ll need to create a custom descriptor object. The descriptor API is how Python causes functions to be bound as methods, and bind classmethod objects to the class; see the descriptor howto. You can provide your own descriptor for methods by creating an object that has a __get__ method. Here is a simple one that switches what the method is bound to based on context, if the first argument to __get__ is None, then the descriptor is being bound to a class, otherwise it is being bound to an instance: class class_or_instancemethod(classmethod): def __get__(self, instance, type_): descr_get = super().__get__ if instance is None else self.__func__.__get__ return descr_get(instance, type_) This re-uses classmethod and only re-defines how it handles binding, delegating the original implementation for instance is None, and to the standard function __get__ implementation otherwise. Note that in the method itself, you may then have to test, what it is bound to. isinstance(firstargument, type) is a good test for this: >>> class X: ... @class_or_instancemethod ... def foo(self_or_cls): ... if isinstance(self_or_cls, type): ... return f"bound to the class, {self_or_cls}" ... else: ... return f"bound to the instance, {self_or_cls" ... >>> X.foo() "bound to the class, <class '__main__.X'>" >>> X().foo() 'bound to the instance, <__main__.X object at 0x10ac7d580>' An alternative implementation could use two functions, one for when bound to a class, the other when bound to an instance: class hybridmethod: def __init__(self, fclass, finstance=None, doc=None): self.fclass = fclass self.finstance = finstance self.__doc__ = doc or fclass.__doc__ # support use on abstract base classes self.__isabstractmethod__ = bool( getattr(fclass, '__isabstractmethod__', False) ) def classmethod(self, fclass): return type(self)(fclass, self.finstance, None) def instancemethod(self, finstance): return type(self)(self.fclass, finstance, self.__doc__) def __get__(self, instance, cls): if instance is None or self.finstance is None: # either bound to the class, or no instance method available return self.fclass.__get__(cls, None) return self.finstance.__get__(instance, cls) This then is a classmethod with an optional instance method. Use it like you’d use a property object; decorate the instance method with @<name>.instancemethod: >>> class X: ... @hybridmethod ... def bar(cls): ... return f"bound to the class, {cls}" ... @bar.instancemethod ... def bar(self): ... return f"bound to the instance, {self}" ... >>> X.bar() "bound to the class, <class '__main__.X'>" >>> X().bar() 'bound to the instance, <__main__.X object at 0x10a010f70>' Personally, my advice is to be cautious about using this; the exact same method altering behaviour based on the context can be confusing to use. However, there are use-cases for this, such as SQLAlchemy’s differentiation between SQL objects and SQL values, where column objects in a model switch behaviour like this; see their Hybrid Attributes documentation. The implementation for this follows the exact same pattern as my hybridmethod class above. I have no idea what’s your actual use case is, but you can do something like this using a descriptor: class Desc(object): def __get__(self, ins, typ): if ins is None: print 'Called by a class.' return lambda : typ.__name__ else: print 'Called by an instance.' return lambda : ins.__class__.__name__ class X(object): id = Desc() x = X() print x.id() print X.id() Output Called by an instance. X Called by a class. X It can be done, quite succinctly, by binding the instance-bound version of your method explicitly to the instance (rather than to the class). Python will invoke the instance attribute found in Class().__dict__ when Class().foo() is called (because it searches the instance’s __dict__ before the class’), and the class-bound method found in Class.__dict__ when Class.foo() is called. This has a number of potential use cases, though whether they are anti-patterns is open for debate: class Test: def __init__(self): self.check = self.__check @staticmethod def check(): print('Called as class') def __check(self): print('Called as instance, probably') >>> Test.check() Called as class >>> Test().check() Called as instance, probably Or… let’s say we want to be able to abuse stuff like map(): class Str(str): def __init__(self, *args): self.split = self.__split @staticmethod def split(sep=None, maxsplit=-1): return lambda string: string.split(sep, maxsplit) def __split(self, sep=None, maxsplit=-1): return super().split(sep, maxsplit) >>> s = Str('w-o-w') >>> s.split('-') ['w', 'o', 'w'] >>> Str.split('-')(s) ['w', 'o', 'w'] >>> list(map(Str.split('-'), [s]*3)) [['w', 'o', 'w'], ['w', 'o', 'w'], ['w', 'o', 'w']] “types” provides something quite interesting since Python 3.4: DynamicClassAttribute It is not doing 100% of what you had in mind, but it seems to be closely related, and you might need to tweak a bit my metaclass but, rougly, you can have this; from types import DynamicClassAttribute class XMeta(type): def __getattr__(self, value): if value == 'id': return XMeta.id # You may want to change a bit that line. @property def id(self): return "Class {}".format(self.__name__) That would define your class attribute. For the instance attribute: class X(metaclass=XMeta): @DynamicClassAttribute def id(self): return "Instance {}".format(self.__class__.__name__) It might be a bit overkill especially if you want to stay away from metaclasses. It’s a trick I’d like to explore on my side, so I just wanted to share this hidden jewel, in case you can polish it and make it shine! >>> X().id 'Instance X' >>> X.id 'Class X' Voila… In your example, you could simply delete the second method entirely, since both the staticmethod and the class method do the same thing. If you wanted them to do different things: class X: def id(self=None): if self is None: # It's being called as a static method else: # It's being called as an instance method (Python 3 only) Elaborating on the idea of a pure-Python implementation of @classmethod, we can declare an @class_or_instance_method as a decorator, which is actually a class implementing the attribute descriptor protocol: import inspect class class_or_instance_method(object): def __init__(self, f): self.f = f def __get__(self, instance, owner): if instance is not None: class_or_instance = instance else: class_or_instance = owner def newfunc(*args, **kwargs): return self.f(class_or_instance, *args, **kwargs) return newfunc class A: @class_or_instance_method def foo(self_or_cls, a, b, c=None): if inspect.isclass(self_or_cls): print("Called as a class method") else: print("Called as an instance method")
https://techstalking.com/programming/python/same-name-for-classmethod-and-instancemethod/
CC-MAIN-2022-40
refinedweb
1,306
66.23
[Solved] Problem with dll files during compiling Hello, I can't compile an easy code with Visual basic 2012 and qt for 32 bits. Some error messages with dll come to stop the process. I have already tried to add QT += core gui, QT += widgets, QT += sql and DEFINES += QT_NODLL in my .pro file, but it didn't change anything. My libpath is: D:\Qt\5.2.1\msvc2012\lib\cmake\Qt5Gui;D:\Qt\5.2.1\msvc2012\lib\cmake\Qt5Sql;D:\Qt\5.2.1\msvc2012\lib\cmake;D:\Qt\5.2.1\msvc2012\lib\cmake\Qt5Declarative;D:\Qt\5.2.1\msvc2012\lib\cmake\Qt5Designer;D:\Qt\5.2.1\msvc2012\lib\cmake\Qt5Test;D:\Qt\5.2.1\msvc2012\lib\cmake\Qt5;D:\Qt\5.2.1\msvc2012\lib\cmake\Qt5Widgets;D:\Qt\5.2.1\msvc2012\lib\cmake\Qt5Core;D:\Qt\5.2.1\msvc2012\bin;D:\Qt\5.2.1\msvc2012\lib;$(LibraryPath) The output message is: @cl -c -nologo -Zm200 -Zc:wchar_t -Zi -MDd -GR -W3 -w34100 -w34189 -EHsc /Fddebug\Project1.pdb -DUNICODE -DWIN32 -DQT_GUI_LIB -DQT_CORE_LIB -DQT_OPENGL_ES_2 -DQT_OPENGL_ES_2_ANGLE -I"." -I"D:\Qt\5.2.1\msvc2012\include" -I"D:\Qt\5.2.1\msvc2012\include\QtGui" -I"D:\Qt\5.2.1\msvc2012\include\QtANGLE" -I"D:\Qt\5.2.1\msvc2012\include\QtCore" -I"debug" -I"D:\Qt\5.2.1\msvc2012\mkspecs\win32-msvc2012" -Fodebug\ @D:\Users\fpellecc\AppData\Local\Temp\nm29B0.tmp 1> main.cpp 1> echo 1 /* CREATEPROCESS_MANIFEST_RESOURCE_ID / 24 / RT_MANIFEST / "debug\Project1.exe.embed.manifest">debug\Project1.exe_manifest.rc 1> if not exist debug\Project1.exe if exist debug\Project1.exe.embed.manifest del debug\Project1.exe.embed.manifest 1> if exist debug\Project1.exe.embed.manifest copy /Y debug\Project1.exe.embed.manifest debug\Project1.exe_manifest.bak 1> link /NOLOGO /DYNAMICBASE /NXCOMPAT /DEBUG /SUBSYSTEM:WINDOWS "/MANIFESTDEPENDENCY:type='win32' name='Microsoft.Windows.Common-Controls' version='6.0.0.0' publicKeyToken='6595b64144ccf1df' language='' processorArchitecture='*'" /MANIFEST /MANIFESTFILE:debug\Project1.exe.embed.manifest /OUT:debug\Project1.exe @D:\Users\fpellecc\AppData\Local\Temp\nm5755.tmp 1>main.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: __thiscall QApplication::QApplication(int &,char * *,int)" (_imp??0QApplication@@QAE@AAHPAPADH@Z) referenced in function _main 1>main.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: virtual __thiscall QApplication::~QApplication(void)" (_imp??1QApplication@@UAE@XZ) referenced in function _main 1>main.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: static int __cdecl QApplication::exec(void)" (_imp?exec@QApplication@@SAHXZ) referenced in function _main 1>debug\Project1.exe : fatal error LNK1120: 3 unresolved externals 1>NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\link.EXE"' : return code '0x460'(43,5): error MSB3073: The command "qmake && nmake debug-clean debug" exited with code 2. ========== Rebuild All: 0 succeeded, 1 failed, 0 skipped ========== @ The code is: @#include <QApplication> #include <QtWidgets> int main (int argc, char *argv[]) { QApplication app(argc, argv); // QPushButton bouton("Coucou!"); // bouton.show(); return app.exec(); }@ Would anyone have a solution about this problem? Hi, if you only want to try such a simple program as your main.cpp, then you shouldn't have to change anything in your .pro file. Maybe, test that your QtCreator etc. works ok, make another simple program, for example do New Project, select a Qt Widgets application. Don't edit it, just try to build it and see if it runs ok. I tried to compile a new Qt widget application in QtCreator, but it didn't work because of the include path: @D:\Qt\5.2.1\msvc2012\include\QtCore\qglobal.h:46: error: C1083: Cannot open include file: 'stddef.h': No such file or directory@ I tried to add this line in the .pro file, but still no changes: @INCLUDEPATH += "C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\include"@ The stddef.h is in this include folder I assume that you use QtCreator. Have you checked that compiler, qt versions, and qt kits are correctly recognized by QtCreator. Take a look at Options/Build & Run and see if there are any errors, warnings in the "Compilers", "Qt Versions", "Kits" tabs. Thank you for your reply! There is only a warning for kits. It says @Desktop Qt 5.2.1 MSVC2012 32bit (default)@ For the compiler there is no error, there are four lines: @Microsoft Visual C++ Compiler 11.0 (x86) MSVC Microsoft Visual C++ Compiler 11.0 (amd64) MSVC Microsoft Visual C++ Compiler 11.0 (arm) MSVC MinGW (x86 32bit in C:\Programs\MinGW32-xy\bin) MinGW@ Maybe Qt's initialization for MSVC2012 is broken: check in the Options, Build & Run, Compilers tab, in the list of your 4 compilers, click in the first (1. in your list above). Check "Initialization:" below, it should say "C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\vcvarsall.bat x86" This is the .bat file that QtCreator runs to set the include path to stddef.h (among other things). If you have that line ok, start a cmd window, and see what happens if you start it by hand. It should call the vcvars32.bat file in the bin directory below. The initialization is ok for the first line. I'm sorry, I'm a beginner: how could I start a command prompt window and start it by hand? I don't know if it's right, but I've read that I should compile my program with MinGW instead of MSVC2012.. But I've just one choice for "Debug", MinGW doesn't appear If you click on "Desktop Qt 5.2.1 MSVC2012 32bit (default)" then what "Compiler" and "Qt version" are selected in the fields below ? To start a cmd shell press "Windows Logo+R", type cmd.exe in the dialog that will appear and press Enter. Hmm, I think getting MinGW to work could be easier than Visual Studio. Perhaps you try uninstalling Qt, and installing it again choosing only the MingGW 4.8 compiler. I agree with hskoglund that it is easier to deal with the only one Qt kit. I tried to uninstall and install again QtCreator, but I had no choice for the compiler type. The problem is that I have to program with Visual Studio 2012, then I must install the version qt-opensource-windows-x86-msvc2012-5.2.1.exe. But with this version, both msvc and MinGW compiler are installed, and I still can't choose MinGW.. Can I install the version Qt 5.2.1 for Windows 32-bit (MinGW 4.8, OpenGL, 634 MB) and use Qt in my Visual Basic 2012 program? Sorry I didn't see before you want to use Qt in Visual Basic, that's a bit complicated! You mean you want to call Qt functions from a Visual Basic program or the other way around, i.e. using Visual Basic functions from a Qt program? I would like to call qt functions from Visual Basic 2012. But I need Qt Creator to create my graphic interface with qt designer too I see! Ok back to your problem: not being able to select MinGW, I don't really see why? Anyway, I think perhaps to simplify, try uninstalling, but this time uninstall both Qt and Visual Studio 2012. Now when you have your Windows PC without any development tools installed, then install first Qt with MinGW. After you've verified that you can compile and run a Qt program (with MinGW selected) you then proceed to install Visual Studio 2012 to get Visual Basic 2012 up and running again :-) I've just installed the Qt version with MinGW and QtCreator works! Thank you for the idea Now I have two version, one to use QtCreator, and one to be used in Visual Basic. I tried to launch a small qt program in Visual Basic 2012: there are no erros, but the application can't be opened after the compilation.. It says: @Unable to start program 'h:\Visual Studio 2012\Projects\Project1\Project1.exe'. The system cannot find the file specified@
https://forum.qt.io/topic/39001/solved-problem-with-dll-files-during-compiling
CC-MAIN-2020-10
refinedweb
1,343
59.5
A domain-specific language for modeling convex optimization problems in Python. Project description CVXPY The CVXPY documentation is at cvxpy.org. We are building a CVXPY community on Discord. Join the conversation! For issues and long-form discussions, use Github Issues and Github Discussions. Contents where the variable is constrained by lower and upper bounds: import cvxpy as cp import numpy # Problem data. m = 30 n = 20 numpy.random.seed(1) A = numpy.random.randn(m, n) b = numpy.random.randn(m) # Construct the problem. x = cp.Variable(n) objective = cp.Minimize(cp.sum_squares(A @ x - b)) constraints = [0 <= x, x <= 1] prob = cp.Problem(objective, constraints) # The optimal objective is returned by prob.solve(). result = prob.solve() # The optimal value for x is stored in x.value. print(x.value) # The optimal Lagrange multiplier for a constraint # is stored in constraint.dual_value. print(constraints[0].dual_value) With CVXPY, you can model - convex optimization problems, - mixed-integer convex optimization problems, - geometric programs, and - quasiconvex programs. CVXPY is not a solver. It relies upon the open source solvers ECOS, SCS, and OSQP. Additional solvers are available, but must be installed separately. CVXPY began as a Stanford University research project. It is now developed by many people, across many institutions and countries. Installation CVXPY is available on PyPI, and can be installed with pip install cvxpy CVXPY can also be installed with conda, using conda install -c conda-forge cvxpy CVXPY has the following dependencies: - Python >= 3.6 - OSQP >= 0.4.1 - ECOS >= 2 - SCS >= 1.1.6 - NumPy >= 1.15 - SciPy >= 1.1.0 For detailed instructions, see the installation guide. Getting started To get started with CVXPY, check out the following: Issues We encourage you to report issues using the Github tracker. We welcome all kinds of issues, especially those related to correctness, documentation, performance, and feature requests. For basic usage questions (e.g., "Why isn't my problem DCP?"), please use StackOverflow instead. Community The CVXPY community consists of researchers, data scientists, software engineers, and students from all over the world. We welcome you to join us! - To chat with the CVXPY community in real-time, join us on Discord. - To have longer, in-depth discussions with the CVXPY community, use Github Discussions. - To share feature requests and bug reports, use Github Issues. Please be respectful in your communications with the CVXPY community, and make sure to abide by our code of conduct. Contributing We appreciate all contributions. You don't need to be an expert in convex optimization to help out. You should first install CVXPY from source. Here are some simple ways to start contributing immediately: - Read the CVXPY source code and improve the documentation, or address TODOs - Enhance the website documentation - Browse the issue tracker, and look for issues tagged as "help wanted" - Polish the example library - Add a benchmark If you'd like to add a new example to our library, or implement a new feature, please get in touch with us first to make sure that your priorities align with ours. Contributions should be submitted as pull requests. A member of the CVXPY development team will review the pull request and guide you through the contributing process. Before starting work on your contribution, please read the contributing guide. Team CVXPY is a community project, built from the contributions of many researchers and engineers. CVXPY is developed and maintained by Steven Diamond, Akshay Agrawal, Riley Murray, and Bartolomeo Stellato, with many others contributing significantly. A non-exhaustive list of people who have shaped CVXPY over the years includes Stephen Boyd, Eric Chu, Robin Verschueren, Michael Sommerauer, Jaehyun Park, Enzo Busseti, AJ Friend, Judson Wilson, Chris Dembia, and Philipp Schiele. For more information about the team and our processes, see our governance document. Citing If you use CVXPY for academic work, we encourage you to cite our papers. If you use CVXPY in industry, we'd love to hear from you as well, on Discord or over email. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/cvxpy/
CC-MAIN-2022-21
refinedweb
691
50.73