To better understand Observables, we’re going to write our own! But first, let’s take a look at an example with a subscription to grasp the bigger picture:

const node = document.querySelector('input[type=text]');
const input$ = Rx.Observable.fromEvent(node, 'input');

input$.subscribe({
  next: event => console.log(`You just typed ${event.target.value}!`),
  error: err => console.log(`Oops... ${err}`),
  complete: () => console.log(`Complete!`),
});

This example takes an <input type="text"> element and passes it into Rx.Observable.fromEvent(), which returns an Observable that emits our input’s Event object whenever the event we specified fires (which is why we’re using ${event.target.value} in the console). When the input’s event listener fires, the Observable passes the value to the observer.

An observer is quite simple: in the above example, the observer is the object literal we pass into our .subscribe() (subscribe will invoke our Observable). .subscribe(next, error, complete) is also valid syntax, but we’ll be exploring the object literal form in this post.

When an Observable produces values, it informs the observer, calling .next() when a new value is successfully captured and .error() when an error occurs. When we subscribe to an Observable, it keeps passing values to the observer until one of two things happens: either the producer says there are no more values to be sent, in which case it calls .complete() on our observer, or we (as the “consumers”) decide we are no longer interested in the values and we unsubscribe.

When we want to compose the values returned from an Observable before they reach our final .subscribe() block, the value is passed (or can be passed) through a chain of Observables, which is typically done via “operators”. This chain is what we call an Observable sequence. Each operator returns a new Observable to continue our sequence - also known as a “stream”.
As we’ve mentioned, Observables can be chained, which means we can do something like this:

const input$ = Rx.Observable.fromEvent(node, 'input')
  .map(event => event.target.value)
  .filter(value => value.length >= 2)
  .subscribe(value => {
    // use the `value`
  });

Here are the steps of this sequence:
- .map() subscribes to our initial Observable
- .map() returns a new Observable of event.target.value and calls .next() on its observer
- The .next() call invokes .filter(), which is subscribing to .map(), with the resulting value of the .map() call
- .filter() then returns another Observable with the filtered results, calling .next() with the value if the .length is 2 or above
- The final .subscribe() block receives the value through its .next() callback

So, let’s get started and write our own Observable implementation. It won’t be as advanced as Rx’s implementation, but we’ll hopefully build the picture enough. First, we’ll create an Observable constructor function that takes a subscribe function as its only argument. We’ll store the subscribe property on the instance of Observable, so that we can call it later with an observer:

function Observable(subscribe) {
  this.subscribe = subscribe;
}

Each subscribe callback that we assign to this.subscribe will be invoked either by us or another Observable. This will make more sense as we continue. Before we dive into our real world example, let’s give a basic one. As we’ve set up our Observable function, we can now invoke our observer, passing in 1 as a value, and subscribe to it:

const one$ = new Observable(observer => {
  observer.next(1);
  observer.complete();
});

one$.subscribe({
  next: value => console.log(value), // 1
});

We subscribe to the Observable instance, and pass our observer (object literal) into the constructor (which is then assigned to this.subscribe).
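The constructor and the one$ example above are runnable as-is outside the browser. Here is a self-contained sketch; the `received` array is our own addition, used only to make the emissions visible:

```javascript
// Minimal Observable: the constructor just stores the subscribe callback.
function Observable(subscribe) {
  this.subscribe = subscribe;
}

// An Observable that synchronously emits a single value, then completes.
const one$ = new Observable(observer => {
  observer.next(1);
  observer.complete();
});

// Collect what the observer sees so we can inspect it afterwards.
const received = [];
one$.subscribe({
  next: value => received.push(value),
  complete: () => received.push('complete'),
});

console.log(received); // [ 1, 'complete' ]
```

Note that everything here runs synchronously: calling .subscribe() immediately invokes the stored callback with our observer literal, which is why `received` is already populated on the next line.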
That’s all we actually needed to create the basis of our Observable. The next piece we need is a static method on the Observable:

Observable.fromEvent = (element, name) => {};

We’re going to use our Observable just like in RxJS:

const node = document.querySelector('input');
const input$ = Observable.fromEvent(node, 'input');

Which means we need to return a new Observable and pass a function in as the argument:

Observable.fromEvent = (element, name) => {
  return new Observable(observer => {});
};

This then passes our function to our this.subscribe in the constructor. Next up, we need to hook our event in:

Observable.fromEvent = (element, name) => {
  return new Observable(observer => {
    element.addEventListener(name, event => {}, false);
  });
};

So, what’s this observer argument, and where does it come from? The observer is actually your object literal with next, error and complete on it. Here is the interesting piece: the observer is never passed through until .subscribe() is invoked. This means the addEventListener is never “set up” by our Observable until it’s subscribed to. Once subscribe is invoked, inside the Observable’s constructor the this.subscribe is called, which invokes the callback we passed to new Observable(callback) and also passes through our observer literal. This then allows the Observable to do its thing, and once it’s done, it’ll call .next() on our observer with the updated value. Okay, so what now?
We’ve got an event listener set up, but nothing is calling .next(). Let’s fix that:

Observable.fromEvent = (element, name) => {
  return new Observable(observer => {
    element.addEventListener(
      name,
      event => {
        observer.next(event);
      },
      false
    );
  });
};

As we know, Observables need a “tear down” function which is called when the Observable is destroyed. In our case, we’ll remove the event listener:

Observable.fromEvent = (element, name) => {
  return new Observable(observer => {
    const callback = event => observer.next(event);
    element.addEventListener(name, callback, false);
    return () => element.removeEventListener(name, callback, false);
  });
};

We’ve not called .complete() because this Observable is dealing with DOM APIs and events, so technically they’re infinitely available. Let’s try it out! Here’s the full code of what we’ve done:

const node = document.querySelector('input');
const p = document.querySelector('p');

function Observable(subscribe) {
  this.subscribe = subscribe;
}

Observable.fromEvent = (element, name) => {
  return new Observable(observer => {
    const callback = event => observer.next(event);
    element.addEventListener(name, callback, false);
    return () => element.removeEventListener(name, callback, false);
  });
};

const input$ = Observable.fromEvent(node, 'input');

const unsubscribe = input$.subscribe({
  next: event => {
    p.innerHTML = event.target.value;
  },
});

// automatically unsub after 5s
setTimeout(unsubscribe, 5000);

Building our own operator should be a little easier now we understand the concepts behind an Observable and observer. On our Observable object, we’ll add a new prototype method:

Observable.prototype.map = function(mapFn) {};

This method will be used much like Array.prototype.map in JavaScript, but for any value:

const input$ = Observable.fromEvent(node, 'input').map(
  event => event.target.value
);

Now, let’s implement the .map() operator.
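The fromEvent and teardown behaviour above can also be exercised without a DOM. A minimal sketch, assuming Node v15+ where EventTarget and Event are available globally; the `fakeInput` and `seen` names are our own, standing in for the <input> node and the page output:

```javascript
function Observable(subscribe) {
  this.subscribe = subscribe;
}

Observable.fromEvent = (element, name) => {
  return new Observable(observer => {
    const callback = event => observer.next(event);
    element.addEventListener(name, callback);
    // Teardown: unsubscribing removes the listener.
    return () => element.removeEventListener(name, callback);
  });
};

// Stand-in for a DOM node: Node's built-in EventTarget (v15+).
const fakeInput = new EventTarget();

const seen = [];
const input$ = Observable.fromEvent(fakeInput, 'input');
const unsubscribe = input$.subscribe({
  next: event => seen.push(event.type),
});

fakeInput.dispatchEvent(new Event('input'));
fakeInput.dispatchEvent(new Event('input'));
unsubscribe(); // tear down: the listener is removed
fakeInput.dispatchEvent(new Event('input')); // no longer observed

console.log(seen); // [ 'input', 'input' ]
```

The returned teardown function is exactly what .subscribe() hands back, which is why calling unsubscribe() stops any further events from reaching the observer.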
Because .map() lives on the prototype, we can grab a reference to the Observable it was called on:

Observable.prototype.map = function(mapFn) {
  const input = this;
};

Ready for more funk? Now we subscribe inside a returned Observable:

Observable.prototype.map = function(mapFn) {
  const input = this;
  return new Observable(observer => {
    return input.subscribe();
  });
};

We are returning the input.subscribe() because when we unsubscribe, the unsubscriptions (is that a word?) will flow up the chain, unsubscribing from each Observable. This subscription allows us to be passed the previous value from our Observable.fromEvent; because it returns a new Observable with a subscribe property in the constructor, we can simply subscribe to any updates it makes! Let’s finish this off by invoking our mapFn() passed through map:

Observable.prototype.map = function(mapFn) {
  const input = this;
  return new Observable(observer => {
    return input.subscribe({
      next: value => observer.next(mapFn(value)),
      error: err => observer.error(err),
      complete: () => observer.complete(),
    });
  });
};

Now we can chain it!

const input$ = Observable.fromEvent(node, 'input').map(
  event => event.target.value
);

input$.subscribe({
  next: value => {
    p.innerHTML = value;
  },
});

Notice how the final .subscribe() block is passed only the value and not the Event object like before? You’ve successfully created an Observable stream. Hopefully this post was good fun for you :) come learn more RxJS with us!
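As a final check, here is the finished map operator passing values down a chain end to end. The `source$` Observable is our own assumption, a synthetic producer standing in for fromEvent so the example runs without a DOM:

```javascript
function Observable(subscribe) {
  this.subscribe = subscribe;
}

// The map operator exactly as built in the article.
Observable.prototype.map = function (mapFn) {
  const input = this;
  return new Observable(observer => {
    return input.subscribe({
      next: value => observer.next(mapFn(value)),
      error: err => observer.error(err),
      complete: () => observer.complete(),
    });
  });
};

// A synthetic source emitting three values, standing in for fromEvent.
const source$ = new Observable(observer => {
  [1, 2, 3].forEach(v => observer.next(v));
  observer.complete();
});

const results = [];
source$
  .map(v => v * 10)        // each value passes through the first map...
  .map(v => `#${v}`)       // ...then the second, before reaching subscribe
  .subscribe({
    next: value => results.push(value),
    complete: () => results.push('done'),
  });

console.log(results); // [ '#10', '#20', '#30', 'done' ]
```

Each .map() call returns a fresh Observable whose subscribe callback subscribes to the previous one, which is the whole trick behind the chain.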
https://ultimatecourses.com/blog/rxjs-observables-observers-operators
QextSerialPort missing data [solved]

Hi all, I’m using QextSerialPort to read data from RS-232 at a 400Hz rate. Most of the time I get correct data, but sometimes I get invalid data (missing lines from the packet). I checked the device with RealTerm and the data is correct, so it’s not a hardware bug. What can cause this missing-line error? Any suggestions? Thanks.

More info: The algo:
If (start recording)
1. clear buffer
2. read data and sync on the packet header
3. check sum
4. save data to files
if (stop recording)
1. read data (clear buffer)

My setup:
QT 4.7
QextSerialPort 1.2.0
Win xp sp3
using sea-level (rs-232 to usb)

Open port function:
Source code
- this->setState(STOP_RECORDING);
- port = new QextSerialPort(portName,QextSerialPort::EventDriven);
-
- connect( port, SIGNAL(readyRead()), this, SLOT(onReadyRead()));
- }
- else {
- return -1;
- }
- port->setBaudRate(BAUD691200);
- port->setDataBits(DATA_8);
- port->setParity(PAR_NONE);
- port->setStopBits(STOP_2);
- port->setFlowControl(FLOW_OFF);
- port->setRts( true );
- port->setDtr( true );
- connect(&timer, SIGNAL(timeout()), this, SLOT(procTimerOut()));
- this->timer.setInterval(50);
- return 0;

Reading from port:
- #define PACKET_SIZE 40
- void Driver::onReadyRead()
- {/* Read data from port */
- if (this->state() == STOP_RECORDING) {
- this->port->read(port->bytesAvailable());
- return;
- }
- if (this->rxBuffer.isEmpty()) {
- this->timer.start();
- }
- QByteArray data = this->port->read(port->bytesAvailable());
- this->rxBuffer.append(data);
- }
- void Driver::procTimerOut()
- {
- this->timer.stop();
- /* Print raw data to file */
- files_data.rawData.write(rxBuffer.toHex());
- if (!this->last_line.isEmpty()){
- rxBuffer.prepend(this->last_line);
- }
- if (rxBuffer.size() < PACKET_SIZE) {
- qDebug("buffer is small [%d]",rxBuffer.size());
- return;
- }
- if (this->sync(rxBuffer) != SUCCESS) {
- emit actionDone(EINVAL,"Cannot sync on packet");
- return;
- }
- while (rxBuffer.size() > PACKET_SIZE) {
- const QByteArray sample = rxBuffer.mid(0, PACKET_SIZE);
- rxBuffer.remove(0, PACKET_SIZE);
- if (this->addSample(sample) == INVALID_CHECKSUM) {
-
- emit actionDone(INVALID_LINE,errMsg);
- rxBuffer.clear();
- return;
- }
- }
- }
- if (!rxBuffer.isEmpty()) {
- this->last_line = rxBuffer; //read all data that left
- rxBuffer.clear(); //remove all data that left
- }
- else {
- this->last_line.clear();
- }
- }
- int Driver::sync(QByteArray &rxBuffer)
- {
- if (!rxBuffer.startsWith(this->key)){//Note: Sync on packet
- int prefix_offset = rxBuffer.indexOf(this->key);
- if (prefix_offset == -1){
- return EINVAL;
- }
- rxBuffer.remove(0,prefix_offset);
- if (!rxBuffer.startsWith(this->key)) {
- return EINVAL;
- }
- }
- return SUCCESS;
- }
- inline int Driver::addSample(const QByteArray & sample)
- {
- const struct packet *pkt;
- pkt = (const struct packet *)sample.constData();
- int checksum_bytes = sizeof(struct packet) - sizeof(pkt->checksum);
- u16 crc = checksum((const unsigned char *)pkt, checksum_bytes);
- if ( crc != pkt->checksum ) {
- return INVALID_CHECKSUM;
- }
- this->print_analyzed_sample(pkt);
- this->sampleCounter++;
- return SUCCESS;
- }
- }
- void Driver::startRecording()
- {
- this->prepareForRecord();
- this->setState(START_RECORDING);
- }
- void Driver::stopRecording()
- {
- this->setState(STOP_RECORDING);
- this->closeFiles();
- }
- void Driver::prepareForRecord()
- {
- this->genetate_output_Files(); //creat output files, and open to write data
- //reset data
- QByteArray buf;
- do {
- buf = port->readAll();
- } while (buf.size() > 0);
- this->rxBuffer.clear();
- }

15 replies

Hello,
- Maybe you can give QextSerialPort::Polling mode a try, with a QTimer, in order to see whether this issue still exists.
- If you are using a Qt SDK which does not contain a qwineventnotifier_p.h under the “include/QtCore/private” directory, QextWinEventNotifier will be used. If so, you can disable this by copying qwineventnotifier_p.h from src\corelib\kernel to that directory.
Debao

Hi, I submitted a bug several days ago, but it seems that others have not come across a similar problem, so I do not know what to do about it. However, I found a similar issue at Stack Overflow which has been solved. You can give it a try.

Debao

Welcome to devnet. There are at least two different Qt-based implementations available, and there have already been a number of threads concerning the topic. One of the implementations, or both, are mentioned in the following threads with the following tags:
QExtSerialPort [qt-project.org]
QSerialDevice [qt-project.org]
[edit] A good start might be this overview [qt-project.org]

Dantcho, but QSerialDevice can be the right one for my project

Instead of QSerialDevice, use QtSerialPort [qt-project.org]. QSerialDevice is no longer growing (development is frozen). Its code was used as the basis of QtSerialPort; QtSerialPort is now its child.

Hi all, I have to work with the serial port and need to install it on my computer. What do I have to do to install it? I downloaded “qtplayground-qtserialport” from here: but I am a little confused because the installation guide is for Linux, not for Windows. How do I install Qt serial port on Windows? Thanks!

Hi everybody, I created three windows with Qt Designer. Window one starts the other windows, and from window one I have a connection to the other windows (I include the header files). So I include the header file from window one in the other windows to have a connection from every window back to window one, but when I declare a type from window one I get the following error: “Fehler: ISO C++ forbids declaration of ‘Com_Net’ with no type”. This is a C++ error, but I do include the header files :-(((. What can I do to solve this problem? Here is the header file from one of the other windows:

- #ifndef DICS_H
- #define DICS_H
- #include <QWidget>
- #include <QSizePolicy>
- #include <QMoveEvent>
- #include "com_net.h"

(here we include the header file from window one)

- #include "ui_com_net.h"
- namespace Ui {
- class DICS;
- }
-
- {
- Q_OBJECT
- public:
-
- ~DICS();
- private slots:
- void on_External_Button_clicked();
- void on_Internal_Button_clicked();
- void on_PABX_Button_clicked();
- void on_PAS_Button_clicked();
- private:
- Ui::DICS *ui;
- Com_Net *active_com_net;

(and here is the pointer declaration from window one)

- virtual void moveEvent(QMoveEvent *event);
- };
- #endif // DICS_H

Thanks very much for the help.

Dear community, I work with “qextserialport-1.2beta2” and all works well, but I have one problem: I want to send in binary mode. The one thing I can set is “port->setTextModeEnabled(false);”, but the serial port still interprets the byte “0” like a null terminator and so cuts the string after every “0”. My question: is it possible to send in binary mode with “qextserialport”, or do I have to look for another way? Thanks for the answer. :-)))
http://qt-project.org/forums/viewthread/15580
public class HitCounter {
    class Tuple {
        int time;
        int count;
        public Tuple(int time, int count) {
            this.time = time;
            this.count = count;
        }
    }

    Queue<Tuple> q;
    int currCount;

    public HitCounter() {
        q = new LinkedList<>();
        currCount = 0;
    }

    public void hit(int timestamp) {
        advance(timestamp);
        if (!q.isEmpty() && q.peek().time == timestamp) {
            q.peek().count += 1;
        } else {
            q.offer(new Tuple(timestamp, 1));
        }
        currCount += 1;
    }

    private void advance(int timestamp) {
        while (!q.isEmpty() && q.peek().time <= timestamp - 300) {
            currCount -= q.poll().count;
        }
    }

    public int getHits(int timestamp) {
        advance(timestamp);
        return currCount;
    }
}

What does this statement mean? What will happen if it were removed?

if (!q.isEmpty() && q.peek().time == timestamp) {
    q.peek().count += 1;
}

I don't think you should peek the first element here. Instead, you should peek the last element, like in a Deque. Otherwise, assume you get one hit at time 5, and then you get 1 million hits at time 6. Time 5 will remain in the queue because it is only 1 second away. You only peek the first element (time 5) and see the timestamp is not 6; as a result, for the 1 million hits you will add 1 million Tuples to the queue with timestamp 6. If you peek the last element, you can keep modifying the last element for the 1 million hits.

@pinkfloyda Is this solution scalable? Also, what are the running times for the functions hit() and getHits()?

@pinkfloyda If hit() is anyway calling advance() to poll the timestamps older than 300 seconds, then why do we need to call advance() again in the getHits() function? Could you please present an example where such a need arises?

@acheiver advance() is necessary in getHits(). An example is [hit(1), getHits(301)]. It is optional in hit(), but it is better to have it, to keep the queue size less than 300. wei88 is right.
It should be:

if (!q.isEmpty() && q.peekLast().time == timestamp) {
    q.peekLast().count += 1;
}

(note that peekLast() requires q to be declared as a Deque<Tuple> rather than a Queue<Tuple>, since the Queue interface has no peekLast()). I prefer to add an increase() method to Tuple and change the constructor to Tuple(int time).
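The accepted fix aside, the sliding-window idea in the thread can be sketched in JavaScript as well. This is an illustrative translation of the approach, not the poster's code; the peek-the-last-entry behaviour suggested above is built in:

```javascript
// Sliding-window hit counter: hits expire 300 seconds after they occur.
// Each queue entry groups hits that share one timestamp, as in the Java version.
class HitCounter {
  constructor() {
    this.queue = []; // entries: { time, count }, oldest first
    this.total = 0;
  }

  // Drop entries that fell out of the 300-second window ending at `timestamp`.
  advance(timestamp) {
    while (this.queue.length > 0 && this.queue[0].time <= timestamp - 300) {
      this.total -= this.queue.shift().count;
    }
  }

  hit(timestamp) {
    this.advance(timestamp);
    const last = this.queue[this.queue.length - 1];
    // Peek the *last* entry, as the thread suggests, so repeated hits at the
    // same timestamp update one entry instead of growing the queue.
    if (last && last.time === timestamp) {
      last.count += 1;
    } else {
      this.queue.push({ time: timestamp, count: 1 });
    }
    this.total += 1;
  }

  getHits(timestamp) {
    this.advance(timestamp);
    return this.total;
  }
}

const counter = new HitCounter();
counter.hit(1);
console.log(counter.getHits(300)); // 1  (window covers times 1..300)
console.log(counter.getHits(301)); // 0  (the hit at time 1 has expired)
```

This also illustrates why advance() is needed in getHits(): the [hit(1), getHits(301)] sequence from the thread only returns 0 because getHits() expires stale entries itself.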
https://discuss.leetcode.com/topic/48909/straight-forward-java-solution-using-queue
On Mon, Jul 02, 2001 at 10:37:26PM -0600, Jason Gunthorpe wrote:
> > They don't appear to be mentioned in Maximum RPM per say, however..
>
> You just said RPMv2, which is what Maximum RPM describes. But the appendix
> linked from the LSB actually describes RPMv3.

The confusion here seems to be caused by an ambiguity between the version of the RPM *format*, which is v3, and the version of the RPM *implementation*, which as of the writing of Maximum RPM was RPM v2.3. Part of the problem here is that the RPM maintainers weren't very careful, especially in the early days of RPM development, to appropriately flag format changes as they were made. (And most of those format changes weren't documented except in the source code.)

> I reviewed it, I knew, I brought it and lots of other issues up. That
> resulted in lsb-taskforces 1 through 3 being formed, and then nothing was
> accomplished. That was in December. 6 months later the spec was published
> unchanged. What went wrong?

I'm not sure what happened with the LSB taskforces. There were apparently some problems with the task forces reaching consensus, and I have some private e-mail blaming some specific individuals, but I haven't looked at the mail archives so I don't have enough data to form a fair opinion. (Nor is it clear that this would be especially fruitful.)

What happened on my end was that just before LSB 1.0 was released, I noticed some specific problems with chapter 13 and did some last-minute fixups that fixed the namespace problems it had. I also eliminated references to "RPM v3 format", since that was clearly ill-defined, and replaced them with "what is defined in the appendix to Maximum RPM".

I think the problem was that none of the people who had been participating on the LSB taskforce were involved with the actual LSB standards-writing effort, and so no one summarized the discussions on the taskforce mailing list or provided text to the people who actually had commit privs on the LSB CVS tree.
If someone had simply posted a message to the LSB-spec mailing list that the LSB-taskforce1 efforts weren't converging on a solution, that might have brought this problem to our attention sooner; but as it was, I think it was simply a problem of there not being a good liaison between the packaging group and the people actually writing the standard.

As far as simply deleting chapter 13 from the specification, that possibility did cross my mind, and I did think for a while that it might have been the best choice. But others argued that having something which specified things in an extremely restrictive way was still useful, since that meant that ISVs could package up their software in a way so that users could easily manage it (i.e., delete the package later, upgrade the package), in a way that would be better than just simply telling ISVs that the only way they could package LSB software for Linux was via binary tarballs. And those arguments carried a lot of weight.

- Ted
http://lists.debian.org/debian-devel/2001/07/msg00176.html
The App Engine Datastore may be more appropriate for applications that need to retrieve very large result sets. Note: The Search API is available only to applications using the High Replication Datastore (HRD). If your application uses the now-deprecated Master/Slave Datastore, migrate to HRD.

- Overview
- Documents and fields
- Creating a document
- Working with an index
- Index schemas
- Viewing indexes in the Admin Console
- Search API quotas and pricing

However, the total size of all the documents in a single index cannot be more than 10GB.

For example, this query searches for documents containing the words "rose water":

index.search("rose water");

This one searches for documents with date fields that contain the date July 4, 1776, or text fields that include the string "1776-07-04":

index.search("1776-07-04");

Query options, as the name implies, are not required. They enable a variety of features:

// search for documents with pianos that cost less than $5000
index.search("product = piano AND price < 5000");

- Results class, which contains information about how many documents were found and how many were returned, along with the list of returned documents. You can repeat the same search, using cursors or offsets to retrieve the complete set of matching documents.

Additional training material: In addition to this documentation, you can read the two-part training class on the Search API at the Google Developers Academy. (Although the class uses the Python API, you may find the additional discussion of the Search concepts useful.)

There is a limit of 1000 unique field names over all the documents in an index. Note the limit is imposed on field names, not fields.

Field values can be of several types:
- Text Field: java.lang.String
- Date Field: java.util.Date
- Geopoint Field: a point on earth described by latitude and longitude coordinates

The field types are specified using the Field.FieldType enums TEXT, HTML, ATOM, NUMBER, DATE, and GEO_POINT.
Note that the underscore (_) and ampersand (&) characters do not break words, and non-western languages, like Japanese and Chinese, use other tokenization rules.

Date field accuracy: When you create a date field in a document, you set its value to a java.util.Date.

If you specify sort options, you can use the rank as a sort key. Note that when rank is used in a sort expression or field expression it is referenced as _rank.

import com.google.appengine.api.search.Document;
import com.google.appengine.api.search.Field;
import com.google.appengine.api.users.User;
import com.google.appengine.api.users.UserServiceFactory;
...

User currentUser = UserServiceFactory.getUserService().getCurrentUser();

String myDocId = "PA6-5000";
Document doc = Document.newBuilder()
    // Setting the document identifier is optional.
    // If omitted, the search service will create an identifier.
    .setId(myDocId)
    .addField(Field.newBuilder().setName("content").setText("the rain in spain"))
    .addField(Field.newBuilder().setName("email")
        .setText(currentUser.getEmail()))
    .addField(Field.newBuilder().setName("domain")
        .setAtom(currentUser.getAuthDomain()))
    .addField(Field.newBuilder().setName("published").setDate(new Date()))
    .build();

To access fields within the document, use getOnlyField().
import com.google.appengine.api.search.Index;
import com.google.appengine.api.search.IndexSpec;
import com.google.appengine.api.search.SearchServiceFactory;
import com.google.appengine.api.search.PutException;
import com.google.appengine.api.search.StatusCode;

public void IndexADocument(String indexName, Document document) {
    IndexSpec indexSpec = IndexSpec.newBuilder().setName(indexName).build();
    Index index = SearchServiceFactory.getSearchService().getIndex(indexSpec);
    try {
        index.put(document);
    } catch (PutException e) {
        if (StatusCode.TRANSIENT_ERROR.equals(e.getOperationResult().getCode())) {
            // retry putting the document
        }
    }
}

You can pass up to 200 documents at a time to the put() method. You can retrieve the application's indexes with the SearchService.getIndexes() method, and use getRange() to retrieve a group of consecutive documents ordered by doc_id. Each call is demonstrated in the example below.

IndexSpec indexSpec = IndexSpec.newBuilder().setName(indexName).build();

import com.google.appengine.api.search.Results;
import com.google.appengine.api.search.ScoredDocument;
import com.google.appengine.api.search.SearchException;
...

try {
    String queryString = "product: piano AND price < 5000";
    Results<ScoredDocument> results = getIndex().search(queryString);
    // Iterate over the documents in the results
    for (ScoredDocument document : results) {
        // handle results
    }
} catch (SearchException e) {
    if (StatusCode.TRANSIENT_ERROR.equals(e.getOperationResult().getCode())) {
        // retry
    }
}

Deleting documents from an index: You can delete documents in an index by specifying the doc_id of one or more documents you wish to delete to the delete() method. To get a range of document ids in an index, invoke the setReturningIdsOnly() method of the GetRequest.Builder, which is then given to the getRange() method. When you invoke this method, the API returns document objects populated only with the doc_id.
You can then delete the documents by passing those document identifiers to the delete() method. You can pass up to 200 documents at a time to the delete() method:

import com.google.appengine.api.search.Document;
import com.google.appengine.api.search.GetRequest;
import com.google.appengine.api.search.GetResponse;
...

try {
    // looping because getRange by default returns up to 100 documents at a time
    while (true) {
        List<String> docIds = new ArrayList<String>();
        // ...
    }
} catch (Exception e) {
    // ...
}

Batching deletes is more efficient than handling them one at a time.

Determining the size of an index: The total size of all documents in an index cannot be more than 10GB. (The method Index.getStorageLimit() returns the maximum allowable size of an index.)

import com.google.appengine.api.search.Field.FieldType;
import com.google.appengine.api.search.Index;
import com.google.appengine.api.search.GetIndexesRequest;
import com.google.appengine.api.search.GetIndexesResponse;
import com.google.appengine.api.search.Schema;

When you use the GetRequest.Builder, the argument for getLimit and getOffset cannot be larger than 1000. This means that even if you have more than 1000 indexes, you can never retrieve more than 1000 indexes at a time, and you'll never be able to retrieve more than the first 2000 indexes (with offset=1000 and limit=1000).

Admin Console: You can view information about your application's indexes, and the documents they contain, by clicking the application's name in the App Engine Administration Console. In the sidebar section labeled Data, click the Text Search link to see a list of the application's indexes. Clicking an index name displays the documents that index contains. You'll see all the defined schema fields for the index; for each document with a field of that name, you'll see the field's value. You can also issue queries on the index data directly from the Administration Console.
Search API quotas and pricing

The Search API has a free quota of 20,000 API calls per day. The quota only applies to these calls:
- Index.put() (and variants)
- Index.delete() (and variants)
- Index.search() (and variants)
- Index.getRange() (and variants)
- SearchService.getIndexes() (and variants)

When indexing multiple documents in a single call, the call count is increased by the number of documents indexed. If you enable billing for your app you will be charged for additional usage. API usage is counted and billed in different ways depending on the type of call:

- Index.search(): There are separate quotas for simple and complex queries. A query is complex if its query string includes the name of a geopoint field or at least one OR or NOT boolean operator. A query is also complex if it uses query options to specify non-default sorting or scoring, field expressions, or snippets. Otherwise the query is simple.
- Index.put(): When you add documents to indexes, the size of each document counts towards the indexing quota.
- All other Search API calls are counted by the number of operations they involve. These calls are subject to a daily limit of 1,000 operations per day. The number of operations charged depends on the call:
  - SearchService.getIndexes(): 1 op billed for each index actually returned, or 1 op if nothing is returned.
  - Index.get() and Index.getRange(): 1 op billed for each document actually returned, or 1 op if nothing is returned.
  - Index.delete(): 1 op billed for each document in the request, or 1 op if the request is empty.
  - Index.deleteSchema(): 1 op billed for each call.

Free quotas and pricing are detailed in the table below. For more information on quotas, see Quotas, and the Quota Details section of the Admin Console.
The Search API imposes these throughput limits to ensure the reliability of the service: - 100 minutes of search execution time per minute - 15K Documents added/deleted per minute Note that although these limits are enforced by the minute, the Admin Console displays the daily totals for each. Customers with Silver, Gold, or Platinum support can request higher throughput limits by contacting their support representative.
https://developers.google.com/appengine/docs/java/search/
16 February 2011 09:44 [Source: ICIS news]

SINGAPORE (ICIS)--China’s linear low density polyethylene (LLDPE) futures fell by 0.7% on the Dalian Commodity Exchange (DCE) on Wednesday, as the arbitrage opportunity for physical cargoes led investors to take “sell” positions in the futures trade, local brokers said.

With the most actively traded May LLDPE futures contracts priced at yuan (CNY) 12,350/tonne ($1,874/tonne), importers could book a physical cargo in the local retail market for prompt delivery and lock in their profits by taking a “sell” position on the futures market, the brokers said. The cost of holding the physical cargo was around CNY500/tonne, while the price gap between the futures and physical trades was around CNY1,000/tonne, based on current discussions in the physical market, they said. That means investors could lock in a profit of CNY500/tonne by taking a “sell” position on the futures market. But the build-up of “sell” positions on the futures market weighed down on the futures price, they said.

Liquidity focused on the May contract, which closed at CNY12,350/tonne, down 0.7% or CNY85/tonne from Tuesday’s settlement price of CNY12,435/tonne, according to DCE data. LLDPE was selling at around CNY11,300/tonne.

The LLDPE futures price is likely to fall further if its premium over the physical market widens, said Li Chen Ming, a petrochemical analyst at Guang Fu Futures.

($1 = CNY6.59)
http://www.icis.com/Articles/2011/02/16/9435720/china-lldpe-futures-down-0.7-on-arbitrage-opportunity.html
The C API for Collator performs locale-sensitive string comparison. You use this service to build searching and sorting routines for natural language text.

Important: The ICU collation service has been reimplemented in order to achieve better performance and UCA compliance. For details, see the collation design document. For more information about the collation service, see the Users Guide.

The collation service provides correct sorting orders for most locales supported in ICU. If specific data for a locale is not available, the order eventually falls back to the UCA sort order. Sort ordering may be customized by providing your own set of rules. For more on this subject, see the Collation Customization section of the Users Guide.

Definition in file ucol.h.

#include "unicode/utypes.h"
#include "unicode/unorm.h"
#include "unicode/localpointer.h"
#include "unicode/parseerr.h"
#include "unicode/uloc.h"
#include "unicode/uset.h"
http://icu.sourcearchive.com/documentation/4.4~rc1-1/ucol_8h.html
Issues

ZF-2103: Zend_Session_Namespace - Optional argument to create namespace in deeper nodes

Description

Currently, the namespace component creates namespaces only at the root of the session:

$user = new Zend_Session_Namespace( 'user' ); => $_SESSION['user'] = array()

What if I want to store data deeper in the auth node, to group it and reduce clutter (aka modules), and to be able to count the number of items/namespaces in a node:

$_SESSION['auth']['user']['id'] = 1; ([node][namespace][key] = value)

In the above case, it would be much more powerful to be able to create the namespace under the 'auth' node so we could configure the keys in the 'user' namespace. We could introduce an optional 'node' argument to Zend_Session_Namespace::__construct(), like so:

$user = new Zend_Session_Namespace( 'user', 'auth' );

(create the 'user' namespace under the 'auth' node; the 'auth' node is created automatically if it does not exist in the session)

We can even go deeper by specifying an intelligent node-path:

$user = new Zend_Session_Namespace( 'user', 'auth:node2:node3' );

(create the 'user' namespace under $_SESSION[auth][node2][node3]; all nodes are created automatically if they do not exist in the session)

I guess the above feature adds more muscle to the namespace component and enables us to use it more powerfully. Thoughts...?

Posted by Wil Sinclair (wil) on 2008-04-18T13:11:51.000+0000

This doesn't appear to have been fixed in 1.5.0. Please update if this is not correct.

Posted by Ralph Schindler (ralph) on 2008-07-21T14:08:35.000+0000

How does this benefit over using a smarter naming convention for the namespaces? For example like OR over ? The bigger problem with adding more/deeper nodes is that the serialized version will take up more space in the file/database/whatever. I'll categorize as nice-to-have / next minor, but currently I am not completely convinced yet. Perhaps talk it over in #zftalk.dev with some other people.

-ralph

Posted by Shekar Reddy (zendfw) on 2008-08-12T18:29:45.000+0000

That's what I've been doing these days, but it involves a lot of extra code and work to maintain data in the appropriate nodes. If we have the namespace created in a deeper node, keys added to the namespace automatically appear in the child node of the namespaced node, and so it becomes much easier to maintain data and focus on business logic. Furthermore, the items under the namespace node can now be counted, can be prevented from adding more keys than a specified threshold, configured to expire, etc., and so we can create categorized namespaces with typical behaviors for the data inside the namespaces.

Posted by Ralph Schindler (ralph) on 2009-01-10T10:20:39.000+0000

Interesting feature, but I am not sure we'll have time to implement this within the 1.0 branch. We concede that this component needs a rewrite, so ideas are being logged here:…
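The node-path behavior being requested is language-agnostic; here is a rough sketch of the auto-vivifying lookup in Python rather than PHP, purely to illustrate the idea (the function name is mine, not a Zend Framework API):

```python
def namespace_at(session, node_path, name, sep=":"):
    """Return the dict for `name` under `node_path`, creating any missing
    intermediate nodes on the way, e.g. "auth:node2:node3"."""
    node = session
    for part in node_path.split(sep):
        node = node.setdefault(part, {})  # create the node if absent
    return node.setdefault(name, {})

session = {}  # stand-in for $_SESSION
user = namespace_at(session, "auth:node2:node3", "user")
user["id"] = 1
print(session)
```

All intermediate nodes spring into existence on first use, which is the "created automatically if they do not exist" behavior the ticket asks for, and counting the namespaces under a node is then just len(session["auth"]).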
http://framework.zend.com/issues/browse/ZF-2103
Objective

In this article, I will show different ways of parsing a string to create an XML tree using LINQ to XML.

What is parsing of an XML document?

Parsing an XML document means reading the XML document, identifying the function of each part of the document, and then making this information available in memory for the rest of the program.

XElement.Parse() method

- This method is used to parse a string.
- This is an overloaded method. The second overload takes a LoadOptions parameter; this parameter defines whether to preserve whitespace and line information or not.

LoadOptions enum

- This enum is inside the System.Xml.Linq namespace.
- This enum has 4 members: None, PreserveWhitespace, SetBaseUri, and SetLineInfo.

Way #1: Parsing a string to create an XML tree

In this sample, I will create an XML tree from a string.

- Using the first overload to create the XML tree.
- There is only one parameter being passed.

Output

Way #2: Parsing a string to create an XML tree

In this sample, I will create an XML tree from a string.

- Using the second overload to create the XML tree.
- There are two parameters being passed.
- We are passing PreserveWhitespace as the load option.

Output

We can see the difference in the output: the whitespace is preserved.

Conclusion

In this article, I explained how to parse a string to create an XML tree. Thanks for reading.
https://debugmode.net/2010/02/21/linq-to-xml-part-4-different-way-of-parsing-string-to-create-xml-tree/
Currently I have a program taking up too much CPU. How can I limit the amount of CPU it occupies? The OS is Linux (Fedora). I can't modify the source code of that program. What I need is a Bash command.

You can try limiting your program by lowering its priority with nice. No programming involved there. Running at "50%" CPU isn't that meaningful. You want the program to use every resource possible when it's available. If the CPU isn't doing anything else, that program might as well make full use of it. If you wanted the program to really do nothing at all, you'd have to modify the source code and put in pauses/sleeps where possible. What you want is to have everything else have higher priority. See the manpage for the nice command, and run it at nice level 19.

Another and possibly more effective way of limiting resources is to install the schedutils package, and run the program using the SCHED_BATCH process scheduler.

You're looking for something simple and fast? Try the cpulimit program. Just run:

cpulimit name-of-program

and voila, it's limited.

setrlimit and co.:

#include <sys/resource.h>
#include <sys/time.h>
#include <unistd.h>

int main()
{
    struct rlimit rl;

    /* Obtain the current limits. */
    getrlimit( RLIMIT_CPU, &rl );

    /* Set a CPU limit of 1 second. */
    rl.rlim_cur = 1;
    setrlimit( RLIMIT_CPU, &rl );

    /* Do busy work. */
    while (1);

    return 0;
}

From here

If you don't want to modify the program, another option to consider is virtualisation. If you want to limit a process's CPU based on the concept of a percentage, consider cpulimit. You can manually do time-slicing within the application by using a high-performance timer and measuring how much time each iteration of a top-level loop is taking, then put in appropriate sleeps (or nanosleeps) in that loop. This won't correlate directly to a percentage of CPU, especially across machines, but it will limit the CPU resources that the program takes. Take a look at Control Groups. LWN has an article about them.
Red Hat, Fedora, and CentOS have an RPM package named libcgroup that has several handy command-line tools, a system daemon, and some config files to manage control groups. This is based on libcg, hosted on SourceForge.
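The manual time-slicing suggested above can be made concrete. The following is my own rough duty-cycle throttle, not code from any of the answers, and as the answer notes, it caps the work done per unit of wall-clock time rather than enforcing a true CPU percentage:

```python
import time

def throttled_loop(work, duty_cycle=0.5, iterations=20):
    """Run `work()` repeatedly, sleeping after each call so the loop is
    busy for roughly `duty_cycle` of its wall-clock time."""
    for _ in range(iterations):
        start = time.perf_counter()
        work()
        busy = time.perf_counter() - start
        # Sleep long enough that busy / (busy + sleep) ~= duty_cycle
        time.sleep(busy * (1 - duty_cycle) / duty_cycle)

def busy_work():
    # ~1 ms of spinning, standing in for the real per-iteration work
    deadline = time.perf_counter() + 0.001
    while time.perf_counter() < deadline:
        pass

start = time.perf_counter()
throttled_loop(busy_work, duty_cycle=0.5)
print(f"elapsed: {time.perf_counter() - start:.3f}s")
```

With duty_cycle=0.5 the loop sleeps about as long as it works, so it consumes roughly half a core while it runs; sleep granularity on a loaded system makes this approximate, which is exactly the caveat the answer raises.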
http://serverfault.com/questions/25025/is-it-possible-to-make-a-programme-not-occupy-more-than-50-of-cpu/26847
Hi. In Visual Studio 2010, I have a webform with a button that attempts to manipulate a SQL table that holds a lot of data (several years and archives of 12 people's emails: emailID, bodies, to, cc, bcc, date/time). The problem has always been that with so many emails (500,000), I had to break up the manipulation jobs a bit, or the Visual Studio/ASP.NET development server would freeze up running those SQL stored procedures. So I determined, for each (.pst) email store, a unique series that was in each emailID (for instance, person A would have 00000195195xxxxx as their emailIDs, and person B would have 00000123456xxxxxx as their emailIDs) and used those 30 or so identifying flags to recursively call a function that until yesterday was running my SQL stored procedures just fine. Yesterday, while that was running, for giggles, I rebuilt the index on my table and freaked out the development server by accident. I had to roll back the database using SQL Server Management Studio, or the database was "Suspect". Now, my SQL stored procedure in that recursively called function won't run, despite the fact that the original SQL query (that gets the identifying flags into a DataTable and runs the sproc on each row) works just fine. I think it's a setting, to protect me ... but can't?

Hi everyone! Problem conditions: I have a very simple Oracle (11g) stored procedure that is declared like so:

CREATE OR REPLACE PROCEDURE pr_myproc(L_CURSOR out SYS_REFCURSOR)
is
BEGIN
  OPEN L_CURSOR FOR SELECT * FROM MyTable;
END;

This compiles correctly. The cursor contains col1, col2 and col3. In SSRS, I have a Shared Data Source that uses the Oracle OLEDB Provider for Oracle 11g: Provider=OraOLEDB.Oracle

I developed a web app in VS 2010 using master pages. I am converting it to Ajax. I did an extensive amount of ClientScriptManager.RegisterHiddenField calls that I need to convert to ScriptManager.RegisterHiddenField. However, the class ScriptManager does not appear in the Object Browser under System.Web.UI. Am I missing an assembly? When I go into MSDN, the System.Web.UI namespace appears, and ScriptManager is documented under that namespace. Googling for information, I keep running across GAC, something to do with the Ajax extensions. I looked in windows/assemblies and did not see a GAC. What is the GAC? I cannot locate ScriptManager in System.Web.UI on Framework 4.0. My configuration: running in Any CPU mode on a 64-bit machine. I believe VS 2010 is running in x86 mode.
http://www.dotnetspark.com/links/63264-stored-procedure-stopped-working-visual.aspx
Get a character from stdin

Synopsis:
#include <stdio.h>
int getchar( void );

Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:
The getchar() function is equivalent to getc() on the stdin stream.

Returns:
The next character from the input stream pointed to by stdin, cast as (int)(unsigned char), or EOF if an end-of-file or error condition occurs (errno is set).

Examples:
#include <stdio.h>
#include <stdlib.h>

int main( void )
{
    FILE *fp;
    int c;

    /* Get characters from "file" instead of stdin. */
    fp = freopen( "file", "r", stdin );

    while( ( c = getchar() ) != EOF ) {
        putchar( c );
    }

    fclose( fp );

    return EXIT_SUCCESS;
}
http://www.qnx.com/developers/docs/qnxcar2/topic/com.qnx.doc.neutrino.lib_ref/topic/g/getchar.html
As .NET Core has evolved, Entity Framework Core (EF Core) has grown increasingly sophisticated from one version to the next. EF Core was a ground-up rewrite of the tried-and-true Entity Framework that began its life in 2008 and matured as it grew to version 6, originally released in 2013. The vision with EF Core was to remove barriers to modernization by shedding EF's old code base. The first version was, well, a v1 product, and made great headway allowing teams building modern software with .NET Core to leverage the new EF Core APIs. As EF Core progressed to versions 2.0 and 2.1, it became, as I noted in the July/August 2018 issue of CODE Magazine, definitely production ready. In addition to some long-requested features that EF's architecture had prevented, the second generation of EF Core brought in many (but not all) of the features that EF users had relied on and were missing in the earlier EF Core.

The Big Picture

EF Core 3.0 was released in late September 2019 along with .NET Core 3.0 and ASP.NET Core 3.0. The focus of EF Core 3.0 was to tune it up to be a stable and reliable foundation for its future evolution. Although there are some new and interesting features implemented (for example, support for mapping to Azure Cosmos DB), the new feature list is overshadowed by tweaks, fixes, and the major work item of EF Core 3, which was to revisit how LINQ queries are executed by EF Core. Some of the work done to improve the APIs did result in something the EF team has been very cautious about since the beginning of EF: breaking changes. These weren't taken lightly. And the team has handled the breaking changes in a really transparent and responsible way. There is a detailed list of the breaking changes in the EF Core documentation, categorizing each as either low, medium, or high impact. And for each change, you'll find descriptions of the old and new behavior, the reasoning behind the change, and how you can mitigate the effect of those changes in your existing code base. I'll discuss some of the breaking changes in this article, but I recommend reading through that list to be aware of all these items.

The fact that there are a slew of breaking changes is interesting, especially because the team has been so conservative over the iterations. EF Core 2.0 did have a few breaking changes, but with EF Core 3.0, my feeling is that it's like "ripping the bandage off." If a few were necessary, such as renaming some of the raw SQL methods because of a big problem with interpolated strings, this was a good time to take care of other problems that would also result in breaking changes. In this way, breaking changes become such a big part of the message for EF Core 3.0 that, hopefully, developers will pay close attention to them.

One other important impact that .NET Core 3.0 has had on EF is related to the new capability of formerly .NET Framework-only applications, such as Windows Forms and WPF, now being able to run on .NET Core 3.0. Many of these older solutions use Entity Framework, so EF6 also needed to run on .NET Core. Starting with the newly released EF 6.3, not only can you use EF6 in .NET Framework apps, but also with .NET Core 3.0 apps. This is possible because, in addition to running on .NET 4.0 and .NET 4.5, EF 6.3 is cross-compiled to target .NET Standard 2.1. To test this out in an extreme case, I built a small ASP.NET Core 3.0 sample app in VS Code on my MacBook using EF 6.3 for its data access.

Changes to Development Dependencies

EF Core 3.0 depends on .NET Standard 2.1 and no longer supports .NET Standard 2.0. This is important to understand because .NET Standard 2.0 is what .NET Framework relies on. You can't use .NET Framework with .NET Standard 2.1. For example, see the EF Core 2.1 tutorial in the official docs, which walks you through creating a .NET Framework 4.6.1 console app with EF Core 2.0. With EF Core 3.0, you'll need to use .NET Core application types. Both WPF and Windows Forms have moved to .NET Core 3.0, so you do have a wide variety of application types available: these two as well as console and ASP.NET Core 3.0 apps. Currently Xamarin and UWP aren't running on .NET Standard 2.1, but they will be in the future, at which time you'll be able to use the new EF Core in those apps as well.

.NET Standard 2.1 is installed as part of the .NET Core 3.0 runtime. So, if you have the SDK installed for development, or the runtime installed for running apps, you'll have the right version of .NET Standard. It's also important to know that Visual Studio 2017 doesn't support .NET Standard 2.1. If you're using Visual Studio or Visual Studio for Mac, you'll need the 2019 version. And it's recommended that you have the latest update as well. Visual Studio Code with the latest OmniSharp extension supports .NET Standard 2.1.

EF Core and Tooling Packages

Getting EF Core into your apps and using the Migrations commands has changed in a big way. The Microsoft.EntityFrameworkCore, Microsoft.EntityFrameworkCore.SqlServer, and a few other EF Core NuGet packages were included with the Microsoft.AspNetCore.App (formerly Microsoft.AspNetCore.All) package reference in ASP.NET Core projects. If you were building other types of projects, you had to explicitly reference the relevant EF Core package(s). Starting with version 3, there's a consistent experience, which is that you need to explicitly specify the EF Core package(s) you want, no matter what type of project you're building. EF Core won't be "served up" with the ASP.NET packages. Nor will you have extraneous packages such as SqlServer and Relational if you don't need them. The EF Core command-line tools have been moving around across versions.
In the first versions of .NET Core, you had to reference the DotNetCliToolReference package in order to use EF's CLI migrations tools, such as dotnet ef migrations add. Then the EF tools were moved into the .NET Core 2.1 SDK, so they were just there. That meant that the release cycle of the EF tools was tied to the release cycle of the SDKs. Starting with 3.0, the EF tools are independent of the SDK and need to be installed explicitly. This is something I kept forgetting to do on my various development computers, but thankfully, the error messages were kind enough to remind me. You have two ways to ensure they are available. The first is to install the dotnet-ef tools globally onto your development computer with:

dotnet tool install -g dotnet-ef --version 3.0.0

The Great LINQ Query Overhaul

Although one important improvement to LINQ queries has the possibility of breaking many applications, there were a number of changes made to improve LINQ queries overall. The driver was to fix problems related to evaluating part of queries on the client. The original implementation was dependent on re-linq, which provides higher-level abstractions of LINQ expression trees. But this was preventing the EF team from making the types of changes required to fix the evaluation problems. So re-linq was removed and everything broken by this removal needed to be rewired. Let's take a look at the client evaluation problem that started this important transition.

EF Core introduced the ability to internally break queries apart, discover which pieces could be translated to SQL and executed, and then which pieces of the query needed to be evaluated on the client in memory using the results of the server-side (database) query. Although this became the default behavior for queries, it was possible to tweak a DbContext configuration to make any query that contained client-side logic throw an exception instead.

This feature was meant to simplify scenarios where you wrote LINQ queries that combined logic to be translated into SQL and executed on the server with logic that needs to be executed locally. Imagine, for example, that you have a local method to reverse a string:

private static string Reverse(string value)
{
    var stringChar = value.AsEnumerable();
    return string.Concat(stringChar.Reverse());
}

You could use that as part of a LINQ query:

_context.People
    .Select(p => new { p.FullName, Reverse = Reverse(p.FullName) })
    .FirstOrDefault();

EF Core is able to determine which parts of the query can be translated to SQL and which can't. It passes the SQL on to the database for execution, then performs the rest of the evaluation (e.g., Reverse) on the results after they've been returned from the database. Client evaluation can also let you use the local logic in your query filters. Imagine using the same Reverse method as the predicate of a Where LINQ method:

_context.People.Where(p => Reverse(p.FirstName) == "eiluJ").ToList();

Because EF Core can't translate Reverse into SQL, it first executes what it can to get the results and then performs the Reverse filter on the result set. With small sets of test data, this seemed innocuous. However, after releasing this into production where your People table might contain thousands or more rows, the query returns every row from the table into memory before performing the filter locally. The earlier version did provide a warning in the logs that part of the query would be evaluated locally:

warn: Microsoft.EntityFrameworkCore.Query[20500]
The LINQ expression 'where (Reverse([p].FirstName) == "eiluJ")' could not be translated and will be evaluated locally.

But this was too easy to miss. A lot of developers were experiencing performance and memory problems because they didn't realize that the filtering (or other processes such as grouping or sorting) was happening on the client. This could be because perhaps they weren't paying attention to the logs or these messages were buried among too many events in the logs. For EF Core 3.0, after much deliberation, the decision was made to change the behavior so that EF Core throws an exception when queries contain client-side evaluation:

System.InvalidOperationException: The LINQ expression '(p) => ReverseString(p.FirstName) == "eiluJ"' could not be translated. Either rewrite the query in a form that can be translated, or switch to client evaluation explicitly by inserting a call to either AsEnumerable(), AsAsyncEnumerable(), ToList(), or ToListAsync(). See for more information.

Although this switch of the configuration was the first step in this change to easily expose client-side evaluation, the team did much more than simply change the default. They revised how LINQ works when evaluating queries. As per the documentation, they made "profound changes to how our LINQ implementation works, and how we test it," with the goal "to make it more robust (for example, to avoid breaking queries in patch releases), to enable translating more expressions correctly into SQL, to generate efficient queries in more cases, and to prevent inefficient queries from going undetected." With respect to the earlier configuration to either throw an exception or log a warning, this option no longer exists. Client evaluation will always throw an exception.

Changing how LINQ queries are translated by EF Core was a major undertaking for the team. And they also had to ensure that the great variety of ways you can construct a query would continue to be possible. The team maintained a detailed and public to-do list of all the previously supported capabilities and their working (or not yet working) state in EF Core 3 as they iterated through the various previews. There is also a great explanation of the proposal for these changes on GitHub.

The key is to be aware of the portion of the query that will be evaluated on the client and write the query (or multiple queries) in a way that acknowledges that. There's one other acceptable way to include client-evaluated logic in the query that's not mentioned in the exception. If the method is in the final projection of the query, it will succeed. The first query I showed above, with the Select statement, will succeed in EF Core 3.0 because the Select method is the last projection being called in the query. There's no need to move the executing method before Select. Although, in the case of that query, it won't change anything about the execution or the results:

_context.People
    .ToList()
    .Select(p => new { p.FullName, Reverse = ReverseString(p.FullName) });

For the second example, where I was using Reverse to filter in a Where method, you need to rethink the query completely. Putting ToList() in front of the Where() method succeeds. However, it quietly forces EF to retrieve all of the Person rows from the database. In this case, the code is pretty explicit and you should be aware of the cause and effect. Performance tests and logging are also your friends! In the case of my particular query, it's an easy change: Just filter on the first name without it being reversed. The effect of moving the executing method is most obvious, I think, with OrderBy. This method will throw an error in EF Core 3.0:

_context.People
    .OrderBy(p => ReverseString(p.FullName))
    .ToList();

Calling ToList and then ordering the results explicitly succeeds:

_context.People
    .ToList()
    .OrderBy(p => ReverseString(p.FullName));

Keep in mind that for frequently used methods that you want to incorporate into queries, you have the option of creating scalar functions in your database and mapping them with DbFunctions.
I created a module about this feature in one of my Pluralsight courses called EF Core 2 Beyond the Basics: Mappings. The bottom line is that the LINQ query overhaul accomplished a number of important goals. One was to relieve the side effect of the client evaluation creating performance and memory problems. Another was to simplify testing for provider writers. Most importantly, the rewrite has set up a foundation to make additional translations easier to implement and eventually improve performance as well. The ripple effect of the change required that they test every query translation possible. My examples were simple, but imagine this with relationships or owned entities across many providers.

Renamed Raw SQL Methods

Another set of breaking changes important to highlight:

- The method names for raw SQL queries have changed.
- The method names for raw SQL queries were moved from being extension methods on IQueryable<T> to extension methods on DbSet<T>.
- As a result of the extension method move, you can only call the extension methods at the beginning of a query on the DbSet.

EF Core 2.0 enabled the use of interpolated strings in the raw SQL methods. When executing the methods, EF Core didn't always interpret the interpolation as expected. In some cases, this problem could lead EF Core away from its parameterized query protection. One example of the unexpected behavior is that ExecuteSqlCommand was able to parameterize interpolated strings that are part of the method call but not if they were passed in via another variable. This was a behavior of C#, which eagerly transforms any interpolated string it sees into a string, yet the raw SQL methods expect a FormattableString to be passed in. The next snippet is an example that throws an exception in EF Core 2.2. At least that's a better response than allowing a SQL injection attack. Line wraps create yet another problem, but imagine that myCommand is on a single line:

var first = "Smit";
var last = "Patel";
var myCommand = $"INSERT INTO People (FirstName, LastName) VALUES ({first}, {last})";
_context.Database.ExecuteSqlCommand(myCommand);

Here myCommand becomes the string:

INSERT INTO People (FirstName, LastName) VALUES (Smit, Patel)

The database thinks Smit and Patel should be column names. You can see examples of the varying behavior in this GitHub issue. The bottom line is that the team needed to handle plain strings differently from interpolated strings, and the most clear-cut way to ensure that developers would get the results they expected was to remove the generally named methods (FromSql, ExecuteSqlCommand, and their async counterparts) and replace them with explicit methods (FromSqlRaw and FromSqlInterpolated, ExecuteSqlRaw, etc.). The Interpolated methods take a FormattableString object as the parameter. The last line of the above example changes to:

_context.Database.ExecuteSqlInterpolated(myCommand);

You'll get a compiler error because myCommand is a string. Just change that to explicitly be a FormattableString:

FormattableString myCommand = $"INSERT INTO People ..."

Realigning the QueryType Implementation

Query types were introduced in EF Core 2.1 to enable querying data into types that don't have keys. The simplest example is having a type that maps to a database view. This is such a great addition to EF modeling. I wrote about query types in "Entity Framework Core 2.1: Heck Yes, It's Production Ready!" in the July/August 2018 issue of CODE Magazine. The implementation was somewhat confusing. The feature introduced "cousins" to DbSet (DbQuery) and EntityTypeBuilder (QueryTypeBuilder). I wrote about query types and dug even deeper in my EF Core 2.1: What's New course on Pluralsight, and found it a bit convoluted to explain the ins and outs of configuring the DbContext and writing LINQ queries. In the long run, most of the rules are the same as working with DbSets and EntityTypeBuilder.
So the EF Core team decided that for EF Core 3.0, they'd make everything an entity whether it has keys or not. The underlying behavior to differentiate entities with keys and without keys still exists, and you can indicate keyless entities to the ModelBuilder simply by configuring an entity as having no key with modelBuilder.Entity<TEntity>().HasNoKey(). With this, you can use DbSets and EntityTypeBuilders for all the mapped types and not have to worry about the extra set of rules and classes that went along with query types.

Reverse Engineering Database Views into Entities

Now that there are keyless entities, it was a lot easier for the team to add database views into the scaffolding process. Here's a simple view that I added to my database:

CREATE VIEW [dbo].[PeopleNamesView]
AS
SELECT FirstName, LastName FROM People

You can scaffold with PowerShell in VS or with the CLI. Here's a CLI command I used:

dotnet ef dbcontext scaffold "Server=(localdb)\MSSQLLocalDB;Database=PeopleDb" Microsoft.EntityFrameworkCore.SqlServer

This generated a class:

public partial class PeopleNamesView
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

More importantly, in the DbContext, it added a DbSet:

public virtual DbSet<PeopleNamesView> PeopleNamesView { get; set; }

And it created a mapping for the entity that specifies that it's keyless and maps to a view:

modelBuilder.Entity<PeopleNamesView>(entity =>
{
    entity.HasNoKey();
    entity.ToView("PeopleNamesView");
});

Now the view can be used as a keyless entity.

The New Azure Cosmos DB Provider

In my opinion, the most interesting (and to some, perhaps the most curious) feature introduced in EF Core 3.0 is the ability to use EF Core to interact with an Azure Cosmos DB database. The provider's NuGet package name is Microsoft.EntityFrameworkCore.Cosmos. For those of you unfamiliar with this database, it's Microsoft's cloud-based multi-model NoSQL database service. The data is stored as JSON documents and the underlying database engine allows you to interact with your data using SQL, MongoDB, Cassandra, Gremlin, or Table (key/value pair) APIs. I've done a lot of work with Azure Cosmos DB over the years, mostly using the SQL query language and sending the queries either with the NodeJS SDK or the .NET client APIs. Some may be curious why this new Cosmos DB provider exists for EF Core: Why use an ORM (Object Relational Mapper) to map to a data store that isn't relational? For developers accustomed to working with EF to interact with their data store, it's an easy leap to use EF when they want to target Cosmos DB. Additionally, having used the NodeJS SDK and .NET client API, I really like the fact that EF Core can take care of all of the redundant setup for you that normally requires a lot of additional code to define objects to represent the Cosmos DB connection, the database, the containers, and even the queries. With the EF Core provider, you can just define the connection settings in the options builder:

optionsBuilder.UseCosmos("endpoint", "account key", "databasename");

The endpoint and account key are tied to the pre-existing Cosmos DB account, and the database name is for one that already exists or one you will let EF Core create. I've created variables for the endpoint and the account key (whose value I've abbreviated). The database name (not yet existing) will be CosmicPeople:

var endpoint = ""
var key = "S8nBvUYAQ4ZKboQ1Q97RPDuzlBQzV . . ."
optionsBuilder.UseCosmos(endpoint, key, "CosmicPeople");

With this in place, you can write and execute LINQ queries, and add, update, or delete data just as you would with any other EF Core provider. And all of these tasks leverage the investment you've already made in learning EF. One other common question is about this provider's ability to create a new Cosmos DB database on your behalf. EF Core does this just as it would with any other database provider. However, like those other databases that can create a database for you on an existing database server, the Cosmos DB provider is only able to create databases on an existing Cosmos DB account. And it's the account whose settings are critical with respect to cost and performance. EF Core can't affect those settings. Because the Cosmos DB APIs aren't (yet) interchangeable, that account does need to be created for the SQL API. Each database has one or more containers and, because Cosmos DB is designed to handle large amounts of data, partitioning is important as well.

It's important to understand how different this is from a relational database. Data is stored as JSON documents in the database. A container isn't the same as a table. Differently structured documents can be stored in a single container as needed, and a single document can contain a hierarchy of data.

Some important defaults of EF Core's Cosmos provider to be aware of are:

- EF Core associates a single DbContext with a container within the database. You can use the ToContainer() mapping to associate specific entities with containers.
- The container name is based on the context name, although you can use the HasDefaultContainerName() method to change the name.
- The partition key is null by default and all data is stored in the default partition of the database. You can configure entities to use specific partition keys with HasPartitionKey(), based on a property or a shadow property of an entity. If you configure one entity in a context, you need to configure all of them.
- The provider injects a Discriminator property to tag each document with the name of its type so that EF Core can retrieve data.

Entities are stored as separate documents, which means that you'll need to leverage some relational concepts in order to reconnect that data when you retrieve it.
For example, if I build a Person object with an Address in the Addresses property such as:

var person = new Person
{
    Id = Guid.NewGuid(),
    FirstName = "Shay",
    LastName = "Rojansky"
};
person.Addresses.Add(new Address
{
    Id = Guid.NewGuid(),
    AddressType = AddressType.Work,
    Street = "Two Main",
    PostalCode = "12345"
});
_context.People.Add(person);
_context.SaveChanges();

Figure 1 shows the two stored documents that result from this code. Notice the Discriminator property that EF Core added to each document. Although the related entities aren’t stored together, EF Core’s owned entities get stored as a sub-document. For example, here’s a new class (PersonWithValueObject) with a Name property whose type (PersonName) is configured as an owned entity. My code creates a new PersonWithValueObject with the Name property defined.

var person = new PersonWithValueObject
{
    Id = Guid.NewGuid(),
    Name = new PersonName("Maurycy", "Markowski")
};

Figure 2 shows that EF Core stored the owned entity as a sub-document. It still uses a Discriminator so that EF Core can properly materialize the object when you retrieve it with LINQ queries. There’s more to learn, of course, about interacting with Cosmos DB. You can check out the documentation as well as my two-part article about an earlier preview of the Cosmos provider in MSDN Magazine ().

Additional Notable Changes

Although I can’t describe every change and new feature, I’ll briefly highlight a few more that are interesting and could apply to a lot of use cases.

Support for New C# 8.0 Features

C# 8.0 brings us some new super cool features and EF Core 3.0 is there for some of them. For example, support for asynchronous streaming, thanks to await foreach and the standard IAsyncEnumerable<T> interface. Nullable and non-nullable reference types, also new to C# 8.0, are supported by EF Core 3.0 as well. This C# 8.0 feature allows you to explicitly permit (or forbid) nulls in reference types like string.
EF Core recognizes nullable reference types. This works for mappings and queries, but not yet for scaffolding from an existing database. For example, if I add a new nullable string property to Person:

public string? Middle { get; set; }

The type is recognized by EF Core and a relevant column is created for the database table. Queries handle the nullable reference type as well, such as in the following query. Being new to nullable reference types, I expected to use p.Middle.HasValue() in the query, but nullable reference types don’t have the same behaviors as nullable value types like int?.

var people = _context.People
    .Where(p => p.Middle != null).ToList();

Interceptors

EF6 brought the great capability of intercepting events along the query pipeline, which is a feature we’ve been eagerly awaiting to return in EF Core. And EF Core 3.0 brings new APIs to perform the same type of functionality. There are four classes that implement the IInterceptor interface. They’re currently limited to the relational providers.

- DbConnectionInterceptor
- LazyLoadingInterceptor
- DbTransactionInterceptor
- DbCommandInterceptor

Each class surfaces a slew of methods that you can tap into. Like EF6, you begin by defining an interceptor class that inherits from one of the above classes. Then you can register it to a provider on a particular context. For example:

using System.Data.Common;
using Microsoft.EntityFrameworkCore.Diagnostics;

public class MyInterceptor : DbCommandInterceptor {}

You can then override one of the many methods of the interceptor. For DbCommandInterceptor, you have access to methods such as CommandCreated, CommandCreating, and NonQueryExecutingAsync. If you used interceptors in EF6, these will be familiar. DbCommandInterceptor.ReaderExecuted gives you access to the command as well as the results of the executed command in the form of a DbDataReader.
public override DbDataReader ReaderExecuted(
    DbCommand command,
    CommandExecutedEventData eventData,
    DbDataReader result)
{
    // do something
    return result;
}

You can register one or more derived interceptors using a provider’s AddInterceptors method. Here, I’m using the DbContext’s OnConfiguring method.

optionsBuilder
    .UseSqlite("Filename=CodePeople.db")
    .AddInterceptors(new MyInterceptor());

In ASP.NET Core, you’d do this as you register a DbContext in the Startup’s ConfigureServices method, e.g.:

services.AddDbContext<PersonContext>(options => options
    .UseSqlServer(connectionstring)
    .AddInterceptors(new MyInterceptor()));

There are other ways to register interceptors, but this will likely be the most common approach.

N+1 Queries for Projections and Includes

Projections and Includes always caused N+1 queries to get the related data, even in cases where expressing the query in a single SQL statement seemed like the obvious thing to do. EF Core 3.0 fixes this problem. In earlier versions of EF Core, you might have an eager-loaded query such as:

context.People.Include(p => p.Addresses).ToList();

This first executes a query to get the person types and then executes a second query to get address types, using an inner join to relate them to the people table. In EF Core 3.0, the query is much simpler, a single LEFT JOIN query:

SELECT "p"."Id", "p"."Dob", "p"."FirstName", "p"."LastName", "p"."Middle",
       "a"."Id", "a"."PersonId", "a"."PostalCode", "a"."Street"
FROM "People" AS "p"
LEFT JOIN "Address" AS "a" ON "p"."Id" = "a"."PersonId"
ORDER BY "p"."Id", "a"."Id"

The N+1 problem also affected projection queries where a single related object was part of the projection, such as this query where I’m retrieving only the first address for a person.

_context.People
    .Select(p => new {
        p.LastName,
        OneAddress = p.Addresses.FirstOrDefault()
    })
    .ToList();

This resulted in four queries!
The same happens if you get a list of the related data, e.g.:

_context.People
    .Select(p => new {
        p.LastName,
        AllAddresses = p.Addresses.ToList()
    })
    .ToList();

In EF Core 3.0, both of these queries are now reduced to a single query using a LEFT JOIN. The query for the single related result is a little more complex (to my non-DBA eyes) but still does the trick!

Some Personal Favorites

There are three final things I want to call out because they make me happy. The first is related to something I discovered when trying things every which way for one of my EF Core courses on Pluralsight. In previews of EF Core 2.0, the change tracker wasn’t fixing up navigations from projection queries that included related data, such as the Person with Address queries above. The Addresses were retrieved and tracked by the change tracker, but if you examined the resulting Person types, the Addresses property was empty. This was fixed before EF Core 2.0 was released, but there was a regression in a subsequent minor version () that caused a variation on the original problem. If a projection included an entity along with a filtered set of related entities, those entities weren’t attached to the parent when the change tracker performed its "fix up." Here’s an example of a query exhibiting this problem. The projection returns person types and filtered related addresses.

_context.People
    .Select(p => new {
        Person = p,
        Addresses = p.Addresses.Where(
            a => a.Street.Contains("Main"))
    })
    .ToList();

I was happy to have a notification pop into my email inbox a few weeks before EF Core 3.0 was released telling me that the fix had finally been implemented. The second change is tiny but fixed a hugely annoying behavior. The implementation of SQLite that was used by earlier versions of EF Core didn’t have foreign key enforcement on by default, so EF Core had to enable it at the start of every DbContext interaction with the database. It did so by sending the command PRAGMA foreign_keys=1.
For me, it was less of a performance issue than clutter in my logs. EF Core 3 now uses a new implementation of SQLite (by default), which has foreign key enforcement on by default. And last but not least, a fix that is near and dear to my heart because it improves my ability to use Domain-Driven Design value objects. In earlier versions of EF Core, entities that are dependents in a relationship, with their properties mapped to the same table as the principal in the relationship, couldn’t be optional. Under the covers, EF Core treats owned entities (which aren’t true entities and aren’t truly related) as if they were dependent entities. Owned entities are classes that don’t have their own key property, are used as properties of entities, and are mapped to their owner using either the OwnsOne or OwnsMany mapping in the DbContext. Owned entities are how EF Core is also able to map value objects in the DDD aggregates. The non-optional dependent meant that, regardless of your business rules, a value object property could never be null. There was a workaround for this which I wrote about in MSDN Magazine (). Thankfully, EF Core 3.0 fixes the base problem and I’m able to remove the workaround from my code. Dependents that share a table with principals and owned entities (and therefore value objects) are now optional and can now be left null if your business rules allow that. EF Core can persist and retrieve the parent, comprehending the nulls. Final Thoughts Watching the EF team go through the process of reworking EF Core for this version has been impressive. Tightening up the APIs and embarking on the revision to the query pipeline was a major undertaking. They shared with us detailed weekly status updates () and even shared the burn-down chart that they used internally to track their progress. Setting a goal for themselves of making EF Core 3.0 be a solid foundation for future innovation is a great indication of the planned longevity of EF Core. 
And they brought EF 6 (with version EF 6.3) into the .NET Core family as well, so those of us with investments in EF6 can continue to benefit from that.
https://www.codemag.com/Article/1911062/Entity-Framework-Core-3.0-A-Foundation-for-the-Future
Stand-alone C++ code for exp(x) - 1

If x is very small, directly computing exp(x) - 1 can be inaccurate. Numerical libraries often include a function expm1 to compute this function. The need for such a function is easiest to see when x is extremely small. If x is small enough, exp(x) = 1 in machine arithmetic and so exp(x) - 1 returns 0 even though the correct result is positive. All precision is lost. If x is small but not so extremely small, direct computation still loses precision, just not as much. We can avoid the loss of precision by using a Taylor series to evaluate exp(x):

exp(x) = 1 + x + x²/2 + x³/6 + ...

If |x| < 10⁻⁵, the error in approximating exp(x) - 1 by x + x²/2 is on the order of 10⁻¹⁵ and so the relative error is on the order of 10⁻¹⁰ or better. If we compute exp(10⁻⁵) - 1 directly, the absolute error is about 10⁻¹⁶ and so the relative error is about 10⁻¹¹. So by using the two-term Taylor approximation for |x| less than 10⁻⁵ and the direct method for |x| larger than 10⁻⁵, we obtain at least 10 significant figures for all inputs.

#include <cmath>
#include <iostream>

// Compute exp(x) - 1 without loss of precision for small values of x.
double expm1(double x)
{
    if (fabs(x) < 1e-5)
        return x + 0.5*x*x;
    else
        return exp(x) - 1.0;
}

void testExpm1()
{
    // Select a few input values
    double x[] = { -1, 0.0, 1e-5 - 1e-8, 1e-5 + 1e-8, 0.5 };

    // Output computed by Mathematica
    // y = Exp[x] - 1
    double y[] = { -0.632120558828558, 0.0, 0.000009990049900216168,
                   0.00001001005010021717, 0.6487212707001282 };

    int numTests = sizeof(x)/sizeof(double);
    double maxError = 0.0;
    for (int i = 0; i < numTests; ++i)
    {
        double error = fabs(y[i] - expm1(x[i]));
        if (error > maxError)
            maxError = error;
    }
    std::cout << "Maximum error: " << maxError << "\n";
}

This code is in the public domain. Do whatever you want to with it, no strings attached. Other versions of the same code: Python, C#

Stand-alone numerical code
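The page links to a Python version of the same code. As a quick illustration of the precision loss described above, here is a hedged Python sketch of the same two-term approach (the function name expm1_twoterm is mine, to avoid shadowing the standard math.expm1):

```python
import math

def expm1_twoterm(x):
    """Two-term Taylor fallback, mirroring the C++ code above."""
    if abs(x) < 1e-5:
        return x + 0.5 * x * x
    return math.exp(x) - 1.0

x = 1e-16  # smaller than half the machine epsilon (~1.1e-16)
print(math.exp(x) - 1.0)   # 0.0 -- exp(1e-16) rounds to exactly 1.0
print(expm1_twoterm(x))    # 1e-16 -- the significant figures survive
```

For larger arguments the two branches agree to machine precision, which is what the testExpm1 routine above checks against Mathematica.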
http://www.johndcook.com/cpp_expm1.html
Send Mails from within a .NET 2.0 Application

Mail Client Implementation

To test the SendMail method, add a Visual C# Windows Forms application named MailClient to the existing solution and add a project reference to the MailServer using the Project->Add Reference dialog box. Once the reference is added, add a button control to the Form and double-click on the button control to bring up the Click event of the control in the code window. Modify the Click event of the button control to look as follows:

private void btnSendMail_Click(object sender, EventArgs e)
{
    MailService service = new MailService();
    string from = "thiruthangarathinam@yahoo.com";
    string to = "thiruthangarathinam@yahoo.com";
    string subject = "Sending Mails from .NET 2.0";
    string body = "This is the body of the message";
    service.SendMail(from, to, subject, body);
    MessageBox.Show("Mail sent");
}

As you can see, the implementation of the Click event is very simple and straightforward. You just create an instance of the MailService class and then simply call its SendMail method, passing in the required parameters. If everything is successful, you will get a message box indicating that the message has been sent successfully.

Including CC and BCC Addresses

public void SendMail(string from, string to, string CC, string BCC, string subject, string body)
{
    string mailServerName = "smtp.test.com";
    try
    {
        // MailMessage represents the e-mail being sent
        using (MailMessage message = new MailMessage(from, to, subject, body))
        {
            if (CC != null)
            {
                if (CC.Trim().Length > 0)
                {
                    MailAddress CCAddress = new MailAddress(CC);
                    message.CC.Add(CCAddress);
                }
            }
            if (BCC != null)
            {
                if (BCC.Trim().Length > 0)
                {
                    MailAddress BCCAddress = new MailAddress(BCC);
                    message.Bcc.Add(BCCAddress);
                }
            }

In the modified version of the SendMail method, the main difference is the addition of CC and BCC parameters. These parameters are added to the CC and BCC properties of the MailMessage object, respectively.
The CC and BCC properties of the MailMessage object return an instance of MailAddressCollection, to which you add individual MailAddress objects as follows:

if (CC != null)
{
    if (CC.Trim().Length > 0)
    {
        MailAddress CCAddress = new MailAddress(CC);
        message.CC.Add(CCAddress);
    }
}
if (BCC != null)
{
    if (BCC.Trim().Length > 0)
    {
        MailAddress BCCAddress = new MailAddress(BCC);
        message.Bcc.Add(BCCAddress);
    }
}

If you are adding multiple recipients to the CC address, you need to create that many instances of the MailAddress object and add them to the MailMessage.CC property. Note that the above example assumes there will be only one addressee listed in the CC address.

Sending Attachments

What you have seen so far are only basic functionalities of the System.Net.Mail namespace. There is a lot more to the System.Net.Mail namespace. For example, by using the System.Net.Mail classes, you can send attachments as part of the mail messages. To accomplish this, you need to use the Attachment class, which provides a lot of flexibility in that you can load an attachment from a file or from a stream. Once you create an instance of the Attachment class, you then add it to the AttachmentCollection by calling the MailMessage.Attachments.Add method.
The following code shows the code required:

Attachment attach = new Attachment(attachmentPath);
message.Attachments.Add(attach);

Now that you have seen the code required for adding attachments, modify your SendMail method to send attachments as follows:

public void SendMail(string from, string to, string subject, string body, string attachments)
{
    // ... create the SmtpClient and MailMessage inside a try block, as shown earlier ...

        if (attachments != null)
        {
            attachments = attachments.Trim();
            if (attachments.Length != 0)
            {
                // Get the list of semicolon-separated
                // attachments into an array
                string[] arr = attachments.Split(';');
                foreach (string str in arr)
                {
                    Attachment attach = new Attachment(str);
                    message.Attachments.Add(attach);
                }
            }
        }

        // Send delivers the message to the mail server
        mailClient.Send(message);
    }
    catch (SmtpException ex)
    {
        throw new ApplicationException("SmtpException has occurred: " + ex.Message);
    }
    catch (Exception ex)
    {
        throw ex;
    }
}

In the above code, the attachments are sent in the form of semicolon-separated values in a single string value to the SendMail method. Then, you split the attachments into an array, loop through the array, and add all the attachments by calling the MailMessage.Attachments.Add method, as follows:

if (attachments != null)
{
    attachments = attachments.Trim();
    if (attachments.Length != 0)
    {
        // Get the list of semicolon-separated
        // attachments into an array
        string[] arr = attachments.Split(';');
        foreach (string str in arr)
        {
            Attachment attach = new Attachment(str);
            message.Attachments.Add(attach);
        }
    }
}

Once you add the attachments to the AttachmentCollection, the attachment will be embedded automatically in the mail when it is sent. That's all there is to sending attachments as part of the mail.

Asynchronous Approach to Sending Mail

Sometimes, you may want to send mail asynchronously. For example, if you are sending a lot of mail through your application, the synchronous approach might not work. In such a scenario, you can use SendAsync.
Before using the SendAsync method, you need to set up the SendCompleted event handler. The following code shows the implementation required:

mailClient.SendCompleted += new SendCompletedEventHandler(SendCompletedHandler);
mailClient.SendAsync(message, null);

The SendCompletedHandler is declared as follows:

void SendCompletedHandler(System.Object sender, AsyncCompletedEventArgs e)
{
    // Code to process the completion
}

The AsyncCompletedEventArgs object supplied to the SendCompletedEventHandler delegate exposes a property named Error that will allow you to check whether an error has occurred during an asynchronous operation. The Error property returns an object of type Exception that you can examine to determine the exact cause of the error condition.

Generating the Contents of the Mail

Generally, when you send HTML-based e-mails, you can generate the body of the mail in different ways. For example, you can directly hardcode the body of the mail in the class that sends out the mail. The problem with this approach is that whenever you need to change the format of the mail, you need to change the code in the component, recompile it, and redeploy it on the server. This makes maintaining the component a nightmare. Fortunately, you can use XML in conjunction with XSL to solve this problem. In this approach, you can dynamically generate the data required for the mail in XML format and then apply an external XSL style sheet to transform the XML data into HTML.
By using the following helper function, you can generate the body of the mail in the form of a string, which then can be used to set the Body property of the MailMessage object:

public string GenerateMailBody(string inputXml, string xslPath)
{
    XmlDocument xmlDoc = new XmlDocument();
    xmlDoc.LoadXml(inputXml);
    XslCompiledTransform transform = new XslCompiledTransform();
    // Load the XSL stylesheet into XslCompiledTransform
    transform.Load(xslPath);
    StringWriter writer = new StringWriter();
    // Transform the XML contents into HTML by applying the XSL
    transform.Transform(xmlDoc, null, writer);
    string bodyOfMail = writer.ToString();
    return bodyOfMail;
}

The GenerateMailBody function takes in the input XML string and loads it into an XmlDocument object. Then, it creates an instance of the new XslCompiledTransform object (which is one of the new classes in the .NET Framework 2.0 used for XSL transformations). After that, you invoke the Transform method of the XslCompiledTransform object to transform the XML into HTML. To the Transform method, you also supply the StringWriter object as an argument so that the output of the transformation can be captured in that object. Then, you simply return the string value of the StringWriter object back to the caller. By simply reusing the above helper method, you can generate the right body of the HTML mail based on the input XML data and XSL stylesheet.

Mail Processing Facelift

The .NET Framework 2.0 provides improvements in almost all areas, and mail processing is one of them. In this article, you have seen how to send mail by using the new features of .NET Framework 2.0. You also learned how to send attachments by using the Attachment class. Then, you studied the use of XML and XSL in generating the body of a mail. Although the examples were simple in functionality, they should provide a solid foundation for understanding the use of new mail-processing features in .NET Framework 2.0. Download the Accompanying Code
http://www.developer.com/net/net/article.php/11087_3511731_2/Send-Mails-from-within-a-NET-20-Application.htm
simpleaudio 00:00 simpleaudio is another cross-platform library that’s made for playing back WAV files. A thing to note is that you can wait until the sound stops playing to continue on to the next line of code. Go ahead and install it into your environment using pip, and just say install simpleaudio. 00:25 While that’s installing, we can head over to the text editor and try it out. So, import simpleaudio as sa. Then define a filename, which in my case is going to be 'hello.wav', and then make a wave_obj, which will be sa.WaveObject, and you’re going to make this from a WAV file, and pass in that filename. 00:59 From here, you can go ahead and say play_obj and make this equal to the wave_obj and call the .play() method off of it. And if you want to wait until that’s completed, you can then say play_obj.wait_done(). 01:19 That’ll make sure that the sound file has finished playing before continuing. So, save this and try it out! 01:31 “Hey there, this is a WAV file.” All right! That’s pretty cool! So, WAV files are sequences of bits with metadata stored as header information. There’s always a trade-off between sound quality and file size, however, so you’ll have to decide which is more critical for your application. 01:49 WAV files are considered uncompressed, but are defined by their sample rate and the size of each sample. The standard for music on a CD is a 16-bit sample, recorded at 44,100 samples per second, but this might not be necessary for things like speech. You could, for example, drop the sampling rate down to maybe 8,000 per second and enjoy some large file size improvements. 02:14 You might be wondering what this has to do with us using Python to play sounds, and it’s a good question. Some of the libraries you’ll learn about treat audio files as bytes, while others use NumPy arrays. simpleaudio can use both, so let’s see how this works in the editor. Go ahead and install NumPy, 02:37 and then import it into your script. 
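The sampling-rate tradeoff described in the lesson is easy to quantify. As a rough sketch (assuming uncompressed, single-channel PCM and ignoring the small header overhead):

```python
# Approximate raw WAV data size: samples/sec * bytes/sample * seconds * channels
def wav_bytes(sample_rate, bits_per_sample, seconds, channels=1):
    return sample_rate * (bits_per_sample // 8) * seconds * channels

cd_quality = wav_bytes(44100, 16, 60)   # one minute of CD-quality mono
speech = wav_bytes(8000, 16, 60)        # one minute at a speech-friendly rate
print(cd_quality, speech)               # 5292000 960000
```

Dropping from 44,100 to 8,000 samples per second shrinks the file by more than 80 percent, which is the kind of improvement the lesson alludes to for speech.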
02:44 I’m going to delete the rest of this here, ‘cause we’re not going to need it—because instead, you’re going to be generating your own sound by creating it in a NumPy array. 02:53 NumPy arrays are mutable, unlike the bytes objects that you’d be getting from a WAV file, which make it better suited for making sounds and any type of audio processing. 03:04 So in your editor, go ahead and define a frequency and set this equal to 440, and then make your sampling size, which will be 44100. 03:17 And then for seconds, let’s say 3. Now make an array, which will just be an np and a linearly-spaced array from 0 to seconds, and it will contain seconds times the sampling rate. 03:40 All right, now make that 440 Hz sine wave, so note is just going to equal an np.sin() function, and in here pass in your frequency * t * 2, and then multiply that by pi. 04:02 And you’re going to want to make sure that the highest value is in that 16-bit range, 04:10 so you can say note, and then multiply this by 2**15 - 1, and then divide that by the largest absolute value inside note. 04:28 Make this 16-bit data by just reassigning audio to audio.as_type(np.int16). So with this, you can make your play_obj, which will now be an sa.play_buffer(), 04:55 which you’ll pass in audio, 1, 2, and then your sampling rate. And then like before, take that play_obj and call .wait_done() off of it. 05:08 All right. Let’s see if this works. 05:17 All right! So if you heard anything, that’s your computer generating a 440 Hz tone—or an A4 note, if you’re into music. If you’ve ever tuned a guitar off of a tuning fork, there’s a good chance that’s the note that was playing. All right! 05:32 So now you not only know how to play audio files, you can create your own audio sounds using NumPy arrays. In the next video, you’re going to learn how to use winsound, which only works with WAV files on Windows machines. 05:45 But it’s pretty straightforward, so it’s still definitely worth covering. Thanks for watching. 
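The waveform steps narrated above, collected into one runnable sketch. Only NumPy is required to build the buffer; the playback lines are commented out because they need the simpleaudio package:

```python
import numpy as np

frequency = 440        # A4, the tuning-fork note mentioned in the lesson
sample_rate = 44100
seconds = 3

# Time points covering `seconds` worth of samples
t = np.linspace(0, seconds, seconds * sample_rate, False)

# 440 Hz sine wave
note = np.sin(frequency * t * 2 * np.pi)

# Scale so the peak fits the 16-bit range, then convert to 16-bit samples
audio = note * (2**15 - 1) / np.max(np.abs(note))
audio = audio.astype(np.int16)

# import simpleaudio as sa
# play_obj = sa.play_buffer(audio, 1, 2, sample_rate)
# play_obj.wait_done()
```

Note that the method is astype (not as_type, as the transcript's dictation might suggest), and play_buffer takes the audio data, the number of channels, the bytes per sample, and the sample rate.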
Pygator on March 1, 2020

What is 2**15 - 1 doing? And it’s not clear what you did with the play buffer call. Thanks.

Hi Pygator,

This one is a bit more complicated as it synthesizes a sound from a sine function. Since the goal is to produce a 16-bit sound, 2**15 - 1 will create the ‘ceiling’ for the volume, applied to the note array made earlier to scale the data. Because audio is now an array that maxes out at 16 bits, it can be passed into play_buffer to generate a tone. It accepts arguments for the audio data, number of channels, number of bytes per sample, and the sample rate.

Thanks, the 16 bits part makes more sense.
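The "ceiling" in the reply is exactly the largest value a signed 16-bit integer can hold, which is easy to confirm:

```python
import numpy as np

info = np.iinfo(np.int16)    # integer range metadata for the int16 dtype
print(info.min, info.max)    # -32768 32767
print(2**15 - 1)             # 32767 -- the scaling ceiling used in the lesson
```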
https://realpython.com/lessons/simpleaudio/
While XML literal features in Visual Basic get all the love, the new XElement API for the CLR makes working with XML in C# a bit more fun, too. It's a prime cut of functional programming spiced with syntactic sugar. One example is how the API works with XML namespaces. When namespaces are present, they demand attention in almost every XML operation you can perform. It's like a tax you need to pay that doesn't pay back any benefits. An old poll on xml-dev once asked people to list their "favorite five bad problems" with XML, to which Peter Hunsberger replied: And in a different message, Joe English hit the nail on the head: I'd rather treat element type names and attribute names as simple, atomic strings. This is possible with a sane API, but most XML APIs aren't sane. The API we had pre .NET 3.5 was a fill-out-this-form-in-triplicate-and-wait-quietly-in-line bureaucracy living inside System.Xml. The new API tries to be a bit saner: Which produces: It's the little things you don't notice at first that make the API easier. Like XNamespace has an implicit conversion from string, and redefines the + operator to combine itself with a string to form a full XName (which also has an implicit string conversion operator). Someone spent some time designing this API for users instead of for a standards body, and it's much appreciated.

Subversion and assorted tools or Team Foundation Server. LINQ to SQL or Castle Active Record. Entity Framework or NHibernate. Choose Microsoft, or choose OSS?

- Business Risks: License issues, Lack of formal support, Hard to hire experts
- Technical Risks: V1 and V2 won't always work, Waiting on bug fixes
- Friction: Small communities, Lack of training material?

At the last CMAP Code Camp I did a "code-only" presentation entitled "A Gentle Introduction to Mocking". We wrote down some requirements, opened Visual Studio, and started writing unit tests. Matt Podwysocki provided color commentary. Code download is here.
I started "accepting" mock objects as one tool in my unit testing toolbox about three years ago (see "The 5 Stages Of Mocking"). Times have changed quite a bit since then, and the tools have improved dramatically. During the presentation we used the following: Rhino Mocks – the first mocking framework used in the presentation. Years ago, Oren and Rhino Mocks saved us from "string based" mock objects. Rhino Mocks can easily conjure up a strongly typed mock object. The strong typing results in fewer errors, and greatly enhances the refactoring experience. moq – is the latest mocking framework in the .NET space and is authored by kzu and friends. moq uses lambda expressions and expression trees to define mock object behavior, and also provides strongly typed mocks. The recent addition of factories and mock verification means you can do traditional interaction style testing with moq, if that is the path you choose. The primary differentiator between the two frameworks is that moq does not use a record / playback paradigm. Here is a test we wrote with Rhino Mocks: The same test using moq: xUnit.net – although not featured in the presentation, xUnit.net drove all the unit tests. xUnit is a new framework authored by Jim Newkirk and Brad Wilson. The framework codifies some unit testing best practices and takes advantage of new features in the C# language and .NET framework. I like it. One question that came up a few times was "when should I use a mock object framework"? Turns out I've been asked a lot of questions starting with when lately, so I'll answer that question in the next post. A dictionary definition of principle often uses the word "law", but principles in software development still require judgment. Sometimes the judgment requires some technical knowledge, like knowing the strengths and weaknesses of a particular technology. Other times the judgment requires some business knowledge, like the ability to anticipate where change is likely to occur. 
Asking someone to make a sensible judgment about a principle is difficult when all you see is a snippet of code in a blog. The code is outside of its context. Take Leroy's BankAccount class. We don't really know what sort of business Leroy works for, or even what type of software Leroy is building. Nevertheless, let's apply a few principles to see what's bothering Leroy. Does Leroy's original BankAccount class violate the Single Responsibility Principle? I think so. The class is opening text files for logging, calculating interest, and oh, by the way, it needs to provide all the state and behavior for a financial account, too. Even without knowing the context, it seems reasonable to remove the auditing cruft into a separate class. After writing some tests, and implementing a concrete auditing class, Leroy's BankAccount might look like the following. Leroy has an almost infinite number of choices to make before coming up with the above implementation, though. Leroy could have derived BankAccount from an Auditable base class, or forced BankAccount to implement an IAuditable interface. But what guides Leroy to this particular solution in the universe of a million possibilities are other principles - like the Interface Segregation Principle, and Composition Over Inheritance. Leroy might still frown at his class, feeling he has violated the Dependency Inversion Principle. Without any additional information, we have to trust Leroy's judgment when he decides to make some additional changes. Perhaps Leroy already knew about some future changes in his auditing implementation, or perhaps Leroy just wanted to make his class more testable. Some of us view software as a massive heap of dependencies, and we fight to reduce the brittleness created by dependencies using inversion of control and dependency injection techniques. In some environments, this isn't needed. 
The principles to apply depend on the language you use, the tools you use, and ultimately depend on the problem the software is trying to solve. WWWTC has had a run of 19 episodes. I have some material for at least another 20. Problem is, most of the material deals with API trivia and edge cases you might never see. Interesting? To me, at least, but I'm thinking of introducing more squishy design type entries. I know a lot of people struggle to apply the latest frameworks and libraries, but design questions are always enlightening and produce the most spirited debate, giving us all something we can learn from.
http://odetocode.com/Blogs/scott/default.aspx
PropertyItem Class

Encapsulates a metadata property to be included in an image file. Not inheritable.

Assembly: System.Drawing (in System.Drawing.dll)
Type: System.Drawing.Imaging.PropertyItem

The data consists of: an identifier, the length (in bytes) of the property, the property type, and a pointer to the property value.

A PropertyItem is not intended to be used as a stand-alone object. A PropertyItem object is intended to be used by classes that are derived from Image. A PropertyItem object is used to retrieve and to change the metadata of existing image files, not to create the metadata. Therefore, the PropertyItem class does not have a defined Public constructor, and you cannot create an instance of a PropertyItem object. To work around the absence of a Public constructor, use an existing PropertyItem object instead of creating a new instance of the PropertyItem class. For more information, see Image.GetPropertyItem.

The following code example demonstrates how to read and display the metadata in an image file using the PropertyItem class and the Image.PropertyItems property. This example is designed to be used in a Windows Form that imports the System.Drawing.Imaging namespace. Paste the code into the form and change the path to fakePhoto.jpg to point to an image file on your system. Call the ExtractMetaData method when handling the form's Paint event, passing e as PaintEventArgs.

Private Sub ExtractMetaData(ByVal e As PaintEventArgs)
    Try
        ' Create an Image object.
        Dim theImage As Image = New Bitmap("c:\fakePhoto.jpg")

        ' Get the PropertyItems property from the image.
        Dim propItems As PropertyItem() = theImage.PropertyItems

        ' Set up the display.
        Dim font As New Font("Arial", 10)
        Dim blackBrush As New SolidBrush(Color.Black)
        Dim X As Integer = 0
        Dim Y As Integer = 0

        ' For each PropertyItem in the array, display the id, type, and length.
        Dim count As Integer = 0
        Dim propItem As PropertyItem
        For Each propItem In propItems
            e.Graphics.DrawString("Property Item " + count.ToString(), _
                font, blackBrush, X, Y)
            Y += font.Height
            e.Graphics.DrawString("   iD: 0x" & propItem.Id.ToString("x"), _
                font, blackBrush, X, Y)
            Y += font.Height
            e.Graphics.DrawString("   type: " & propItem.Type.ToString(), _
                font, blackBrush, X, Y)
            Y += font.Height
            e.Graphics.DrawString("   length: " & propItem.Len.ToString() & _
                " bytes", font, blackBrush, X, Y)
            Y += font.Height
            count += 1
        Next propItem
        font.Dispose()
    Catch ex As ArgumentException
        MessageBox.Show("There was an error. Make sure the path to the image file is valid.")
    End Try
End Sub

Available since 1.1. Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
https://msdn.microsoft.com/en-us/library/system.drawing.imaging.propertyitem(v=vs.110).aspx?cs-save-lang=1&cs-lang=vb
Sorry I forgot about escaping the includes '<' '>'. Please find the patch attached.

The json_reader.cpp is missing an include of <istream>, needed for bool Reader::parse(std::istream& sin, ...). Failure to include it leads to a missing definition of getline for istream. This include was removed in revision 247 as part of a cleanup of unnecessary includes of <iostream> to improve binary startup time, but it should not have been removed in json_reader.cpp (or at least should have been replaced by an include of <istream>). Here is the patch:

--- /src/lib_json/json_reader.cpp (revision 275)
+++ src/lib_json/json_reader.cpp (working copy)
@@ -13,6 +13,7 @@
 #include <cstdio>
 #include <cassert>
 #include <cstring>
+#include <istream>
 #include <stdexcept>
 #if defined(_MSC_VER) && _MSC_VER >= 1400 // VC++ 8.0

Sorry for the delay, and thanks for the patch. Applied in r276.
https://sourceforge.net/p/jsoncpp/patches/24/
Now that we're all experts in how dynamic invocations work for regular method calls, let's extrapolate from our previous discussion about phantom methods a bit and take a look at how those basic concepts apply to other dynamic operations. Today we'll just go through a laundry list of each type of operation, and throw in a few caveats and gotchas (limitations really, but that's such a negative word) that come along with the whole package. As always, I'll try to give some insights as to why we made the decisions that we did, and if there are workarounds for certain scenarios, I'll definitely point them out. So without further ado, let's hit the list!

Named properties take the form d.Foo, where d is some dynamic object and Foo is the name of some field or property that lives on the runtime type of d. When the compiler encounters this, it encodes the name "Foo" in the payload, and instructs the runtime binder to bind off of the runtime type of d. Note however, that named properties are always used in context! You can do one of three things with these guys - access the value, assign a value to the member, or do both (compound operations, such as += etc). The compiler will thus encode the intent of the usage in the payload as well, so that the runtime will allow you to bind to a get-only property only if you're trying to access it, and will throw you an error if you're trying to assign to it. The thing to note here is that the compiler will treat any named thing the same, and allow the runtime to differentiate between properties and fields. The return type of these guys is dynamic at compile time.

You can think of indexers in one of two ways - properties with arguments, or method calls with a set name. The latter is a much more useful way to think of these guys when we're dealing with dynamic.
The reason is that just like method calls, even if the indexer itself could be statically bound, any dynamic arguments can cause the phantom overload to come into play, and cause a late binding based on the static type of the receiver and the dynamic types of the arguments. They still do have some similarities to properties however - they're always used in context. As such, the compiler again will encode whether the user is accessing the value of the indexer, setting a value, or performing a compound operation into the payload for the binder to use. The return type of these guys is also dynamic at compile time.

Last time we mentioned that although dynamic is not convertible to any other type, there are certain scenarios in which we allow it to be converted. Assignments, condition expressions, and foreach iteration variables are a few examples. These payloads are quite simple - because the compiler already knows the type that we're attempting to convert to (ie the type of the variable you're assigning to), it simply encodes the conversion type in the payload, indicating to the runtime binder that it should attempt all implicit (or explicit, if it's a cast) conversions from the runtime type of the argument to the destination type. Note that user-defined conversions will be applied as well. We worked pretty hard to make sure that the runtime semantics behave just like the compile time ones, so argument conversions for overload resolution and the like will all happen exactly as you'd expect. These guys return the destination type at compile time. Note that these guys are the only ones with a non-dynamic return type at compile time.

Operators are a bit of a strange beast. At first glance, it's hard to tell that anything dynamic is going on. However, a simple statement like d + 1 still needs to be dispatched at runtime, because user-defined operators can come into play.
As such, any operation that has a dynamic argument will be dispatched at runtime. This includes all of the in-place operators as well (+=, -= etc). Note that the compiler will do the magic to figure out if you've got a member assignment (ie d.Foo += 10) or a variable assignment (ie d += 10), and figures out if it needs to pass d by ref to the call site so that it can be mutated. Note also that structs will get mutated as well! So if we were to do:

public struct S
{
    public int Foo;
}

public class C
{
    static void Main()
    {
        dynamic d = new S();
        d.Foo += 10;
    }
}

the result would be that d would point to a struct whose Foo member is set to 10. Lastly, note that the compiler knows that if you're doing something like d.Foo += x, and at runtime d.Foo binds to a delegate type or an event type, then the correct combine/add call will be invoked for you.

The invocation syntax is very much like a method call. The only difference is that the name of the action is not explicitly stated. This means that just like calls, both of the following examples would end up causing runtime dispatches:

public class C
{
    static void Main()
    {
        MyDel c = new MyDel();
        dynamic d = new MyDel();
        d();
        c(d);
    }
}

The first example causes a runtime dispatch of an invoke that takes no arguments. At runtime, the binder will check to verify that the receiver is indeed a delegate type, and then perform overload resolution to make sure the arguments supplied match the delegate signature. The second example causes a runtime dispatch because the argument specified is dynamic. The compiler determines at compile time that we have an invoke of a delegate since c's type is a delegate, but the actual resolution must be done at runtime.

Okay, that's enough laundry listing for today. Next time we'll look at a few small caveats of things that aren't allowed in dynamic contexts. After that, I think we'll switch gears and start looking at some other VS 2010 features - named arguments, optional parameters, and more!
Regarding properties and operator += and -= - wouldn't those two also require special treatment (as operations distinct from just "access" and "mutate") because it could be an event, not a property?

Yes, like I mentioned in this paragraph: "Lastly, note that the compiler knows that if you're doing something like d.Foo += x, and at runtime d.Foo binds to a delegate type or an event type, then the correct combine/add call will be invoked for you." We'll treat these guys distinctly. Currently the architecture is such that the DLR has a special payload kind for compound operators, so we can encode the thing as one site. So for example, d.Foo += 1 would be encoded in one single site that knows the receiver is d, and that we're doing a member access for "Foo", with the += operation and 1 as the value.
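That "one call site that knows the receiver, the member name, the operation, and the operand" encoding can be mimicked in any language with runtime reflection. As a rough cross-language sketch (Python standing in for the DLR here - `compound_add_site` is a hypothetical helper, not a real DLR API), a site for d.Foo += value does exactly the three steps the post describes: member access by name, the += operation, then assignment back to the member.

```python
# Hypothetical sketch, NOT a real DLR API: one "call site" for the
# compound operation `d.Foo += value`, resolved against the runtime
# type of the receiver via reflection.

def compound_add_site(receiver, member, value):
    # Member access by name (the "Foo" encoded in the payload)...
    current = getattr(receiver, member)
    # ...then the += operation and the assignment back to the member.
    setattr(receiver, member, current + value)

class S:
    def __init__(self):
        self.Foo = 0

d = S()
compound_add_site(d, "Foo", 10)
print(d.Foo)  # 10
```

Because Python resolves `getattr`/`setattr` against the runtime type, a get-only property here would likewise fail only at the assignment step - loosely analogous to the binder rejecting an assignment to a get-only property.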
http://blogs.msdn.com/samng/archive/2008/12/11/dynamic-in-c-v-indexers-operators-and-more.aspx
package Carp::Assert; require 5.006; use strict qw(subs vars); use warnings; use Exporter; use vars qw(@ISA $VERSION %EXPORT_TAGS); BEGIN { $VERSION = '0.21'; @ISA = qw(Exporter); %EXPORT_TAGS = ( NDEBUG => [qw(assert affirm should shouldnt DEBUG)], ); $EXPORT_TAGS{DEBUG} = $EXPORT_TAGS{NDEBUG}; Exporter::export_tags(qw(NDEBUG DEBUG)); } # constant.pm, alas, adds too much load time (yes, I benchmarked it) sub REAL_DEBUG () { 1 } # CONSTANT sub NDEBUG () { 0 } # CONSTANT # Export the proper DEBUG flag according to if :NDEBUG is set. # Also export noop versions of our routines if NDEBUG sub noop { undef } sub noop_affirm (&;$) { undef }; sub import { my $env_ndebug = exists $ENV{PERL_NDEBUG} ? $ENV{PERL_NDEBUG} : $ENV{'NDEBUG'}; if( grep(/^:NDEBUG$/, @_) or $env_ndebug ) { my $caller = caller; foreach my $func (grep !/^DEBUG$/, @{$EXPORT_TAGS{'NDEBUG'}}) { if( $func eq 'affirm' ) { *{$caller.'::'.$func} = \&noop_affirm; } else { *{$caller.'::'.$func} = \&noop; } } *{$caller.'::DEBUG'} = \&NDEBUG; } else { *DEBUG = *REAL_DEBUG; Carp::Assert->_export_to_level(1, @_); } } # 5.004's Exporter doesn't have export_to_level. sub _export_to_level { my $pkg = shift; my $level = shift; (undef) = shift; # XXX redundant arg my $callpkg = caller($level); $pkg->export($callpkg, @_); } sub unimport { *DEBUG = *NDEBUG; push @_, ':NDEBUG'; goto &import; } # Can't call confess() here or the stack trace will be wrong. sub _fail_msg { my($name) = shift; my $msg = 'Assertion'; $msg .= " ($name)" if defined $name; $msg .= " failed!\n"; return $msg; } =head1 NAME Carp::Assert - executable comments =head1; =head1 DESCRIPTION =begin testing BEGIN { local %ENV = %ENV; delete @ENV{qw(PERL_NDEBUG NDEBUG)}; require Carp::Assert; Carp::Assert->import; } local %ENV = %ENV; delete @ENV{qw(PERL_NDEBUG NDEBUG)}; =end testing "We are ready for any unforseen event that may or may not occur." 
- Dan Quayle Carp::Assert is intended for a purpose like the ANSI C library L: =for example begin # Take the square root of a number. sub my_sqrt { my($num) = shift; # the square root of a negative number is imaginary. assert($num >= 0); return sqrt $num; } =for example end =for example_testing is( my_sqrt(4), 2, 'my_sqrt example with good input' ); ok( !eval{ my_sqrt(-1); 1 }, ' and pukes on bad' ); B<enforces> the comment. $life = begin_life(); assert( $life =~ /!$/ ); =for testing my $life = 'Whimper!'; ok( eval { assert( $life =~ /!$/ ); 1 }, 'life ends with a bang' ); =head1 FUNCTIONS =over 4 =item B<assert> assert(EXPR) if DEBUG; assert(EXPR, $name) if DEBUG; assert's functionality is effected by compile time value of the DEBUG constant, controlled by saying C<use Carp::Assert> or C<no Carp::Assert>. In the former case, assert will function as below. Otherwise, the assert function will compile itself out of the program. See L<Debugging vs Production> for details. =for testing { package Some::Other; no Carp::Assert; ::ok( eval { assert(0) if DEBUG; 1 } ); } Give assert an expression, assert will Carp::confess() if that expression is false, otherwise it does nothing. (DO NOT use the return value of assert for anything, I mean it... really!). =for testing ok( eval { assert(1); 1 } ); ok( !eval { assert(0); 1 } ); The error from assert will look something like this: Assertion failed! Carp::Assert::assert(0) called at prog line 23 main::foo called at prog line 50 =for testing eval { assert(0) }; like( $@, '/^Assertion failed!/', 'error format' ); like( $@, '/Carp::Assert::assert\(0\) called at/', ' with stack trace' );!" =for testing eval { assert( Dogs->isa('People'), 'Dogs are people, too!' ); }; like( $@, '/^Assertion \(Dogs are people, too!\) failed!/', 'names' ); =cut sub assert ($;$) { unless($_[0]) { require Carp; Carp::confess( _fail_msg($_[1]) ); } return undef; } =item B. 
=for example begin affirm { my $customer = Customer->new($customerid); my @cards = $customer->credit_cards; grep { $_->is_active } @cards; } "Our customer has an active credit card"; =for example end =for testing my $foo = 1; my $bar = 2; eval { affirm { $foo == $bar } }; like( $@, '/\$foo == \$bar/' ); affirm() also has the nice side effect that if you forgot the C<if DEBUG> suffix. =cut sub affirm (&;$) { unless( eval { &{$_[0]}; } ) { my $name = $_[1]; if( !defined $name ) { eval { require B::Deparse; $name = B::Deparse->new->coderef2text($_[0]); }; $name = 'code display non-functional on this version of Perl, sorry' if $@; } require Carp; Carp::confess( _fail_msg($name) ); } return undef; } =item B<should> =item B I<what> failed or I. =cut sub should ($$) { unless($_[0] eq $_[1]) { require Carp; &Carp::confess( _fail_msg("'$_[0]' should be '$_[1]'!") ); } return undef; } sub shouldnt ($$) { unless($_[0] ne $_[1]) { require Carp; &Carp::confess( _fail_msg("'$_[0]' shouldn't be that!") ); } return undef; } # Sorry, I couldn't resist. sub shouldn't ($$) { # emacs cperl-mode madness #' sub { my $env_ndebug = exists $ENV{PERL_NDEBUG} ? $ENV{PERL_NDEBUG} : $ENV{'NDEBUG'}; if( $env_ndebug ) { return undef; } else { shouldnt($_[0], $_[1]); } } =back =head1 C. =head1 C<assert('$a == $b')> because $a and $b will probably be lexical, and thus unavailable to assert(). But with Perl, unlike C, you always have the source to look through, so the need isn't as great. =head1 EFFICIENCY With C<no Carp::Assert> (or NDEBUG) and using the C<if DEBUG> suffixes on all your assertions, Carp::Assert has almost no impact on your production code. I say almost because it does still add some load-time to your code (I've tried to reduce this as much as possible). If you forget the C<if DEBUG> on an C<assert()>, C<should()> or C<shouldnt()>, its arguments are still evaluated and thus will impact your code. 
You'll also have the extra overhead of calling a subroutine (even if that subroutine does nothing). Forgetting the C<if DEBUG> on an C<affirm()> is not so bad. While you still have the overhead of calling a subroutine (one that does nothing) it will B<not> evaluate its code block and that can save a lot. Try to remember the B<if DEBUG>.

=head1 ENVIRONMENT

=over 4

=item NDEBUG

Defining NDEBUG switches off all assertions. It has the same effect as changing "use Carp::Assert" to "no Carp::Assert" but it affects all code.

=item PERL_NDEBUG

Same as NDEBUG and will override it. It's provided to give you something which won't conflict with any C programs you might be working on at the same time.

=back

=head1 BUGS, CAVEATS and other MUSINGS

=head2 Conflicts with C<POSIX.pm>

The C<POSIX> module exports an C<assert> routine which will conflict with C<Carp::Assert> if both are used in the same namespace. If you are using both together, prevent C<POSIX> from exporting like so:

    use POSIX ();
    use Carp::Assert;

Since C<POSIX> exports way too much, you should be using it like that anyway.

=head2 C<affirm> and C<$^S>

affirm() mucks with the expression's caller and it is run in an eval so anything that checks $^S will be wrong.

=head2 C<shouldn't>

Yes, there is a C<shouldn't> routine. It mostly works, but you B<must> put the C<if DEBUG> after it.

=head2 missing C<if DEBUG>

It would be nice if we could warn about missing C<if DEBUG>.

=head1 SEE ALSO

L<assert.h|> - the wikipedia page about C<assert.h>.

L<Carp::Assert::More> provides a set of convenience functions that are wrappers around C<Carp::Assert>.

L<Sub::Assert> provides support for subroutine pre- and post-conditions. The documentation says it's slow.

L<PerlX::Assert> provides compile-time assertions, which are usually optimised away at compile time. Currently part of the L<Moops> distribution, but may get its own distribution sometime in 2014.

L<Devel::Assert> also provides an C<assert> function, for Perl >= 5.8.1.
L<assertions> provides an assertion mechanism for Perl >= 5.9.0.

=head1 REPOSITORY

L<>

=head1 COPYRIGHT

Copyright 2001-2007 by Michael G Schwern E<lt>schwern@pobox.comE<gt>.

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See F<>

=head1 AUTHOR

Michael G Schwern <schwern@pobox.com>

=cut

return q|You don't just EAT the largest turnip in the world!|;
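The DEBUG/NDEBUG compile-out idea above has a close cousin in Python: the built-in `assert` statement is active by default but stripped entirely when the interpreter runs with `-O` (which sets `__debug__` to false), much like `assert(...) if DEBUG` with NDEBUG set. A cross-language sketch mirroring the module's own my_sqrt example:

```python
# Python analogue of Carp::Assert's "executable comments": `assert` is
# active by default and compiled out entirely under `python -O`.

def my_sqrt(num):
    # The square root of a negative number is imaginary.
    assert num >= 0, "my_sqrt needs a non-negative number"
    return num ** 0.5

print(my_sqrt(4))   # 2.0
try:
    my_sqrt(-1)
except AssertionError as err:
    print(err)      # my_sqrt needs a non-negative number
```

The parallel is not perfect: Python decides at interpreter startup (`-O`) rather than per-module (`use` vs `no Carp::Assert`), and there is no equivalent of affirm()'s lazily evaluated block.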
https://metacpan.org/release/Carp-Assert/source/lib/Carp/Assert.pm
From Bugzilla Helper: User-Agent: Mozilla/4.73 [en] (X11; U; Linux 2.4.0-test2 alpha; Nav) BuildID: 2000182349127863 The file: xpcom/reflect/xptcall/src/md/unix/xptcstubs_asm_osf1_alpha.s requires cpp preprocessing before assembly. To enable this, it should be named with a .S (note capital S). Reproducible: Always Steps to Reproduce: 1.compile with CC=ccc CXX=cxx using Compaq's compilers on linux/alpha. 2. 3. Actual Results: Expected Results: To allow the lizard to be compiled with Compaq's ccc/cxx compilers under linux/alpha, change: In xpcom/reflect/xptcall/src/md/unix/Makefile.in: # # Linux/Alpha # ifneq (,$(filter Linuxalpha FreeBSDalpha NetBSDalpha,$(OS_ARCH)$(OS_TEST))) ifeq ($(GNU_CC),1) CPPSRCS := xptcinvoke_linux_alpha.cpp xptcstubs_linux_alpha.cpp else CPPSRCS := xptcinvoke_osf1_alpha.cpp xptcstubs_osf1_alpha.cpp ASFILES := xptcinvoke_asm_osf1_alpha.s xptcstubs_asm_osf1_alpha.S endif endif My apologies for not producing a proper patch. Renaming of the above .s file to .S causes some makefile code later on to barf because it doesn't know what .S is. (linking of libxpcom.so fails because the makefile passes the .S file to ld...which treats it as a script :() It appears that a make in xpcom/build will copy the .S file to xpcom/build and try to link it. Don't know why... This form sucks ()...there's no build ID (compiled by hand). My problem with doing this kind of work is verifying that the patch works, since I lack the hardware. There are other patches for various platforms just waiting because I do not have the ability to test that it works on that platform. How should we procede on this? leger? What's the Netscape-internal process for getting patches checked out on other platforms? Gerv colin is the DEC guy, iirc? Compaq (Digital) has a few operating systems. I am the OpenVMS guy. This problem is reported against a file that is specific to Digital UNIX, formerly OSF/1, now known as Tru64 UNIX. 
Although it looks like someone's trying to build on Linux on Alpha. Anyway, it's not OpenVMS, so I'm afraid I'm not your guy. Edward: Welcome to xpcom! Once again... attempting to reassign from Ray to Edward.

Created attachment 19130 [details] [diff] [review] adds necessary asm for Linux/Alpha using Compaq's ccc compiler.

I just sent a patch to allow ccc to compile properly. The ASM is copied from the osf1 sources, and symbols changed for different symbol mangling. This compiler now has template instantiation problems (in xpcom) that I haven't resolved yet. Any suggestions would be appreciated, since gcc generates a REALLY slow mozilla on this platform. If there is confusion, this is for Linux on the alpha using Compaq's ccc compiler. Also if you guys want to test out the patch, Compaq has a "testdrive" program that will give you a shell account on a linux/alpha machine (or many other OS's), which could be useful for this purpose.

I have no idea what to do with this bug. I have no way of checking it out. Can someone take this one please (or let me know who to reassign it to). Thanks.

reassign all kandrot xpcom bugs. -> xpconnect

I'm not sure what I can do about this. I don't have an Alpha system to test this against. I'm futuring this for now. I have no way of verifying this patch. Either someone else own it or get me access to an Alpha box ;-)

this bug looks the same as bug 124074...

*** Bug 124074 has been marked as a duplicate of this bug. ***

David Bradley wrote:
> I'm futuring this for now. I have no way of verifying this patch. Either
> someone else own it or get me access to an Alpha box ;-)

What about using SourceForge's CompileFarm machines? They have a DEC Alpha box there...

Created attachment 104101 [details] rpm output.

Created attachment 104102 [details] mozilla-1.1.rpm.out.bz2.

Jiann-Ming, can you modify your spec file so that it does not use 'make -s' and attach a new build log?
I'd like to see the actual commands that are being used to create libxpcom.so. Created attachment 104714 [details] Output of "rpm -ba --clean" make without the -s option Created attachment 104733 [details] [diff] [review] Use CC & CXX to build shared libs I think the problem is that we're using ld to create libxpcom.so. This has proven problematic on linux in the past. This patch should force the use of $(CC) -shared & $(CXX) -shared to build the shared libs. It also kills the warning caused by using -mieee. Created attachment 104827 [details] mozilla-1.1.rpm.out.bz2 New patches causes a different error now. I don't think it got quite as far in the compile process this time. cxx -I/usr/X11R6/include -O2 -pthread -DNDEBUG -DTRIMMED -O -KPIC -shared -Wl,-soname -Wl,libmozjs.so -o libmozjs.so jsapi.o jsarena.o jsarray.o jsatom.o jsbool.o jscntxt.o jsdate.o jsdbgapi.o jsdhash.o jsdtoa.o jsemit.o jsexn.o jsfun.o jsgc.o jshash.o jsinterp.o jslock.o jslog2.o jslong.o jsmath.o jsnum.o jsobj.o jsopcode.o jsparse.o jsprf.o jsregexp.o jsscan.o jsscope.o jsscript.o jsstr.o jsutil.o jsxdrapi.o prmjtime.o -lm -ldl -L/usr/src/redhat/BUILD/mozilla/dist/lib -lplds4 -lplc4 -lnspr4 -lpthread -ldl -lc -ldl -lm -lc cxx: Info: Switch -pthread not supported on this platform...ignoring. ld: unrecognized option '--demangle=compaq' ld: use the --help option for usage information ld: unrecognized option '--demangle=compaq' ld: use the --help option for usage information make[2]: *** [libmozjs.so] Error 1 It looks like that version of ld doesn't support the compaq compiler. Does building any non-mozilla c++ libraries or programs work? (Why we're linking JS using the c++ compiler is a separate issue.) Created attachment 104928 [details] mozilla-1.1.rpm.out.bz2 Okay, I upgraded binutils and am now running ld version 2.13.90.0.2 20020802. Looks like ld is bombing on "-KPIC" for files compiled with cxx, while the ccc compiled files linked without problems. 
In the previous patch, change DSO_CFLAGS=-fPIC to DSO_PIC_CFLAGS=-fPIC. Created attachment 104932 [details] mozilla-1.1.rpm.out.bz2 Okay, I modified the patch. Here's the new output. Created attachment 104936 [details] [diff] [review] add -fPIC to ASFLAGS cxx -I/usr/X11R6/include -O2 -pthread -DNDEBUG -DTRIMMED -O -fPIC -shared -Wl,-soname -Wl,libxpcom.so -o libxpcom.so .... ld: xptcstubs_asm_linux_alpha_ccc.o: pc-relative relocation against dynamic symbol PrepareAndDispatch I think that problem is caused by not building the asm file with -fPIC. This patch should fix that. Applied new patch, but same error, with a few minor differences: # diff rpm.out-latest rpm.out-previous 9639,9640c9639,9640 < ccc -o xptcinvoke_asm_linux_alpha_ccc.o -fPIC -I../../../../../../dist/include /xpcom -c xptcinvoke_asm_linux_alpha_ccc.s < ccc -o xptcstubs_asm_linux_alpha_ccc.o -fPIC -I../../../../../../dist/include/ xpcom -c xptcstubs_asm_linux_alpha_ccc.S --- > ccc -o xptcinvoke_asm_linux_alpha_ccc.o -I../../../../../../dist/include/xpco m -c xptcinvoke_asm_linux_alpha_ccc.s > ccc -o xptcstubs_asm_linux_alpha_ccc.o -I../../../../../../dist/include/xpcom -c xptcstubs_asm_linux_alpha_ccc.S So according to the messages at & , recent versions of binutils trigger that error when a relocation cannot be computed at link time. The fix for symbols that exist in the same asm file is straight forward but I don't see any examples for external symbols. You may have to experiment. *** Bug 175546 has been marked as a duplicate of this bug. *** Created attachment 115213 [details] new rpm output Okay, finally upgraded to Compaq's latest compilers. Tried building mozilla 1.2.1 and now getting a bit farther it seems. The src rpm compiles fine with gcc. Created attachment 117358 [details] rpm output for 1.3 Different error for 1.3.
https://bugzilla.mozilla.org/show_bug.cgi?id=48516
- Basic Object creation
- Overriding functions
- Serialization of objects
- TCPClient transmission
- Binary formatter

Tools required: Visual Studio 2010, .NET 4.0

Preface 1: For this example I am using three virtual machines in VMware Player... you can use multiple networked PCs or even VirtualBox to test this out. Installing and running a virtual machine (VM) is a bit outside of the scope of this tutorial. Just remember: the quick and dirty way to do this is to have the VMs all on a bridged connection, and disable Windows Firewall. (Clearly you would want to have the firewall set up to allow connections, but on a private VM network you can take some allowances.)

Preface 2: I am fully aware that agent validation is missing and this is not necessarily the most secure way of transmitting and ensuring the integrity of the agents. This provides the ground work; most security and authentication can be bolted in as needed.

Automated Agents - that term conjures a spiraling level of thought and ideas. From semi-futuristic movies (Matrix, Terminators, Screamers, etc), super complex code, and a bit of fear of lost control. Truth be told, they are not too difficult to create when you decide to build an environment and platform for the agents to exist and interact in. Think of it as building a game world for NPCs to take form in. Quite a few years back the fringe business world was buzzing with the idea of having digital assistants at your beck and call - programmed to smartly know how to interact with various industries and systems (on your behalf) to purchase, research, and be an army of support for your lives. Purchasing groceries, airline tickets, stock research, and so forth. Cutting this concept down to the core, I decided a good example would be a simple auction house. The idea is there would be one host system where the auction house would reside.
The host would deal with item creation, collecting the agents for bidding, and creating a standard interface for how to bid on an item. On the flip side there would be a client system where people created agents, dictated their bidding behavior, and sent them back to the auction house. The agents would be allowed to determine how to bid as they wanted (running statistics on the bids, variable increments, etc), but as long as they conformed to the marketplace's interface, life was good. Granted, the agents in this tutorial are not crunching big numbers; the key aspect is: they are deciding things independent of a person telling them to. With a bit of variable data the object operates without supervision and can produce different results depending on how other agents work with it. Complexity can crop up with the most simple rules!

On its face this seems like a complex project, but with the .NET framework much of the tedious behavior is swept out of the way. How do we strap connections from the client to the host? The System.Net.Sockets namespace has TcpClient to do that! How would we create an object on one system and send it to another? Serialization and the System.Runtime.Serialization.Formatters.Binary namespace's BinaryFormatter! A bit of extra code to puzzle out a menu to interact with our object and you are mostly there!

Conceptually here is what is going on: Action wise - here are the important bits for the flow and state:

Let's dive into the code and see what is going on.

Agent.Agent

'-- A simple class to hold critical information the agent needs to know to do its job.
'-- Notice the class is serializable.. that means it can be broken down, thrown into a stream
'-- and moved to a disk or even another application.
<Serializable()> Public Class Agent
    ...
    '-- The universal 'bidding'.. could be tweaked to be more robust.
    '-- a -1 return means drop out of the bidding, else return how much you want to jump the price.
    Public Function DoBid(ByVal price As Int32) As Int32
        If price + Increment <= MaxBid Then
            Return Increment
        Else
            Return -1
        End If
    End Function
    ...
End Class

First - the agent. The critical takeaway is that with one single line (<Serializable()>) we told Visual Studio to be ready to break down this object, and the data in that specific instance, to a serializable format - binary, ASCII, or whatever we choose. In theory we could save it with a stream writer to a file, or convert it to a binary array for database storage. What this means is you can take your object, freeze-dry it, transfer/store it, and rehydrate the object in another time or place! Also check out the bidding system - not that complex, but it provides variable results when combined with other similar agents!

Agent.Item

The item class is not particularly interesting - a filler class to hold an id, price, and a quick and dirty method to randomize the information for testing.

IntelligentAgent_Client

The client module has a few interesting parts. First is sending the agent across the wire.

'-- sends the agent to the host.. if it was good then return true so the client can listen for the agent coming back.
Private Function DoSending(ByVal serverIP As String, ByVal _agent As Agent.Agent) As Boolean
    Dim bReturn As Boolean = False
    Dim port As Int32 = 13000 '-- we are all connecting to this port on the host.. the host doesn't care who sends it an agent, but if you do, send it to this door.
    Dim client As TcpClient '-- the simple client connection to the host.
    Dim format As IFormatter '-- format the serialized agent to be sent across the wire.
    Dim stream As NetworkStream '-- the stream to feed the bits to for the host.
    Try
        '-- Create a TcpClient to our host on the given port.
        client = New TcpClient(serverIP, port)
        '-- for the agent object to be serialized into binary to be sent across the stream to the host.
        format = New BinaryFormatter()
        '-- the client's network stream to send the serialized agent.. much like writing to a filestream or streamwriter.
        stream = client.GetStream()
        Console.WriteLine(String.Format("Sending agent: {0} {1}", _agent.MYID, _agent.Increment))
        '-- take the agent, serialize it, and push it into the stream.
        format.Serialize(stream, _agent)
        '-- wait for the return to say it was a good send.
        bReturn = CType(format.Deserialize(stream), Boolean)
        Console.WriteLine(String.Format("Response: {0}", bReturn))
    Catch e As ArgumentNullException
        Console.WriteLine("ArgumentNullException: {0}", e)
        bReturn = False
    Catch e As SocketException
        Console.WriteLine("SocketException: {0}", e)
        bReturn = False
    Finally
        ' Close everything.
        If stream IsNot Nothing Then
            stream.Close()
            stream.Dispose()
        End If
        If client IsNot Nothing Then client.Close()
    End Try
    Return bReturn
End Function

In the method we take an instance of the Agent.Agent class, serialize it to binary, and push it across a TcpClient connection to the host. This could easily mimic what I would do to write the agent to disk, or push it to a memory stream. When the agent is sent across we wait for a response back from the host saying whether it was a good send (there might have been corruption or missing information). Here you could bolt in a retransmission or some sort of checker, but for this basic foundation that would clutter things. Once we get the green light that the agent was sent correctly, the client then moves into a "wait for the agent's return" state. The client is essentially locked until the agent it sent comes home.

'-- Similar to sending an agent, but instead listen to another port for any information coming down the stream (from the host).
Private Function AgentReturn() As Boolean
    Dim bReturn As Boolean = False
    Dim server As TcpListener
    Dim port As Int32 = 13001 '-- don't listen to the port we were sending the agent on.
    Dim localAddr As IPAddress = IPAddress.Parse(_homeIP) '_hostIP
    Dim client As TcpClient
    Dim stream As NetworkStream
    Dim _agent As Agent.Agent
    Dim format As IFormatter
    Try
        '-- Just listen to our port until something comes in.. halts the program until it does.
        server = New TcpListener(localAddr, port)
        '-- start listening
        server.Start()
        While True
            Console.Write("Waiting for a connection... ")
            '-- when something does come in shift it to the client.
            client = server.AcceptTcpClient()
            Console.WriteLine("Connected!")
            ' get the stream to read it.
            stream = client.GetStream()
            '-- the agent will be in binary so get ready for that.
            format = New BinaryFormatter()
            '-- deserialize and cast it as the agent.
            _agent = CType(format.Deserialize(stream), Agent.Agent)
            '-- send back to the host we got the object.
            format.Serialize(stream, _agent IsNot Nothing)
            Console.WriteLine(String.Format("Made it back: {0}", _agent.MYID))
            '-- set the global agent object to this new one gotten back.
            _myAgent = _agent
            '-- see if we won an item!
            If _myAgent.ItemID = Guid.Empty Then
                Console.WriteLine("No item won... :(/>")
            Else
                Console.WriteLine(String.Format("Won item {0} at price {1}", _myAgent.ItemID, _myAgent.ItemPrice))
            End If
            '-- the agent is home; stop listening.
            Exit While
        End While
    Catch e As SocketException
        Console.WriteLine("SocketException: {0}", e)
    Finally
        '-- clean up
        server.Stop()
        If client IsNot Nothing Then client.Close()
    End Try
    Return bReturn
End Function

Here the client listens to a second port for data to come streaming in. It uses a TcpListener and blocks program actions until something shows up. The method then attempts to create the agent object, and sends the results back to the host. The stream here works like the 'Sending' method - data is transmitted in binary across a connection and the TcpListener jumps into action to collect and make sense of the data. Again - if the agent wasn't re-hydrated correctly the host could try a retransmission, but that isn't implemented here.
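The serialize-then-stream pattern above is not specific to VB.NET or BinaryFormatter. As a rough cross-language analogy (not the article's code - the class and field names here are made up), the same "freeze dry, transfer, rehydrate" round trip can be sketched in Python with pickle. A memory stream stands in for the network stream, which the article itself notes is interchangeable:

```python
import io
import pickle

class Agent:
    """Hypothetical stand-in for the article's Agent.Agent class."""
    def __init__(self, my_id, increment, max_bid):
        self.my_id = my_id
        self.increment = increment
        self.max_bid = max_bid

    def do_bid(self, price):
        # Same rule as the VB DoBid: raise by our increment if we can
        # still afford it, otherwise signal -1 to drop out.
        if price + self.increment <= self.max_bid:
            return self.increment
        return -1

# "Freeze dry" the agent into a stream (a memory stream here; a socket's
# file object or an open file would work the same way).
stream = io.BytesIO()
agent = Agent("agent-1", increment=5, max_bid=100)
pickle.dump(agent, stream)

# "Rehydrate" it elsewhere: rewind and deserialize.
stream.seek(0)
clone = pickle.loads(stream.read())
print(clone.my_id, clone.do_bid(90))
```

The deserialized copy carries both its data and its behavior, which is exactly what lets the agent "act" once it arrives at the host.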
IntelligentAgent_Host

The host works on the same principles as the client - it uses a TcpListener to wait for agent data to stream in, tries to create the agent, responds accordingly, and if an agent is created adds it to the queue of agents already sent. The interesting part here is the auction the agents operate in.

'-- for the auction we wait for the agents.. then start the round robin bidding.
Private Sub DoAuction()
    If _homeIP = String.Empty Then
        Console.WriteLine("Set Host IP!")
        Exit Sub
    End If
    '-- kill off any agents still around
    If _agentList.Count > 0 Then _agentList.Clear()
    '-- wait for the agents
    WaitForAgents()
    '-- the actual bidding.
    '-- the gist here is all the agents are in a list.. if they drop out because of a max bid they are off the list.
    '-- go until the list is empty or has one agent.
    Dim activeBidders As New List(Of Agent.Agent)(_agentList)
    Dim lCurrentPrice As Int32 = _item.Price
    Dim tempBid As Int32 = 0
    Console.WriteLine("Auction Starting")
    Console.WriteLine(_item.ToString)
    Console.WriteLine("--------------")
    '-- keep the agents going round right round.
    While activeBidders.Count > 1
        '-- displays to the host what happened.. might as well be logging to a file or a db.
        Console.WriteLine(String.Format("Current Price: {0}", lCurrentPrice))
        For i As Int32 = activeBidders.Count - 1 To 0 Step -1
            tempBid = activeBidders(i).DoBid(lCurrentPrice)
            '-- deal with what the agents said.
            If tempBid = -1 Then
                Console.WriteLine(String.Format("{0} dropped out.", activeBidders(i).MYID))
                activeBidders.Remove(activeBidders(i))
            Else
                If activeBidders.Count > 1 Then
                    lCurrentPrice += tempBid
                    Console.WriteLine(String.Format("{0}: {1}.", activeBidders(i).MYID, lCurrentPrice))
                End If
            End If
        Next
    End While
    '-- if everyone's out the item isn't won.
    If activeBidders.Count = 0 Then
        Console.WriteLine("No winners")
    Else
        '-- if won then send the agent with the item id and how much
        If activeBidders.Count = 1 Then
            For Each temp As Agent.Agent In _agentList
                If temp.MYID = activeBidders(0).MYID Then
                    temp.ItemID = _item.ID
                    temp.ItemPrice = lCurrentPrice
                    Console.WriteLine(String.Format("Agent {0} won item {1} at price {2}({3})", temp.MYID, temp.ItemID, temp.ItemPrice, _item.Price))
                    Exit For
                End If
            Next
        End If
    End If
    '-- send everyone home.
    For Each temp As Agent.Agent In _agentList
        SendAgentBack(temp)
    Next
End Sub

Here the host waits to collect the agents, shifts them into a new collection tracking who is still active, and runs a simple while loop to bid on an item. The gist: for each agent, the current price is sent in. The agent acts on this data and either sends back the increment it wants to provide or -1 to say it is out of the auction (and is removed from the list of active bidders). When one or zero agents are left the item is doled out (or not) and then the host sends all the agents back to their home systems. The sending is pretty much the same code the client used to send the agent to the host - a good bit of functional parity between the host and client.

In a nutshell - that is it! There's some filler code for creating a more user-friendly menu system and setting uninteresting data, but the meat of the system - transmitting agents to another system and having them interact - is there. The host's "DoAuction" provides a common arena for the agent objects to meet up in, a round-robin loop to keep the bidding moving, and a common interface to have each agent decide on the data at hand. The scary big idea of "how to get my object into binary" is wrapped up in what - eight lines of code, and creating a connection between two systems (on different IPs) boils down to about as many lines?
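The host's round-robin rule is easy to model outside of VB as well. Here is a hedged Python sketch of the same loop - ask each active bidder for a bid against the current price, raise the price by the returned increment, and drop anyone who returns -1 - using made-up agent tuples rather than the article's classes:

```python
def run_auction(agents, start_price):
    """Round-robin auction over (name, increment, max_bid) tuples.

    Mirrors the host's DoAuction loop: each active agent either raises
    the price by its increment or returns -1 and drops out; the last
    agent standing wins at the final price.
    """
    price = start_price
    active = list(agents)
    while len(active) > 1:
        for agent in list(active):       # snapshot: we mutate `active`
            name, increment, max_bid = agent
            if price + increment <= max_bid:
                if len(active) > 1:      # same guard as the VB host
                    price += increment
            else:
                active.remove(agent)     # -1 case: agent drops out
    winner = active[0][0] if active else None
    return winner, price

winner, price = run_auction(
    [("a", 2, 20), ("b", 3, 30), ("c", 5, 25)], start_price=10)
print(winner, price)
```

Tracing it by hand: "a" drops once the price would exceed 20, "c" once it would exceed 25, leaving "b" as the winner.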
Advanced topics:
- making agents smarter or more complex
- sliding bidding increments - waiting for the right item
- multi-item bidding - so agents go in with a list of items to wait/bid for, and not just all of them
- a better way of threading the bidding process so it is not a queue
- security of transmission and ensuring object integrity
- agents bring items to be bid on
- a more robust way to track money and transactions with a database

Project setup: My solution structure:

Code:
Agent.Agent
Agent.Item
IntelligentAgent_Client
IntelligentAgent_Host

Example of interaction:
http://www.dreamincode.net/forums/topic/289277-semi-intelligent-multi-agent-auction-hosting/
Step by step example: (the original article's step-by-step illustration was an image and is not reproduced here)

Sample code:

#include <iostream>
using namespace std;

int main()
{
    int a[100];  // The sorting array
    int n;       // The number of elements
    cout << "Please insert the number of elements to be sorted: ";
    cin >> n;    // The total number of elements
    for (int i = 0; i < n; i++)
    {
        cout << "Input " << i << " element: ";
        cin >> a[i];  // Adding the elements to the array
    }
    cout << "Unsorted list:" << endl;  // Displaying the unsorted array
    for (int i = 0; i < n; i++)
    {
        cout << a[i] << " ";
    }
    int flag = -1;  // The flag is necessary to verify if the array is sorted; 0 represents a sorted array
    int k = n;      // Another variable is required to hold the total number of elements
    while (flag != 0 && k >= 0)  // Conditions to check if the array is already sorted
    {
        k = k - 1;
        flag = 0;
        for (int i = 0; i < k; i++)
        {
            if (a[i] > a[i + 1])  // Check if the two adjacent values are in the proper order; if not, swap them
            {
                int aux = a[i];   // The swap procedure
                a[i] = a[i + 1];
                a[i + 1] = aux;
                flag = 1;  // A swap has taken place, which means a new pass of the algorithm
            }              // must take place to determine if the array is sorted.
        }
    }
    cout << "\nSorted list:" << endl;  // Display the sorted array
    for (int i = 0; i < n; i++)
    {
        cout << a[i] << " ";
    }
    return 0;
}

Output: (the program's console output was shown as a screenshot in the original article)

Code explanation: the outer loop

while (flag != 0 && k >= 0)
{
    k = k - 1;
    flag = 0;
    ...
}

keeps making passes until either no swap occurred on the previous pass (flag stays 0) or the shrinking bound k reaches the front of the array.

Complexity: Compares - there is a for loop embedded inside a while loop, giving (n-1)+(n-2)+(n-3)+ ... +1 comparisons, which results in O(n^2). Swaps - best case 0, or O(1); worst case (n-1)+(n-2)+(n-3)+ ... +1, or O(n^2).

Advantages:
- it is easy to learn;
- few lines of code;
- works very well on already sorted lists, or lists with just a few permutations.

Disadvantages:
- not effective for large numbers of sorting elements;
- complexity is O(n^2).
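The same early-exit bubble sort translates directly to other languages. As a quick sketch in Python (not part of the original C++ tutorial), using the same flag-and-shrinking-bound idea:

```python
def bubble_sort(a):
    """Bubble sort with the tutorial's two optimizations: stop early
    when a pass makes no swaps, and shrink the scanned range by one
    each pass (the tail is already in its final position)."""
    a = list(a)          # sort a copy; leave the input untouched
    k = len(a)
    swapped = True
    while swapped and k > 0:
        k -= 1
        swapped = False
        for i in range(k):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True   # another pass is needed

    return a

print(bubble_sort([5, 1, 4, 2, 8]))
```

On an already-sorted input the while loop exits after a single pass, which is exactly the "works very well on already sorted lists" advantage listed above.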
http://www.exforsys.com/tutorials/c-algorithms/bubble-sort.html
Created on 2010-04-24 18:57 by rubenlm, last changed 2010-05-16 18:28 by eric.araujo.

The code that lists directory contents in rmtree is:

try:
    names = os.listdir(path)
except os.error, err:
    onerror(os.listdir, path, sys.exc_info())

If there is an error there is nothing the "onerror" function can do to fix the problem because the variable "names" will not be updated after the problem is solved in "onerror". Two possible solutions:

1 - Call os.listdir() again after onerror()

try:
    names = os.listdir(path)
except os.error, err:
    onerror(os.listdir, path, sys.exc_info())
    names = os.listdir(path)

2 - Allow onerror() to return a value and set "names" to that value.

try:
    names = os.listdir(path)
except os.error, err:
    names = onerror(os.listdir, path, sys.exc_info())

If solution 1 is acceptable in the general case, then I think a better fix would look like this:

try:
    names = os.listdir(path)
except os.error, err:
    onerror(os.listdir, path, sys.exc_info())
    return

That is, this is another case in which we can't continue even if onerror returns. However, onerror is free to correct the problem and then call rmtree. (The danger, of course, is infinite recursion, but I don't think it is our responsibility to protect the author of an onerror handler from that potential mistake.) By analogy to the other place rmtree returns after an onerror call, the above fix does fix a real bug, regardless of the disposition of the feature request, since currently if onerror returns we get a name error.

Your solution sounds fine to me. Currently we don't get a NameError because "names" is set to [] before the "try". What happens is that the "for" is skipped and later rmdir fails with "directory not empty".

The whole error handling in rmtree strikes me as something that cannot be used efficiently. (see also #7969). How can you decide in an isolated function, that can be called anywhere in the tree you want to remove, the proper thing to do ? You don't know the global status of what is going on.
I think rmtree() should drop these onerror calls and have two different behaviors: 1/ remove all it can in the tree, and return a list of files it couldn't remove, with the error for each file. The developer can then act upon. 2/ analyze the tree to see if the full removal can be done. If it's possible, it does it, if not, it doesn't do anything and return the problems. For 2/ a possible way to do it could be to copy in a temporary place files that are being removed and copy them back in place in case a problem occurs. This can be long and space consuming though, for big files and big trees. I am not 100% sure 2/ is really useful though... Do you really need the global status? I wrote an onerror that seems to works fine after I modified rmtree with the "return" suggested by r.david.murray. It assumes that: if os.listdir fails: the user doesn't have read permissions in the dir; if os.remove or os.rmdir fails: the user doesn't have write permissions in the dir that contains the file/dir being removed. There are other reasons it can fail (attributes, acl, readonly filesystem, ...) but having access to the global status doesn't seem to be of much help anyway. I don't like your fix 2/ because it can fail to copy files if you don't have read permissions for the file but have write permissions in the directory (so you can delete the file). Besides, the behaviour doesn't seem useful. /1 seems ok to me but to make use of the global status it provides the user must write a somewhat complex recovery code. All in all it seems the current behaviour of having an onerror function is more user friendly. > /1 seems ok to me but to make use of the global status it provides > the user must write a somewhat complex recovery code. The onerror() code you did is as complex as a global function working with a sequence returned by rmtree, since it is used *everywhere* in rmtree, and not only for os.listdir issues. 
IOW, if you didn't handle other possible failures than os.listdir errors, it will fail if rmtree calls it for other APIs like os.remove, etc. If we state that onerror() is not used as a fallback in rmtree(), and does whatever it wants to do on its side, in an isolated manner, then I find it simpler that this function works with a list of paths rmtree() failed to remove, at the end of the rmtree() process. I'd be curious to see your onerror() function btw: if it's here just to silence permission errors, 1/ would make it even simpler: don't deal with the error list returned by rmtree(), that's all.

Well, I don't think removing the current onerror support is a viable option for backward compatibility reasons, so we might as well fix it. I agree that (2) sounds tricky to get reliably right, while I don't see how (1) is an improvement over onerror. It seems to me that the list is either really a tree, in which case the error handler has to reimplement most of rmtree's logic in order to do a recovery, or it is a list of error/filepath pairs, in which case the error handler would in most cases simply be iterating over the list and branching based on the error flag...which is no different than what onerror allows, but onerror doesn't need the loop code. Actually, it's less capable than what onerror allows, since if onerror successfully removes a problem file, rmtree will remove the directory, whereas a (1) style handler will have to have its own error logic for dealing with the successive removals (and the (1) style list will have to be guaranteed to be sorted in the correct bottom-up order). I guess I don't see how knowing the global state (which in this case appears to mean the total list of files and directories where removal failed) is useful very often, if ever. It feels like it is a more complicated API that provides little benefit. Do you have some use cases in mind, Tarek?
On the other hand, it seems to me that a nice improvement to onerror would be for it to accept an object, and call an error-case-specific method for each different error case, perhaps even checking for method existence first and doing its normal error handling for that case if the method isn't defined.

Here is my current error handler:

def handleRmtreeError(func, path, exc):
    excvalue = exc[1]
    if excvalue.errno == errno.EACCES:
        if func in (os.rmdir, os.remove):
            parentpath = path.rpartition('/')[0]
            os.chmod(parentpath, stat.S_IRWXU) # 0700
            func(path)
        elif func is os.listdir:
            os.chmod(path, stat.S_IRWXU) # 0700
            rmtree(path=path, ignore_errors=False, onerror=handleRmtreeError)
    else:
        raise

Looking back to this code there is an infinite recursion bug if os.chmod fails for some reason in the os.listdir condition. I don't see an easy way to solve this...

> Well, I don't think removing the current onerror support is a viable
> option for backward compatibility reasons,
> so we might as well fix it.

The options could be deprecated since the new behavior would *return* errors.

> Do you have some use cases in mind, Tarek?

What I have in mind is robustness and simplicity: robustness because rmtree() will stop calling third party code that can possibly fail and blow the whole process, while working at removing the tree. Simplicity because, if it fails at removing some files using the usual os.* APIs, it will just return these errors. Having this two-phase process will ensure that rmtree() did all that was possible to remove files. And as I said previously, I am curious to know what is going to be done in the onerror() function when something fails in rmtree(). I doubt any third-party code will do better. This statement "I couldn't copy this file, try it yourself" seems doomed to complexity. If the only use case for onerror() is to silence failures, returning these failures seems quite enough.
A la SMTP, when you get back a list of mails that couldn't be sent: it doesn't ask you to send them yourself, it just informs you. Now maybe we do miss some APIs to check for file tree sanity, like: - are the permissions the same throughout the tree ? - is there any link that will make rmtree() fail ? - etc. Looking at your example rubenlm, it looks like a case that is missing in rmtree(). You are trying to chmod your tree if a file in there cannot be removed because of the permissions. This sounds like something we need to add in rmtree() directly, for example under a "force_permissions" flag that would handle permission failures by trying to chmod. I think rmtree() should not try to delegate the hard work to third party code, and should try to handle as many failures as possible, and just return errors.
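For illustration, the "remove all it can and return what failed" behavior discussed in this thread can be approximated today as a thin wrapper around shutil.rmtree, using onerror purely to collect failures rather than to recover - a sketch, not the API actually being proposed:

```python
import os
import shutil
import tempfile

def rmtree_collect(path):
    """Remove as much of the tree as possible; return a list of
    (function, path, exception) tuples for anything that failed."""
    errors = []

    def collect(func, failed_path, exc_info):
        # Called by rmtree on each failure; we record it and move on.
        errors.append((func, failed_path, exc_info[1]))

    shutil.rmtree(path, onerror=collect)
    return errors

# Demo on a throwaway tree: everything is removable, so no errors.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
open(os.path.join(root, "sub", "f.txt"), "w").close()
failures = rmtree_collect(root)
print(failures, os.path.exists(root))
```

The caller then iterates over the returned list and decides what to do, which is essentially the usage pattern proposal (1) envisions.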
http://bugs.python.org/issue8523
Johannes Stezenbach wrote: > On Sat, Dec 03, 2005, Klaus Schmidinger wrote: > >>(AFAIK with NPTL all threads >>of a given program have the same pid, so you won't be able to >>distinguish them in 'top'). > > > This is not entirely true, you can still see and distinguish > the threads in htop or "ps -T u -C vdr" etc. (top does not work). > >.) Does this "gettid" call return a different tid than "pthread_self()"? I'm just wondering because the introduction of "pthread_self()" was one of the things we had to change to make VDR run with NPTL... Klaus > Johannes > > --- vdr-1.3.37/thread.c.orig 2005-12-03 19:52:38.000000000 +0100 > +++ vdr-1.3.37/thread.c 2005-12-03 20:12:47.000000000 +0100 > @@ -17,6 +17,11 @@ > #include <unistd.h> > #include "tools.h" > > +static inline pid_t gettid(void) > +{ > + return (pid_t) syscall(224); > +} > + > static bool GetAbsTime(struct timespec *Abstime, int MillisecondsFromNow) > { > struct timeval now; > @@ -231,10 +236,10 @@ void cThread::SetDescription(const char > void *cThread::StartThread(cThread *Thread) > { > if (Thread->description) > - dsyslog("%s thread started (pid=%d, tid=%ld)", Thread->description, getpid(), pthread_self()); > + dsyslog("%s thread started (pid=%d, tid=%d)", Thread->description, getpid(), gettid()); > Thread->Action(); > if (Thread->description) > - dsyslog("%s thread ended (pid=%d, tid=%ld)", Thread->description, getpid(), pthread_self()); > + dsyslog("%s thread ended (pid=%d, tid=%d)", Thread->description, getpid(), gettid()); > Thread->running = false; > Thread->active = false; > return NULL;
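The distinction behind this patch - pthread_self() returns a library-level thread handle, while gettid() returns the kernel's per-thread id that tools like htop and "ps -T" display - exists in other runtimes too. As an aside (not part of the VDR patch), modern Python (3.8+) exposes exactly the same split:

```python
import threading

ids = {}

def record(name):
    # get_ident(): opaque interpreter/pthread-level id (like pthread_self).
    # get_native_id(): kernel thread id (like gettid), the one shown by
    # tools such as htop or "ps -T".
    ids[name] = (threading.get_ident(), threading.get_native_id())

record("main")
t = threading.Thread(target=record, args=("worker",))
t.start()
t.join()
print(ids)
```

Logging the native id, as the patch does for VDR, is what lets you match a log line to a specific row in htop.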
http://www.linuxtv.org/pipermail/vdr/2005-December/006484.html
Guido van Rossum wrote: > Another idea might be to change the exec() spec so that you are > required to pass in a namespace (and you can't use locals() either!). > Then the whole point becomes moot. I think of exec as having two major uses: (1) A run-time compiler (2) A way to change the local namespace, based on run-time information (such as a config file). By turning exec into a function with its own namespace (and enforcing a readonly locals()), the second use is eliminated. Is this intentional for security/style/efficiency/predictability? If so, could exec/eval at least (1) Be treatable as nested functions, so that they can *read* the current namespace. (2) Grow a return value, so that they can more easily pass information back to at least a (tuple of) known variable name(s). -jJ
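For context, use (1) - the run-time compiler - already works cleanly when exec is given an explicit namespace, which is essentially what the quoted proposal would mandate. A small sketch in modern Python 3 syntax (where exec is a function, not the statement being discussed in this 2005 thread):

```python
# Compile and run a code string in its own namespace, then read the
# results back out of that dict -- no reliance on locals().
ns = {}
exec("def triple(n):\n    return 3 * n\n\nresult = triple(14)", ns)
print(ns["result"])  # -> 42
```

The namespace dict effectively serves as the "return value" the post asks for: anything the executed code binds is available afterwards under a known name.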
https://mail.python.org/pipermail/python-dev/2005-October/057380.html
> bits.zip > bits.c, change:2008-01-26,size:9837b /* * CS:APP Data Lab * * <Please put your name and userid here> * */ #if 0 /* * Instructions to Students: * * STEP 1: Read the following instructions carefully. */ You will provide your solution to the Data Lab by editing the collection of functions in this source file. INTEGER CODING RULES ... 7. Use any data type other than int. This implies that you cannot use arrays, structs, or unions. FLOATING POINT CODING RULES For the problems that require you to implement floating-point operations, the coding rules are less strict. You are allowed to use looping and conditional control. You are allowed to use both ints and unsigneds. You can use arbitrary integer and unsigned constants. You are expressly forbidden to: 1. Define or use any macros. 2. Define any additional functions in this file. 3. Call any functions. 4. Use any form of casting. 5. Use any data type other than int or unsigned. This means that you cannot use arrays, structs, or unions. 6. Use any floating point data types, operations, or constants. ... Use the BDD checker to formally verify your functions. 5. The maximum number of ops for each function is given in the header comment for each function. If there are any inconsistencies between the maximum ops in the writeup and in this file, consider this file the authoritative source. /* * STEP 2: Modify the following functions according to the coding rules. * * IMPORTANT. TO AVOID GRADING SURPRISES: * 1. Use the dlc compiler to check that your solutions conform * to the coding rules. * 2. Use the BDD checker to formally verify that your solutions produce * the correct answers. */ #endif /* * bitAnd - x&y using only ~ and | * Example: bitAnd(6, 5) = 4 * Legal ops: ~ | * Max ops: 8 * Rating: 1 */ int bitAnd(int x, int y) { return ~((~x)|(~y)); } /* * thirdBits - return word with every third bit (starting from the LSB) set to 1 * Legal ops: !
~ & ^ | + << >> * Max ops: 8 * Rating: 1 */ int thirdBits(void) { int x = 0x24; x = x+(x<<6); x = x+(x<<12); x = x+(x<<24); return (x<<1)+1; } /* * conditional - same as x ? y : z * Example: conditional(2,4,5) = 4 * Legal ops: ! ~ & ^ | + << >> * Max ops: 16 * Rating: 3 */ int conditional(int x, int y, int z) { return (((!x)-1)&y) + ((~((!x)-1))&z); } /* * logicalShift - shift x to the right by n, using a logical shift * Can assume that 1 <= n <= 31 * Examples: logicalShift(0x87654321,4) = 0x08765432 * Legal ops: ~ & ^ | + << >> * Max ops: 16 * Rating: 3 */ int logicalShift(int x, int n) { int z = x >> n; int y = 1<<31; y = y >> (n-1); y = ~y; return z&y; } /* * isNonZero - Check whether x is nonzero using * the legal operators except ! * Examples: isNonZero(3) = 1, isNonZero(0) = 0 * Legal ops: ~ & ^ | + << >> * Max ops: 10 * Rating: 4 */ int isNonZero(int x) { return 2; } /* * leftBitCount - returns count of number of consective 1's in * left-hand (most significant) end of word. * Examples: leftBitCount(-1) = 32, leftBitCount(0xFFF0F0F0) = 12 * Legal ops: ! ~ & ^ | + << >> * Max ops: 50 * Rating: 4 */ int leftBitCount(int x) { return 2; } /* * isTmin - returns 1 if x is the minimum, two's complement number, * and 0 otherwise * Legal ops: ! ~ & ^ | + * Max ops: 8 * Rating: 1 */ int isTmin(int x) { return !(x^(1<<31)); } /* * fitsShort - return 1 if x can be represented as a * 16-bit, two's complement integer. * Examples: fitsShort(33000) = 0, fitsShort(-32768) = 1 * Legal ops: ! ~ & ^ | + << >> * Max ops: 8 * Rating: 1 */ int fitsShort(int x) { int y = x>>15; return !(y^0) + !(y^(~0)); } /* * sign - return 1 if positive, 0 if zero, and -1 if negative * Examples: sign(130) = 1 * sign(-23) = -1 * Legal ops: ! ~ & ^ | + << >> * Max ops: 10 * Rating: 2 */ int sign(int x) { return (x>>31)|(!!x); } /* * isGreater - if x > y then return 1, else return 0 * Example: isGreater(4,5) = 0, isGreater(5,4) = 1 * Legal ops: ! 
~ & ^ | + << >> * Max ops: 24 * Rating: 3 */ int isGreater(int x, int y) { int case_1, case_2, sign_x, sign_y, y_minus_x; sign_x = x>>31; sign_y = y>>31; case_1 = !(sign_x^(sign_y+1)); y_minus_x = y+((~x)+1); case_2 = !!((~(sign_x^sign_y)) & (y_minus_x)>>31); return case_1 | case_2; } /* * ezThreeFourths - multiplies by 3/4 rounding toward 0, * Should exactly duplicate effect of C expression (x*3/4), * including overflow behavior. * Examples: ezThreeFourths(11) = 8 * ezThreeFourths(-9) = -6 * ezThreeFourths(1073741824) = -268435456 (overflow) * Legal ops: ! ~ & ^ | + << >> * Max ops: 12 * Rating: 3 */ int ezThreeFourths(int x) { int z = x+x+x; int sign_z = z>>31; return ((z>>2)&(~sign_z)) + (((z>>2)+1)&sign_z); } /* * trueThreeFourths - multiplies by 3/4 rounding toward 0, * avoiding errors due to overflow * Examples: trueThreeFourths(11) = 8 * trueThreeFourths(-9) = -6 * trueThreeFourths(1073741824) = 805306368 (no overflow) * Legal ops: ! ~ & ^ | + << >> * Max ops: 20 * Rating: 4 */ int trueThreeFourths(int x) { return 2; } /* * float_abs - Return bit-level equivalent of absolute value of f for * floating point argument f. * Both the argument and result are passed as unsigned int's, but * they are to be interpreted as the bit-level representations of * single-precision floating point values. * When argument is NaN, return NaN value 0x7FC00000. * Legal ops: Any integer/unsigned operations. Conditional operations * Max ops: 10 * Rating: 2 */ unsigned float_abs(unsigned uf) { int mantissa, exp; mantissa = uf&0x007fffff; exp = ((uf&0x7f800000)>>23); if((exp == 0x000000ff) && (mantissa != 0)) return 0x7fc00000; else if(uf == 0x80000000) return 0x80000000; else return uf&0x7fffffff; } /* * float_f2i - Return bit-level equivalent of expression (int) f * for floating point argument f. * Argument is passed as unsigned int, but * it is to be interpreted as the bit-level representation of a * single-precision floating point value. 
* Anything out of range (including NaN and infinity) should return * 0x80000000u. * Legal ops: Any integer/unsigned operations. Conditional operations * Max ops: 30 * Rating: 4 */ int float_f2i(unsigned uf) { return 2; } /* * float_twice - Return bit-level equivalent of expression 2*f for * floating point argument f. * Both the argument and result are passed as unsigned int's, but * they are to be interpreted as the bit-level representation of * single-precision floating point values. * When argument is NaN, return NaN value 0x7FC00000. * Legal ops: Any integer/unsigned operations. Conditional operations * Max ops: 30 * Rating: 4 */ unsigned float_twice(unsigned uf) { return 2; }
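Several of the completed puzzles are easy to sanity-check outside the lab's dlc/BDD toolchain. Below is a quick Python harness (an informal aid of mine, not part of the handout) that emulates 32-bit two's-complement arithmetic and checks bitAnd and conditional against their specifications:

```python
MASK = 0xFFFFFFFF  # 32-bit mask; Python ints are unbounded, C ints are not

def to_signed(x):
    """Interpret a 32-bit pattern as a two's-complement int."""
    x &= MASK
    return x - (1 << 32) if x & 0x80000000 else x

def bit_and(x, y):
    # bitAnd via De Morgan: x & y == ~(~x | ~y)
    return to_signed(~((~x) | (~y)))

def conditional(x, y, z):
    # same as x ? y : z, using the all-ones/all-zeros mask trick
    m = ((0 if x else 1) - 1) & MASK      # all ones when x is truthy
    return to_signed((m & y & MASK) + ((~m & MASK) & z & MASK))

print(bit_and(6, 5), conditional(2, 4, 5), conditional(0, 4, 5))
```

The harness reproduces the header-comment examples (bitAnd(6, 5) = 4, conditional(2, 4, 5) = 4), which is a cheap first check before running the lab's formal verifier.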
http://read.pudn.com/downloads109/sourcecode/others/449517/bits.c__.htm
CC-MAIN-2016-22
Example: Repeated Measures ANOVA in Python

Use the following steps to perform the repeated measures ANOVA in Python.

Step 1: Enter the data.

First, we'll create a pandas DataFrame to hold our data:

import numpy as np
import pandas as pd

#create data
df = pd.DataFrame({'patient': np.repeat([1, 2, 3, 4, 5], 4),
                   'drug': np.tile([1, 2, 3, 4], 5),
                   'response': [30, 28, 16, 34,
                                14, 18, 10, 22,
                                24, 20, 18, 30,
                                38, 34, 20, 44,
                                26, 28, 14, 30]})

#view first ten rows of data
df.head(10)

   patient  drug  response
0        1     1        30
1        1     2        28
2        1     3        16
3        1     4        34
4        2     1        14
5        2     2        18
6        2     3        10
7        2     4        22
8        3     1        24
9        3     2        20

Step 2: Perform the repeated measures ANOVA.

Next, we will perform the repeated measures ANOVA using the AnovaRM() function from the statsmodels library:

from statsmodels.stats.anova import AnovaRM

#perform the repeated measures ANOVA
print(AnovaRM(data=df, depvar='response', subject='patient', within=['drug']).fit())

              Anova
==================================
     F Value Num DF  Den DF Pr > F
----------------------------------
drug 24.7589 3.0000 12.0000 0.0000
==================================

Step 3: Interpret the results.

A repeated measures ANOVA uses the following null and alternative hypotheses:

The null hypothesis (H0): µ1 = µ2 = µ3 = µ4 (the population means are all equal)

The alternative hypothesis (Ha): at least one population mean is different from the rest

In this example, the F test-statistic is 24.7589 and the corresponding p-value is 0.0000. Since this p-value is less than 0.05, we reject the null hypothesis and conclude that there is a statistically significant difference in mean response times between the four drugs. We could report the result as follows: a repeated measures ANOVA revealed a statistically significant difference in mean response between the four drugs (F(3, 12) = 24.7589, p < 0.001).
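Since the ANOVA only tells us that at least one mean differs, a natural follow-up (not part of the original steps) is to look at the per-drug means directly. A quick sketch using the same data:

```python
import numpy as np
import pandas as pd

# Same data as in Step 1 above.
df = pd.DataFrame({'patient': np.repeat([1, 2, 3, 4, 5], 4),
                   'drug': np.tile([1, 2, 3, 4], 5),
                   'response': [30, 28, 16, 34,
                                14, 18, 10, 22,
                                24, 20, 18, 30,
                                38, 34, 20, 44,
                                26, 28, 14, 30]})

# Mean response for each drug: drug 3 is clearly lowest (15.6)
# and drug 4 highest (32.0), which drives the significant F value.
means = df.groupby('drug')['response'].mean()
print(means)
```

A formal follow-up would use post-hoc pairwise comparisons with a multiple-testing correction, but the group means already show where the difference lies.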
https://www.statology.org/repeated-measures-anova-python/
Part 3 – Implementation In the last article we covered a number of experimental design issues and made some decisions for our experiments. We decided to compare the performance of two simple artificial neural networks on the Iris Flower dataset. The first neural network will be the control arm, and it will consist of a single hidden layer of four neurons. The second neural network will be the experimental arm, and it will consist of a single hidden layer of five neurons. We will train both of these using default configurations supplied by the Keras library and collect thirty accuracy samples per arm. We will then apply the Wilcoxon Rank Sums test to test the significance of our results. Overview This will be the third part in the series. 3. Implementation With our dataset analysis and experimental design complete, let’s jump straight into coding up the experiments. If your desired dataset is hosted on Kaggle, as it is with the Iris Flower Dataset, you can spin up a Kaggle Kernel easily through the web interface: You’re also welcome to use your own development environment, provided you can load the Iris Flower dataset. Import packages Before we can make use of the many libraries available for Python, we need to import them into our notebook. We’re going to need numpy, pandas, tensorflow, keras, and sklearn. Depending on your development environment these may already be installed and ready for importing. You’ll need to install them if that’s not the case. import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import tensorflow as tf # dataflow programming from tensorflow import keras # neural networks API from sklearn.model_selection import train_test_split # dataset splitting If you’re using a Kaggle Kernel notebook you can just update the default cell. Below you can see I’ve included imports for tensorflow, keras, and sklearn.
To support those using their own coding environment, I have listed the version numbers for the imported packages below: - tensorflow==1.11.0rc1 - scikit-learn==0.19.1 - pandas==0.23.4 - numpy==1.15.2 Preparing the dataset First, we load the Iris Flower dataset into a pandas DataFrame using the following code: # Load iris dataset into dataframe iris_data = pd.read_csv("../input/Iris.csv") Input parameters Now, we need to separate the four input parameters from the classification labels. There are multiple ways to do this, but we’re going to use pandas.DataFrame.iloc, which allows selection from the DataFrame using integer indexing. # Splitting data into training and test set X = iris_data.iloc[:,1:5].values With the above code we have selected all the rows (indicated by the colon) and the columns at index 1, 2, 3, and 4 (indicated by the 1:5). You may be wondering why the fifth column was not included, as we specified 1:5, that’s because in Python we’re counting from one up to five, but not including it. If we wanted the fifth column, we’d need to specify 1:6. It’s important to remember that Python’s indexing starts at 0, not 1. If we had specified 0:5, we would also be selecting the “Id” column. To remind ourselves of what columns are at index 1, 2, 3, and 4, let’s use the pandas.DataFrame.head() method from the first part. We can also print out the contents of our new variable, X, which is storing all the Sepal Length/Width and Petal Length/Width data for our 150 samples. This is all of our input data. For now, that is all the processing needed for the input parameters. Classification labels We know from our dataset analysis in part 1 that our samples are classified into three categories, “Iris-setosa“, “Iris-virginica“, and “Iris-versicolor“. However, this alphanumeric representation of the labels is not compatible with our machine learning functions, so we need to convert them into something numeric. 
Again, there are many ways to achieve a similar result, but let’s use pandas features for categorical data. By explicitly selecting the Species column from our dataset as being of the category datatype, we can use pandas.Series.cat.codes to get numeric values for our class labels. We have one extra step, because we plan on using the categorical_crossentropy objective function to train our model. The Keras documentation gives the following instructions: When using theKeras Documentation () categorical_crossentropyloss, your targets should be in categorical format (e.g. if you have 10 classes, the target for each sample should be a 10-dimensional vector that is all-zeros except for a 1 at the index corresponding to the class of the sample). What this means is we will need to use One-hot encoding. This is quite typical for categorical data which is to be used with machine learning algorithms. Here is an example of One-hot encoding using the Iris Flower dataset: You can see that each classification label has its own column, so Setosa is \(1,0,0\), Virginica is \(0,1,0\), and Versicolor is \(0,0,1\). Luckily encoding our labels using Python and Keras is easy, and we’ve already completed the first step which is converting our alphanumeric classes to numeric ones. To convert to One-hot encoding we can use keras.utils.to_categorical(): # Use One-hot encoding for class labels Y = keras.utils.to_categorical(y,num_classes=None) Training and testing split In the previous part of this series we decided on the following: The Iris Flower dataset is relatively small at exactly 150 samples. Because of this, we will use 70% of the dataset for training, and the remaining 30% for testing, otherwise our test set will be a little on the small side.Machine Learning with Kaggle Kernels – Part 2 This is where sklearn.model_selection.train_test_split() comes in. 
This function will split our dataset into a randomised training and testing subset:

# Split into randomised training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=0)

This code splits the data, giving 30% (45 samples) to the testing set and the remaining 70% (105 samples) to the training set. The 30/70 split is defined using test_size=0.3, and random_state=0 defines the seed for the randomisation of the subsets. The data has been spread across four new arrays storing the following:

- X_train: the input parameters, to be used for training.
- y_train: the classification labels corresponding to the X_train above, to be used for training.
- X_test: the input parameters, to be used for testing.
- y_test: the classification labels corresponding to the X_test above, to be used for testing.

Before moving on, I recommend you have a closer look at the above four variables, so that you understand the division of the dataset.

Neural networks with Keras

Keras is the software library we will be using through Python to code up and conduct our experiments. It's a user-friendly, high-level neural networks library, which in our case will be running on top of TensorFlow. What is most attractive about Keras is how quickly you can go from your design to the result.

Configuring the model

The keras.Sequential() model allows you to build a neural network by stacking layers. You can add layers using the add() method, which in our case will be Dense() layers. A dense layer is a layer in which every neuron is connected to every neuron in the next layer. Dense() expects a number of parameters, e.g. the number of neurons on the layer, the activation function, the input_shape (if it is the first layer in the model), etc.
model = keras.Sequential()
model.add(keras.layers.Dense(4, input_shape=(4,), activation='tanh'))
model.add(keras.layers.Dense(3, activation='softmax'))

In the above code we have created our empty model and then added two layers. The first is a hidden layer consisting of four neurons which are expecting four inputs. The second layer is the output layer consisting of our three output neurons.

We then need to configure our model for training, which is achieved using the compile() method. Here we will specify our optimiser to be Adam(), configure for categorical classification, and specify our use of accuracy for the metric.

model.compile(keras.optimizers.Adam(), 'categorical_crossentropy', metrics=['accuracy'])

At this point, you may wish to use the summary() method to confirm you've built the model as intended:

Training the model

Now comes the actual training of the model! We're going to use the fit() method of the model and specify the training input data and desired labels, the number of epochs (the number of times the training algorithm sees the entire dataset), and a flag to set the verbosity of the process to silent. Setting the verbosity to silent is entirely optional, but it helps us manage the notebook output.

model.fit(X_train, y_train, epochs=300, verbose=0)

If you're interested in receiving more feedback during the training (or optimisation) process, you can remove the assignment of the verbose flag when invoking the fit() method to use the default value. Then, while the training algorithm is being executed, you will see output at every epoch:

Testing the model

After the neural network has been trained, we want to evaluate it against our test set and output its accuracy. The evaluate() method returns a list containing the loss value at index 0 and, in this case, the accuracy metric at index 1.
accuracy = model.evaluate(X_test, y_test)[1]

If we run all the code up until this point and output the contents of our accuracy variable, we should see something similar to the following:

Generating all our results

Up until this point, we have successfully prepared the Iris Flower dataset, configured our model, trained our model, evaluated it using the test set, and reported its accuracy. However, this reported accuracy is only one sample of our desired thirty. We can generate the rest with a simple loop to repeat the process thirty times, and a list to store all the results. This only requires some minor modifications to our existing code:

results_control_accuracy = []
for i in range(0,30):
    model = keras.Sequential()
    model.add(keras.layers.Dense(4, input_shape=(4,), activation='tanh'))
    model.add(keras.layers.Dense(3, activation='softmax'))
    model.compile(keras.optimizers.Adam(), 'categorical_crossentropy', metrics=['accuracy'])
    model.fit(X_train, y_train, epochs=300, verbose=0)
    accuracy = model.evaluate(X_test, y_test)[1]
    results_control_accuracy.append(accuracy)
print(results_control_accuracy)

This will take a few minutes to execute depending on whether you're using a Kaggle Kernel notebook or your own development environment, but once it has finished you should see a list containing the accuracy results for all thirty of the executions (your results will vary):

[0.9333333359824286, 0.6000000052981906, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9111111124356588, 0.9111111124356588]

These are the results for our control arm; let's now do the same for our experimental arm. The experimental arm only has one difference: the number of neurons on the hidden layer. We can re-use our code for the control arm and just make a single modification, where:

model.add(keras.layers.Dense(4, input_shape=(4,), activation='tanh'))

is changed to:

model.add(keras.layers.Dense(5, input_shape=(4,), activation='tanh'))

Of course, we'll also need to change the name of the list variable so that we don't overwrite the results for our control arm.
The code will end up looking like this:

results_experimental_accuracy = []
for i in range(0,30):
    model = keras.Sequential()
    model.add(keras.layers.Dense(5, input_shape=(4,), activation='tanh'))
    model.add(keras.layers.Dense(3, activation='softmax'))
    model.compile(keras.optimizers.Adam(), 'categorical_crossentropy', metrics=['accuracy'])
    model.fit(X_train, y_train, epochs=300, verbose=0)
    accuracy = model.evaluate(X_test, y_test)[1]
    results_experimental_accuracy.append(accuracy)
print(results_experimental_accuracy)

After executing the above and waiting a few minutes, we will have our second set of results:

[0.9111111124356588, 0.9555555568801032, 0.9555555568801032, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9555555568801032, 0.933333334657881, 0.9777777791023254, 0.9777777791023254, 0.9777777791023254, 0.9555555568801032, 0.9777777791023254, 0.933333334657881, 0.9333333359824286, 0.9777777791023254, 0.9777777791023254, 0.9333333359824286, 0.9777777791023254, 0.9555555568801032, 0.9777777791023254, 0.9777777791023254]

Saving the results

The results for our experiment have been generated, and it's important that we save them somewhere, so that we can use them later. There are multiple approaches to saving or persisting your data, but we are going to make use of pandas.DataFrame.to_csv():

pd.DataFrame(results_control_accuracy).to_csv('results_control_accuracy.csv', index=False)
pd.DataFrame(results_experimental_accuracy).to_csv('results_experimental_accuracy.csv', index=False)

The above code will save your results to individual files corresponding to the arm of the experiment. Where the files go depends entirely on your development environment. If you're developing in your own local environment, then you will likely find the files in the same folder as your notebook or script. If you're using a Kaggle Kernel, it is important that you click the blue commit button in the top right of the page. It will take a few minutes to commit your notebook, but once it's done, you know your file is safe.
It’s not immediately obvious where the files have been stored, but you can double check their existence by repeating the following steps: Conclusion In this article we prepared our dataset such that it was ready to be fed into our neural network training and testing process. We then built and trained our neural network models using Python and Keras, followed by some simple automation to generate thirty samples per arm of our experiment. In the next part of this four-part series, we will have a look at how our solutions performed and discuss anything interesting about the results. This will include some visualisation, and we may even return to our experiment code to produce some new results.
https://blog.shahinrostami.com/2018/10/machine-learning-with-kaggle-kernels-part-3/
Install the `journo` command via npm:

    sudo npm install -g journo

... now, let's go through those features one at a time:

- Create a folder for your blog, and `cd` into it.
- Type `journo init` to bootstrap a new empty blog.
- Edit the `config.json`, `layout.html`, and `posts/index.md` files to suit.
- Type `journo` to start the preview server, and have at it.

We'll use the excellent **marked** module to compile Markdown into HTML, and **Underscore** for many of its goodies later on. Up top, create a namespace for shared values needed by more than one function.

```coffeescript
marked = require 'marked'
_ = require 'underscore'
shared = {}
```

To render a post, we take its raw source, treat it as both an Underscore template (for HTML generation) and as Markdown (for formatting), and insert it into the layout as content.

```coffeescript
loadLayout = (force) ->
  return layout if not force and layout = shared.layout
  shared.layout = _.template(fs.readFileSync('layout.html').toString())
```

A blog is a folder on your hard drive. Within the blog, you have a `posts` folder for blog posts, a `public` folder for static content, a `layout.html` file for the layout which wraps every page, and a `journo.json` file for configuration. During a build, a static version of the site is rendered into the `site` folder, by rsyncing over all static files, rendering and writing every post, and creating an RSS feed.

The configuration file is where you keep the configuration details of your blog, and how to connect to the server you'd like to publish it on. The valid settings are: `title`, `description`, `author` (for RSS), `url`, `publish`, and `publishPort` (if your server doesn't listen to SSH on the usual one).

## Publish via rsync

Publishing is nice and rudimentary.
We build out an entirely static version of the site and **rsync** it up to the server.

```coffeescript
Journo.publish = ->
  do Journo.build
  rsync 'site/images/', path.join(shared.config.publish, 'images/'), ->
    rsync 'site/', shared.config.publish
```

A helper function for **rsync**ing, with logging, and the ability to wait for the rsync to continue before proceeding. This is useful for ensuring that any new photos have finished uploading (very slowly) before the update to the feed is syndicated out.

## Maintain a Manifest File

The "manifest" is where Journo keeps track of metadata -- the title, description, publication date and last modified time of each post. Everything you need to render out an RSS feed ... and everything you need to know if a post has been updated or removed.

```coffeescript
manifestPath = 'journo-manifest.json'

loadManifest = ->
  do loadConfig

  shared.manifest = if fs.existsSync manifestPath
    JSON.parse fs.readFileSync manifestPath
  else
    {}

  do updateManifest
  fs.writeFileSync manifestPath, JSON.stringify shared.manifest
```

We update the manifest by looping through every post and every entry in the existing manifest, looking for differences in `mtime`, and recording those along with the title and description of each post.

```coffeescript
updateManifest = ->
  manifest = shared.manifest
  posts = folderContents 'posts'

  delete manifest[post] for post of manifest when post not in posts

  yes
```

## Retina Ready

In the future, it may make sense for Journo to have some sort of built-in facility for automatically downsizing photos from retina to regular sizes ... But for now, this bit is up to you.

## Syntax Highlight Code

We syntax-highlight blocks of code with the nifty **highlight** package that includes heuristics for auto-language detection, so you don't have to specify what you're coding in.
```coffeescript
{Highlight} = require 'highlight'

marked.setOptions highlight: (code, lang) -> Highlight code
```

## Publish a Feed

We'll use the **rss** module to build a simple feed of recent posts. Start with the basic `author`, blog `title`, `description` and `url` configured in the `config.json`. Then, each post's `title` is the first header present in the post, the `description` is the first paragraph, and the date is the date you first created the post file.

```coffeescript
Journo.feed = ->
  RSS = require 'rss'
  do loadConfig
  config = shared.config

  feed = new RSS
    title: config.title
    description: config.description
    feed_url: "#{shared.siteUrl}/rss.xml"
    site_url: shared.siteUrl
    author: config.author

  for post in sortedPosts()[0...20]
    entry = shared.manifest[post]
    feed.item
      title: entry.title
      description: entry.description
      url: postUrl post
      date: entry.pubtime

  feed.xml()
```

## Quickly Bootstrap a New Blog

We **init** a new blog into the current directory by copying over the contents of a basic `bootstrap` folder.

## Preview via a Local Server

Instead of constantly rebuilding a purely static version of the site, Journo provides a preview server (which you can start by just typing `journo` from within your blog).

```coffeescript
Journo.preview = ->
  http = require 'http'
  mime = require 'mime'
  url = require 'url'
  util = require 'util'
  do loadManifest

  server = http.createServer (req, res) ->
    rawPath = url.parse(req.url).pathname.replace(/(^\/|\/$)/g, '') or 'index'
```

If the request is for a preview of the RSS feed...

```coffeescript
    if rawPath is 'feed.rss'
      res.writeHead 200, 'Content-Type': mime.lookup('.rss')
      res.end Journo.feed()
```

If the request is for a static file that exists in our `public` directory...
```coffeescript
    else
      publicPath = "public/" + rawPath
      fs.exists publicPath, (exists) ->
        if exists
          res.writeHead 200, 'Content-Type': mime.lookup(publicPath)
          fs.createReadStream(publicPath).pipe res
```

If the request is for the slug of a valid post, we reload the layout, and render it...

```coffeescript
        else
          post = "posts/#{rawPath}.md"
          fs.exists post, (exists) ->
            if exists
              loadLayout true
              fs.readFile post, (err, content) ->
                res.writeHead 200, 'Content-Type': 'text/html'
                res.end Journo.render post, content
```

Anything else is a 404. (Does anyone know a cross-platform equivalent of the OSX `open` command?)

```coffeescript
            else
              res.writeHead 404
              res.end '404 Not Found'

  server.listen 1234
  console.log "Journo is previewing at"
  exec "open"
```

## Work Without JavaScript, But Default to a Fluid JavaScript-Enabled UI

`alt` attributes for captions, for example.

Since the blog is public, it's nice if search engines can see all of the pieces as well as readers.

## Finally, Putting it all Together. Run Journo From the Terminal

We'll do the simplest possible command-line interface. If a public function exists on the `Journo` object, you can run it. *Note that this lets you do silly things, like* `journo toString` *but no big deal.*

```coffeescript
Journo.run = ->
  command = process.argv[2] or 'preview'
  return do Journo[command] if Journo[command]
  console.error "Journo doesn't know how to '#{command}'"
```

Let's also provide a help page that lists the available commands.

```coffeescript
Journo.help = Journo['--help'] = ->
  console.log """
  Usage: journo [command]

  If called without a command, `journo` will preview your blog.
```
```coffeescript
  init      start a new blog in the current folder
  build     build a static version of the blog into 'site'
  preview   live preview the blog via a local server
  publish   publish the blog to your remote server
  """
```

And we might as well do the version number, for completeness' sake.

```coffeescript
Journo.version = Journo['--version'] = ->
  console.log "Journo 0.0.1"
```

## Miscellaneous Bits and Utilities

Little utility functions that are useful up above.

The file path to the source of a given post.

```coffeescript
postPath = (post) -> "posts/#{post}"
```

The server-side path to the HTML for a given post.

```coffeescript
htmlPath = (post) ->
  name = postName post
  if name is 'index'
    'site/index.html'
  else
    "site/#{name}/index.html"
```

The name (or slug) of a post, taken from the filename.

```coffeescript
postName = (post) -> path.basename post, '.md'
```

The full, absolute URL for a published post.

```coffeescript
postUrl = (post) -> "#{shared.siteUrl}/#{postName(post)}/"
```

Starting with the string contents of a post, detect the title -- the first heading.

```coffeescript
detectTitle = (content) ->
  _.find(marked.lexer(content), (token) -> token.type is 'heading')?.text
```

Starting with the string contents of a post, detect the description -- the first paragraph.

```coffeescript
detectDescription = (content, post) ->
  desc = _.find(marked.lexer(content), (token) -> token.type is 'paragraph')?.text
  marked.parser marked.lexer _.template("#{desc}...")(renderVariables(post))
```

Helper function to read in the contents of a folder, ignoring hidden files and directories.

```coffeescript
folderContents = (folder) ->
  fs.readdirSync(folder).filter (f) -> f.charAt(0) isnt '.'
```

Return the list of posts currently in the manifest, sorted by their date of publication.

```coffeescript
sortedPosts = ->
  _.sortBy _.without(_.keys(shared.manifest), 'index.md'), (post) -> shared.manifest[post].pubtime
```

The shared variables we want to allow our templates (both posts, and layout) to use in their evaluations.
In the future, it would be nice to determine exactly what best belongs here, and provide an easier way for the blog author to add functions to it.

```coffeescript
renderVariables = (post) ->
  {
    _
    fs
    path
    mapLink
    postName
    folderContents
    posts: sortedPosts()
    post: path.basename(post)
    manifest: shared.manifest
  }
```

Quick function which creates a link to a Google Map search for the name of the place.

```coffeescript
mapLink = (place, additional = '', zoom = 15) ->
  query = encodeURIComponent("#{place}, #{additional}")
  "#{place}"
```

Convenience function for catching errors (keeping the preview server from crashing while testing code), and printing them out.

```coffeescript
catchErrors = (func) ->
  try do func
  catch err
    console.error err.stack
```

Finally, for errors that you want the app to die on -- things that should break the site build.

```coffeescript
fatal = (message) ->
  console.error message
  process.exit 1
```
https://xscode.com/jashkenas/journo
Okay guys so here is my problem.

Write a program that asks the user for the low and high integer in a range of integers. The program then asks the user for integers to be added up. The program computes two sums: the sum of integers that are in the range (inclusive), and the sum of integers that are outside of the range. The user signals the end of input with a 0. Your output should look like this:

Sample input/output
In-range Adder
Low end of range: 20
High end of range: 50
Enter data: 21
Enter data: 60
Enter data: 49
Enter data: 30
Enter data: 91
Enter data: 0
Sum of in range values: 100
Sum of out of range values: 151

This is my assignment and below is my code.

import java.util.Scanner;

class InRangeAdder {
    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        int low, high, data = 1, rangesum = 0, outrangesum = 0;
        System.out.println("Low end of range:");
        low = scan.nextInt();
        System.out.println("High end of range:");
        high = scan.nextInt();
        while (data != 0) {
            System.out.println("Enter data:");
            data = scan.nextInt();
        }
        if (data >= low && data <= high) {
            rangesum = rangesum + data;
        }
        if (data > high || data < low) {
            outrangesum = outrangesum + data;
        }
        System.out.println("Sum of in range values: " + rangesum);
        System.out.println("Sum of out of range values: " + outrangesum);
    }
}

The problem is that the program simply gives an output of 0 for both "rangesum" and "outrangesum". I don't quite understand why that is. Also, I have a quick question: the program needs me to end the program when the value of data is 0, but in order to initialize it I need to give it a value. Usually I would give it a value of 0 like I have for rangesum and outrangesum, but if I do, the program does not run till the loop as it considers the value of data to be 0 and ends the program right away. What would be a workaround to this, and when do I need to have a value to initialize an integer? For example, I do not need a value for low and high.
Is this because the program recognizes that a value is going to be defined but cannot do that for the other integers as they are inside a loop? Thanks so much and any feedback is greatly appreciated. Cheers.
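For comparison, here is a sketch of one way the loop could be restructured: reading each value before testing the sentinel (so data needs no dummy initial value) and accumulating inside the loop. The sums() helper and the hard-coded sample input are illustrative assumptions, not part of the assignment:

```java
import java.util.Scanner;

public class InRangeAdderFixed {
    // Returns {inRangeSum, outOfRangeSum} for values read until the 0 sentinel.
    static int[] sums(Scanner scan, int low, int high) {
        int rangesum = 0, outrangesum = 0;
        while (true) {
            int data = scan.nextInt();
            if (data == 0) break;            // sentinel checked after reading
            if (data >= low && data <= high) rangesum += data;
            else outrangesum += data;
        }
        return new int[] { rangesum, outrangesum };
    }

    public static void main(String[] args) {
        // The sample input from the assignment, fed in as a string for brevity
        Scanner scan = new Scanner("21 60 49 30 91 0");
        int[] result = sums(scan, 20, 50);
        System.out.println("Sum of in range values: " + result[0]);     // 100
        System.out.println("Sum of out of range values: " + result[1]); // 151
    }
}
```

Because the checks run once per value inside the loop, the sums actually accumulate, and the `while (true)` with a `break` after `nextInt()` sidesteps the initialisation question entirely.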
https://www.javaprogrammingforums.com/whats-wrong-my-code/37305-looping-doesnt-change-value.html
GETCWD(3)                    BSD Programmer's Manual                   GETCWD(3)

getcwd, getwd - get working directory pathname

#include <unistd.h>

char *
getcwd(char *buf, size_t size);

char *
getwd(char *buf);

The getcwd() function copies the absolute pathname of the current working directory into the memory referenced by buf and returns a pointer to buf. The size argument is the size, in bytes, of the array referenced by buf.

If buf is NULL, space is allocated as necessary to store the pathname. This space may later be free(3)'d.

The function getwd() is a compatibility routine which calls getcwd() with its buf argument and a size of MAXPATHLEN (as defined in the include file <sys/param.h>). Obviously, buf should be at least MAXPATHLEN bytes in length.

These routines have traditionally been used by programs to save the name of a working directory for the purpose of returning to it. A much faster and less error-prone method of accomplishing this is to open the current directory (.) and use the fchdir(2) function to return.

The getcwd() function will fail if:

[EACCES]   Read or search permission was denied for a component of the pathname.
[EINVAL]   The size argument is zero.
[ENOENT]   A component of the pathname no longer exists.
[ENOMEM]   Insufficient memory is available.
[ERANGE]   The size argument is greater than zero but smaller than the length of the pathname plus 1.
http://mirbsd.mirsolutions.de/htman/sparc/man3/getwd.htm
Samsung SRIB internship interview experience (2018)

Round 1 (Online Coding Test): The first round was an online coding test, conducted on the CoCubes platform. There were 3 coding questions: one of 3 marks and two of 5 marks. The 3-mark question was based on arrays. Mine was:

1. There's an array and you need to find the second minimum of the odd indices of the array and the second maximum of the even indices of the array, then return their sum. The question can be done in linear time.

2. Given a binary tree and a node (the value of the node), you need to find the length of the closest leaf from that node. If the given node is not present in the binary tree then return -1, and if the node is itself a leaf then return 0.

Solution approach:
- step 1: Check whether the given node is present or not (using either BFS or DFS)
- step 2: Find the length of the closest leaf below the given node (say z)
- step 3: Find the length of the node from the root of the binary tree (say x)
- step 4: Find the length of the closest leaf from the root of the binary tree (using BFS, say y)
- step 5: return min(z, x + y)

3. There's a binary tree. You need to find the minimum number of jumps needed to reach the second node from the first one. You need to implement the function which takes the root of the tree and the values of the nodes as arguments. If any of the nodes are not present in the tree then return -1. This is basically based on the Lowest Common Ancestor (LCA) approach:
- step 1: Check whether both nodes are present in the tree or not; if not present, return -1
- step 2: Find the LCA of both nodes.
- step 3: return (length of first node from LCA + length of second node from LCA)

A total of 322 students appeared for the online coding round and around 84 students got selected for the next rounds.

Round 2 (Technical GD): The next round was a group fly round. You will be given 3 questions (one by one) and you need to write the code on paper. They gave one problem and 20-30 minutes to solve it. Problems were given from different groups. My group had 8 members, out of which 5 were eligible for the next round.
The result of the fly round was declared immediately after the completion of the round. Around 60 students got selected for the next round.

Round 3 (Tech): The next round was a personal interview (FACE2FACE, 2 interviewers). In this round, I was asked to write code for the rotations of an AVL tree, and then they asked about my project and its use in real life. Mine was completed in about 25 minutes. Almost 55 students cleared this round, as it was based on general coding and the basics of projects.

Round 4 (Tech cum HR): The next round was another personal interview (FACE2FACE, 1 senior interviewer). This was the final round (HR cum technical). Questions like "why this GPA?" were asked, and then all the discussions were based on projects and algorithms. The round was a bit longer (I guess 50 minutes) but was quite interactive.

Finally, 38 students were selected for the SRIB internship. Fortunately, we didn't face any proper HR round before the final selection!

TIPS:
- Apply brute force first (as the test cases were not that strong).
- If you complete your coding test, try to submit as fast as possible.
- Be interactive in the GD round (try to tell your approach first).
- Never forget to ask a question about the company (they will think that you are not interested in the company if you say you don't have any).
https://www.geeksforgeeks.org/samsung-srib-internship-interview-experience-2018/
From the Journals of Tarn Barford

Jul 22, 2011

I fucking love the Manos do Mono manifesto. It has very grand ideals and a philosophy of simplicity which I really dig; it is also excellently illustrated with pictures of cats. I've wanted to try it since listening to Jackson Harper talking about it on a Herding Code podcast last year.

I'm not sure if I needed to upgrade from version 2.6.7, which is included with the Linux Mint distribution I use, to use Manos, but I knew I would need a newer version for some other stuff I want to do. So I built and installed the latest Mono (2.11) from source, which was pretty straightforward.

$ git clone git://github.com/mono/mono.git
$ cd mono
$ ./autogen.sh --prefix=/usr/local
**Error**: You must have 'libtool' installed to compile Mono.

Ok, install libtool.

$ sudo apt-get install libtool
$ ./autogen.sh --prefix=/usr/local
..
configure: error: msgfmt not found. You need to install the 'gettext' package, or pass --enable-nls=no to configure.

I think I have gettext, oh well.

$ ./autogen.sh --prefix=/usr/local --enable-nls=no
$ make
$ make install

And make sure the new Mono works, awesome.

$ mono --version
Mono JIT compiler version 2.11 (master/fbff787 Fri Jul 15 17:54:49 EST 2011)

Manos was easy to build and install from source.

$ git clone
$ ./autogen.sh
$ make
$ sudo make install

Manos has a command-line tool, like rails, to create, build and serve websites.

$ manos
manos usage is: manos [command] [options]
 -h, -?, --help
 --init, -i
 --server, -s
 --docs, -d
 --build, -b
 --show-environment, --se
 --run, -r=VALUE

The --init option creates a new application.

$ manos --init ManosChat
initing: ManosChat
$ cd ManosChat/ && ls
ManosChat.cs  StaticContentModule.cs

Which creates a new folder with two files. The initial app is set up to serve static content from a content folder. We'll replace those files with a ridiculously simple long-polling "chat" application which will consist of three files: a C# file, an HTML page and a JavaScript file.
The Manos application has four routes: one for the HTML page, one for the JavaScript file, and routes to post messages and wait for updates. The routing metadata hopefully looks pretty familiar. That is it!

using Manos;
using System;
using System.IO;
using System.Threading;
using System.Collections.Generic;

namespace ManosChat
{
    public class ManosChat : ManosApp
    {
        public List<IManosContext> _waiting;

        public ManosChat ()
        {
            _waiting = new List<IManosContext>();
        }

        [Post ("/send")]
        public void Send(IManosContext context)
        {
            var message = context.Request.PostData["message"];
            foreach(var listener in _waiting)
            {
                try
                {
                    listener.Response.End(message);
                }
                catch(Exception)
                {
                }
            }
            _waiting.Clear();
            context.Response.End();
        }

        [Post ("/wait")]
        public void Wait(IManosContext context)
        {
            _waiting.Add(context);
        }

        [Get("/")]
        public void Home(IManosContext context)
        {
            var content = File.ReadAllText("index.html");
            context.Response.End(content);
        }

        [Get("/app.js")]
        public void Script(IManosContext context)
        {
            var content = File.ReadAllText("app.js");
            context.Response.End(content);
        }
    }
}

My HTML looks something like this.

<!DOCTYPE HTML>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Manos Chat</title>
    <script type="text/javascript" src=""></script>
    <script type="text/javascript" src="/app.js"></script>
</head>
<body>
    <h1>Chat</h1>
    <input type="text" id="message" name="message" />
    <input type="submit" id="send-message" name="Send" />
    <ul id="messages">
    </ul>
</body>
</html>

And my JavaScript looks something like this.
$(function() {
    var showMessage = function(message) {
        $('#messages').prepend($('<li/>').append(message));
    };

    var getMessages = function() {
        $.ajax({
            type: 'POST',
            url: '/wait',
            data: 'data=none',
            success: function(data) {
                showMessage(data);
                getMessages();
            }
        });
    };

    var sendMessage = function(message) {
        $.ajax({
            type: 'POST',
            url: '/send',
            data: 'message=' + message,
            success: function(data) {
            }
        });
    }

    $('#send-message').click(function(e) {
        e.preventDefault();
        var m = $('#message').val();
        sendMessage(m);
    });

    getMessages();
});

The web application can be built and served using the Manos command-line tool.

$ manos --build
$ manos --server
Running ManosChat.ManosChat on port 8080.

Now we point some browser windows at the server and start chatting.
Thanks mate, I see you have some pretty neat posts about doing similar things with and without frameworks. Yes, reading the file asynchronously that would be better. Here is how it might look. Just out of interest I tried load testing each implementation with ab, the Apache HTTP server benchmarking tool. I used 10000 request and a concurrency of 300. With the blocking file read I got 6329 request / second and with the asynchronous code I got 7800 request / second. So yeah, good call :-) P.S. Apologies, my blog was adding and additional http:// in front of URL to you're site. The capital H in Http:// was throwing it. I've updated your URL so it links though right. Good article!! I think it should be better if you use the asynchronous overloads for reading the static files, right?
http://tarnbarford.net/journal/manos-long-polling-chat
<HR> <P> <H1><A NAME="NAME">NAME</A></H1> <P> perlsyn - Perl syntax <P> <HR> <H1><A NAME="DESCRIPTION">DESCRIPTION</A></H1> <P> <FONT SIZE=-1>A</FONT> Perl script consists of a sequence of declarations and statements. The only things that need to be declared in Perl are report formats and subroutines. See the sections below for more information on those declarations. All uninitialized user-created objects are assumed to start with a <CODE>null</CODE> or <CODE>0</CODE> value until they are defined by some explicit operation such as assignment. (Though you can get warnings about the use of undefined values if you like.) The sequence of statements is executed just once, unlike in <STRONG>sed</STRONG> and <STRONG>awk</STRONG> scripts, where the sequence of statements is executed for each input line. While this means that you must explicitly loop over the lines of your input file (or files), it also means you have much more control over which files and which lines you look at. (Actually, I'm lying--it is possible to do an implicit loop with either the <STRONG>-n</STRONG> or <STRONG>-p</STRONG> switch. It's just not the mandatory default like it is in <STRONG>sed</STRONG> and <STRONG>awk</STRONG>.) <P> <HR> <H2><A NAME="Declarations">Declarations</A></H2> <P> Perl is, for the most part, a free-form language. (The only exception to this is format declarations, for obvious reasons.) Comments are indicated by the <CODE>"#"</CODE> character, and extend to the end of the line. If you attempt to use <CODE>/* */</CODE> C-style comments, it will be interpreted either as division or pattern matching, depending on the context, and <FONT SIZE=-1>C++</FONT> <CODE>//</CODE> comments just look like a null regular expression, so don't do that. <P> <FONT SIZE=-1>A</FONT> declaration can be put anywhere a statement can, but has no effect on the execution of the primary sequence of statements--declarations all take effect at compile time. Typically all the declarations are put at the beginning or the end of the script. However, if you're using lexically-scoped private variables created with [perlfunc:my|my()], you'll have to make sure your format or subroutine definition is within the same block scope as the my if you expect to be able to access those private variables. <P> Declaring a subroutine allows a subroutine name to be used as if it were a list operator from that point forward in the program.
You can declare a subroutine without defining it by saying <CODE>sub name</CODE>, thus: <P> <PRE> sub myname; $me = myname $0 or die "can't get myname"; </PRE> <P> Note that it functions as a list operator, not as a unary operator; so be careful to use <CODE>or</CODE> instead of <CODE>||</CODE> in this case. However, if you were to declare the subroutine as <CODE>sub myname ($)</CODE>, then <CODE>myname</CODE> would function as a unary operator, so either <CODE>or</CODE> or <CODE>||</CODE> would work. <P> Subroutine declarations can also be loaded up with the [perlfunc:require|require] statement or both loaded and imported into your namespace with a [perlfunc:use|use] statement. See [perlman:perlmod|the perlmod manpage] for details on this. <P> <FONT SIZE=-1>A</FONT> statement sequence may contain declarations of lexically-scoped variables, but apart from declaring a variable name, the declaration acts like an ordinary statement and is elaborated within the sequence of statements as if it were an ordinary statement. <P> <HR> <H2><A NAME="Simple_statements">Simple statements</A></H2> <P> The only kind of simple statement is an expression evaluated for its side effects. Every simple statement must be terminated with a semicolon, unless it is the final statement in a block, in which case the semicolon is optional. <FONT SIZE=-1>(A</FONT> semicolon is still encouraged there if the block takes up more than one line, because you may eventually add another line.) Note that there are some operators like [perlfunc:eval] and [perlfunc:do] that look like compound statements, but aren't (they're just TERMs in an expression), and thus need an explicit termination if used as the last item in a statement. <P> Any simple statement may optionally be followed by a <EM>SINGLE</EM> modifier, just before the terminating semicolon (or block ending). The possible modifiers are: <P> <PRE> if EXPR unless EXPR while EXPR until EXPR foreach EXPR </PRE> <P> The <CODE>if</CODE> and <CODE>unless</CODE> modifiers have the expected semantics, presuming you're a speaker of English. The <CODE>foreach</CODE> modifier is an iterator: For each value in <FONT SIZE=-1>EXPR,</FONT> it aliases <CODE>$_</CODE> to the value and executes the statement.
The <CODE>while</CODE> and <CODE>until</CODE> modifiers have the usual ``<CODE>while</CODE> loop'' semantics (conditional evaluated first), except when applied to a [perlfunc:do|do] <FONT SIZE=-1>-BLOCK</FONT> (or to the now-deprecated [perlfunc:do|do] <FONT SIZE=-1>-SUBROUTINE</FONT> statement), in which case the block executes once before the conditional is evaluated. This is so that you can write loops like: <P> <PRE> do { $line = <STDIN>; ... } until $line eq ".\n"; </PRE> <P> See [perlfunc:do|do]. Note also that the loop control statements described later will <EM>NOT</EM> work in this construct, because modifiers don't take loop labels. Sorry. You can always put another block inside of it (for [perlfunc:next|next]) or around it (for [perlfunc:last|last]) to do that sort of thing. For [perlfunc:next|next], just double the braces: <P> <PRE> do {{ next if $x == $y; # do something here }} until $x++ > $z; </PRE> <P> For [perlfunc:last|last], you have to be more elaborate: <P> <PRE> LOOP: { do { last if $x = $y**2; # do something here } while $x++ <= $z; } </PRE> <P> <HR> <H2><A NAME="Compound_statements">Compound statements</A></H2> <P> In Perl, a sequence of statements that defines a scope is called a block. Sometimes a block is delimited by the file containing it (in the case of a required file, or the program as a whole), and sometimes a block is delimited by the extent of a string (in the case of an eval). <P> But generally, a block is delimited by curly brackets, also known as braces. We will call this syntactic construct a <FONT SIZE=-1>BLOCK.</FONT> <P> The following compound statements may be used to control flow: <P> <PRE> if (EXPR) BLOCK if (EXPR) BLOCK else BLOCK if (EXPR) BLOCK elsif (EXPR) BLOCK ... else BLOCK LABEL while (EXPR) BLOCK LABEL while (EXPR) BLOCK continue BLOCK LABEL for (EXPR; EXPR; EXPR) BLOCK LABEL foreach VAR (LIST) BLOCK LABEL BLOCK continue BLOCK </PRE> <P> Note that, unlike <FONT SIZE=-1>C</FONT> and Pascal, these are defined in terms of BLOCKs, not statements. This means that the curly brackets are <EM>required</EM>--no dangling statements allowed. If you want to write conditionals without curly brackets there are several other ways to do it.
The following all do the same thing: <P> <PRE> if (!open(FOO)) { die "Can't open $FOO: $!"; } die "Can't open $FOO: $!" unless open(FOO); open(FOO) or die "Can't open $FOO: $!"; # FOO or bust! open(FOO) ? 'hi mom' : die "Can't open $FOO: $!"; # a bit exotic, that last one </PRE> <P> The <CODE>if</CODE> statement is straightforward. Because BLOCKs are always bounded by curly brackets, there is never any ambiguity about which <CODE>if</CODE> an <CODE>else</CODE> goes with. If you use <CODE>unless</CODE> in place of <CODE>if</CODE>, the sense of the test is reversed. <P> The <CODE>while</CODE> statement executes the block as long as the expression is true (does not evaluate to the null string (<CODE>""</CODE>) or <CODE>0</CODE> or <CODE>"0"</CODE>). The <FONT SIZE=-1>LABEL</FONT> is optional, and if present, consists of an identifier followed by a colon. The <FONT SIZE=-1>LABEL</FONT> identifies the loop for the loop control statements [perlfunc:next|next], [perlfunc:last|last], and [perlfunc:redo|redo]. If the <FONT SIZE=-1>LABEL</FONT> is omitted, the loop control statement refers to the innermost enclosing loop. This may include dynamically looking back your call-stack at run time to find the <FONT SIZE=-1>LABEL.</FONT> Such desperate behavior triggers a warning if you use the <STRONG>-w</STRONG> flag. <P> If there is a [perlfunc:continue|continue] <FONT SIZE=-1>BLOCK,</FONT> it is always executed just before the conditional is about to be evaluated again, just like the third part of a <CODE>for</CODE> loop in <FONT SIZE=-1>C.</FONT> Thus it can be used to increment a loop variable, even when the loop has been continued via the [perlfunc:next|next] statement (which is similar to the <FONT SIZE=-1>C</FONT> [perlfunc:continue|continue] statement).
<P> <HR> <H2><A NAME="Loop_Control">Loop Control</A></H2> <P> The [perlfunc:next|next] command is like the [perlfunc:continue|continue] statement in <FONT SIZE=-1>C;</FONT> it starts the next iteration of the loop: <P> <PRE> LINE: while (<STDIN>) { next LINE if /^#/; # discard comments ... } </PRE> <P> The [perlfunc:last|last] command is like the <CODE>break</CODE> statement in <FONT SIZE=-1>C</FONT> (as used in loops); it immediately exits the loop in question. The [perlfunc:continue|continue] block, if any, is not executed: <P> <PRE> LINE: while (<STDIN>) { last LINE if /^$/; # exit when done with header ... } </PRE> <P> The [perlfunc:redo|redo] command restarts the loop block without evaluating the conditional again. The [perlfunc:continue|continue] block, if any, is <EM>not</EM> executed. This command is normally used by programs that want to lie to themselves about what was just input. <P> For example, when processing a file like <EM>/etc/termcap</EM>. If your input lines might end in backslashes to indicate continuation, you want to skip ahead and get the next record. <P> <PRE> while (<>) { chomp; if (s/\\$//) { $_ .= <>; redo unless eof(); } # now process $_ } </PRE> <P> which is Perl short-hand for the more explicitly written version: <P> <PRE> LINE: while (defined($line = <ARGV>)) { chomp($line); if ($line =~ s/\\$//) { $line .= <ARGV>; redo LINE unless eof(); # not eof(ARGV)! } # now process $line } </PRE> <P> Note that if there were a [perlfunc:continue|continue] block on the above code, it would get executed even on discarded lines. This is often used to reset line counters or <CODE>?pat?</CODE> one-time matches. <P> <PRE> # inspired by :1,$g/fred/s//WILMA/
 while (<>) {
     ?(fred)?    && s//WILMA $1 WILMA/;
     ?(barney)?  && s//BETTY $1 BETTY/;
     ?(homer)?   && s//MARGE $1 MARGE/;
 } continue {
     print "$ARGV $.: $_";
     close ARGV  if eof();             # reset $.
     reset       if eof();             # reset ?pat?
 } </PRE> <P> If the word <CODE>while</CODE> is replaced by the word <CODE>until</CODE>, the sense of the test is reversed, but the conditional is still tested before the first iteration. <P> The loop control statements don't work in an <CODE>if</CODE> or <CODE>unless</CODE>, since they aren't loops.
You can double the braces to make them such, though. <P> <PRE> if (/pattern/) {{ next if /fred/; next if /barney/; # do something here }} </PRE> <P> The form <CODE>while/if BLOCK BLOCK</CODE>, available in Perl 4, is no longer available. Replace any occurrence of <CODE>if BLOCK</CODE> by <CODE>if (do BLOCK)</CODE>. <P> <HR> <H2><A NAME="For_Loops">For Loops</A></H2> <P> Perl's C-style <CODE>for</CODE> loop works exactly like the corresponding <CODE>while</CODE> loop; that means that this: <P> <PRE> for ($i = 1; $i < 10; $i++) { ... } </PRE> <P> is the same as this: <P> <PRE> $i = 1; while ($i < 10) { ... } continue { $i++; } </PRE> <P> (There is one minor difference: The first form implies a lexical scope for variables declared with [perlfunc:my|my] in the initialization expression.) <P> Besides the normal array index looping, <CODE>for</CODE> can lend itself to many other interesting applications. Here's one that avoids the problem you get into if you explicitly test for end-of-file on an interactive file descriptor, causing your program to appear to hang. <P> <PRE> $on_a_tty = -t STDIN && -t STDOUT; sub prompt { print "yes? " if $on_a_tty } for ( prompt(); <STDIN>; prompt() ) { # do something } </PRE> <P> <HR> <H2><A NAME="Foreach_Loops">Foreach Loops</A></H2> <P> The <CODE>foreach</CODE> loop iterates over a normal list value and sets the variable <FONT SIZE=-1>VAR</FONT> to be each element of the list in turn. If the variable is preceded with the keyword [perlfunc:my|my], then it is lexically scoped, and is therefore visible only within the loop. Otherwise, the variable is implicitly local to the loop and regains its former value upon exiting the loop. If the variable was previously declared with [perlfunc:my|my], it uses that variable instead of the global one, but it's still localized to the loop. (Note that a lexically scoped variable can cause problems if you have subroutine or format declarations within the loop which refer to it.)
<P> The <CODE>foreach</CODE> keyword is actually a synonym for the <CODE>for</CODE> keyword, so you can use <CODE>foreach</CODE> for readability or <CODE>for</CODE> for brevity. (Or because the Bourne shell is more familiar to you than <EM>csh</EM>, so writing <CODE>for</CODE> comes more naturally.) If <FONT SIZE=-1>VAR</FONT> is omitted, <CODE>$_</CODE> is set to each value. If any element of <FONT SIZE=-1>LIST</FONT> is an lvalue, you can modify it by modifying <FONT SIZE=-1>VAR</FONT> inside the loop. That's because the <CODE>foreach</CODE> loop index variable is an implicit alias for each item in the list that you're looping over. <P> If any part of <FONT SIZE=-1>LIST</FONT> is an array, <CODE>foreach</CODE> will get very confused if you add or remove elements within the loop body, for example with [perlfunc:splice|splice]. So don't do that. <P> <CODE>foreach</CODE> probably won't do what you expect if <FONT SIZE=-1>VAR</FONT> is a tied or other special variable. Don't do that either. <P> Examples: <P> <PRE> for (@ary) { s/foo/bar/ } </PRE> <P> <PRE> foreach my $elem (@elements) { $elem *= 2; } </PRE> <P> <PRE> for $count (10,9,8,7,6,5,4,3,2,1,'BOOM') { print $count, "\n"; sleep(1); } </PRE> <P> <PRE> for (1..15) { print "Merry Christmas\n"; } </PRE> <P> <PRE> foreach $item (split(/:[\\\n:]*/, $ENV{TERMCAP})) { print "Item: $item\n"; } </PRE> <P> Here's how a <FONT SIZE=-1>C</FONT> programmer might code up a particular algorithm in Perl: <P> <PRE> for (my $i = 0; $i < @ary1; $i++) { for (my $j = 0; $j < @ary2; $j++) { if ($ary1[$i] > $ary2[$j]) { last; # can't go to outer :-( } $ary1[$i] += $ary2[$j]; } # this is where that last takes me } </PRE> <P> Whereas here's how a Perl programmer more comfortable with the idiom might do it: <P> <PRE> OUTER: foreach my $wid (@ary1) { INNER: foreach my $jet (@ary2) { next OUTER if $wid > $jet; $wid += $jet; } } </PRE> <P> See how much easier this is? It's cleaner, safer, and faster. It's cleaner because it's less noisy. 
It's safer because if code gets added between the inner and outer loops later on, the new code won't be accidentally executed. The [perlfunc:next|next] explicitly iterates the other loop rather than merely terminating the inner one. And it's faster because Perl executes a <CODE>foreach</CODE> statement more rapidly than it would the equivalent <CODE>for</CODE> loop. <P> <HR> <H2><A NAME="Basic_BLOCKs_and_Switch_Statemen">Basic BLOCKs and Switch Statements</A></H2> <P> <FONT SIZE=-1>A</FONT> <FONT SIZE=-1>BLOCK</FONT> by itself (labeled or not) is semantically equivalent to a loop that executes once. Thus you can use any of the loop control statements in it to leave or restart the block. (Note that this is <EM>NOT</EM> true in [perlfunc:eval], [perlfunc:sub], or contrary to popular belief [perlfunc:do] blocks, which do <EM>NOT</EM> count as loops.) The [perlfunc:continue|continue] block is optional. <P> The <FONT SIZE=-1>BLOCK</FONT> construct is particularly nice for doing case structures. <P> <PRE> SWITCH: { if (/^abc/) { $abc = 1; last SWITCH; } if (/^def/) { $def = 1; last SWITCH; } if (/^xyz/) { $xyz = 1; last SWITCH; } $nothing = 1; } </PRE> <P> There is no official <CODE>switch</CODE> statement in Perl, because there are already several ways to write the equivalent. In addition to the above, you could write <P> <PRE> SWITCH: { $abc = 1, last SWITCH if /^abc/; $def = 1, last SWITCH if /^def/; $xyz = 1, last SWITCH if /^xyz/; $nothing = 1; } </PRE> <P> (That's actually not as strange as it looks once you realize that you can use loop control ``operators'' within an expression. That's just the normal <FONT SIZE=-1>C</FONT> comma operator.)
<P> or <P> <PRE> SWITCH: { /^abc/ && do { $abc = 1; last SWITCH; }; /^def/ && do { $def = 1; last SWITCH; }; /^xyz/ && do { $xyz = 1; last SWITCH; }; $nothing = 1; } </PRE> <P> or formatted so it stands out more as a ``proper'' <CODE>switch</CODE> statement: <P> <PRE> SWITCH: { /^abc/ && do { $abc = 1; last SWITCH; }; </PRE> <P> <PRE> /^def/ && do { $def = 1; last SWITCH; }; </PRE> <P> <PRE> /^xyz/ && do { $xyz = 1; last SWITCH; }; $nothing = 1; } </PRE> <P> or <P> <PRE> SWITCH: { /^abc/ and $abc = 1, last SWITCH; /^def/ and $def = 1, last SWITCH; /^xyz/ and $xyz = 1, last SWITCH; $nothing = 1; } </PRE> <P> or even, horrors, <P> <PRE> if (/^abc/) { $abc = 1 } elsif (/^def/) { $def = 1 } elsif (/^xyz/) { $xyz = 1 } else { $nothing = 1 } </PRE> <P> <FONT SIZE=-1>A</FONT> common idiom for a <CODE>switch</CODE> statement is to use <CODE>foreach</CODE>'s aliasing to make a temporary assignment to <CODE>$_</CODE> for convenient matching: <P> <PRE> SWITCH: for ($where) { /In Card Names/ && do { push @flags, '-e'; last; }; /Anywhere/ && do { push @flags, '-h'; last; }; /In Rulings/ && do { last; }; die "unknown value for form variable where: `$where'"; } </PRE> <P> Another interesting approach to a switch statement is to arrange for a [perlfunc:do|do] block to return the proper value: <P> <PRE> $amode = do { if ($flag & O_RDONLY) { "r" } # XXX: isn't this 0? elsif ($flag & O_WRONLY) { ($flag & O_APPEND) ? "a" : "w" } elsif ($flag & O_RDWR) { if ($flag & O_CREAT) { "w+" } else { ($flag & O_APPEND) ? "a+" : "r+" } } }; </PRE> <P> Or <P> <PRE> print do { ($flags & O_WRONLY) ? "write-only" : ($flags & O_RDWR) ? "read-write" : "read-only"; }; </PRE> <P> Or if you are certain that all the <CODE>&&</CODE> clauses are true, you can use something like this, which ``switches'' on the value of the <CODE>HTTP_USER_AGENT</CODE> envariable.
 
<P> <PRE> #!/usr/bin/perl
 # pick out jargon file page based on browser
 $dir = 'http://www.wins.uva.nl/~mes/jargon';
 for ($ENV{HTTP_USER_AGENT}) {
     $page  =    /Mac/            && 'm/Macintrash.html'
              || /Win(dows )?NT/  && 'e/evilandrude.html'
              || /Win|MSIE|WebTV/ && 'm/MicroslothWindows.html'
              || /Linux/          && 'l/Linux.html'
              || /HP-UX/          && 'h/HP-SUX.html'
              || /SunOS/          && 's/ScumOS.html'
              ||                     'a/AppendixB.html';
     print "Location: $dir/$page\015\012\015\012";
 } </PRE> <P> That kind of switch statement only works when you know the <CODE>&&</CODE> clauses will be true. If you don't, the previous <CODE>?:</CODE> example should be used. <P> You might also consider writing a hash instead of synthesizing a <CODE>switch</CODE> statement. <P> <HR> <H2><A NAME="Goto">Goto</A></H2> <P> Although not for the faint of heart, Perl does support a [perlfunc:goto|goto] statement. <FONT SIZE=-1>A</FONT> loop's <FONT SIZE=-1>LABEL</FONT> is not actually a valid target for a [perlfunc:goto|goto]; it's just the name of the loop. There are three forms: [perlfunc:goto|goto] <FONT SIZE=-1>-LABEL,</FONT> [perlfunc:goto|goto] <FONT SIZE=-1>-EXPR,</FONT> and [perlfunc:goto|goto] <FONT SIZE=-1>-&NAME.</FONT> <P> The [perlfunc:goto|goto] <FONT SIZE=-1>-LABEL</FONT> form finds the statement labeled with <FONT SIZE=-1>LABEL</FONT> and resumes execution there. It may not be used to go into any construct that requires initialization, such as a subroutine or a <CODE>foreach</CODE> loop. It also can't be used to go into a construct that is optimized away. It can be used to go almost anywhere else within the dynamic scope, including out of subroutines, but it's usually better to use some other construct such as [perlfunc:last|last] or [perlfunc:die|die]. The author of Perl has never felt the need to use this form of [perlfunc:goto|goto] (in Perl, that is--C is another matter). <P> The [perlfunc:goto|goto] <FONT SIZE=-1>-EXPR</FONT> form expects a label name, whose scope will be resolved dynamically.
This allows for computed [perlfunc:goto|goto]s per <FONT SIZE=-1>FORTRAN,</FONT> but isn't necessarily recommended if you're optimizing for maintainability: <P> <PRE> goto ("FOO", "BAR", "GLARCH")[$i]; </PRE> <P> The [perlfunc:goto|goto] <FONT SIZE=-1>-&NAME</FONT> form is highly magical, and substitutes a call to the named subroutine for the currently running subroutine. This is used by <CODE>AUTOLOAD()</CODE> subroutines that wish to load another subroutine and then pretend that the other subroutine had been called in the first place (except that any modifications to <CODE>@_</CODE> in the current subroutine are propagated to the other subroutine.) After the [perlfunc:goto|goto], not even [perlfunc:caller|caller()] will be able to tell that this routine was called first. <P> In almost all cases like this, it's usually a far, far better idea to use the structured control flow mechanisms of [perlfunc:next|next], [perlfunc:last|last], or [perlfunc:redo|redo] instead of resorting to a [perlfunc:goto|goto]. For certain applications, the catch and throw pair of [perlfunc:eval] and <CODE>die()</CODE> for exception processing can also be a prudent approach. <P> <HR> <H2><A NAME="PODs_Embedded_Documentation">PODs: Embedded Documentation</A></H2> <P> Perl has a mechanism for intermixing documentation with source code. While it's expecting the beginning of a new statement, if the compiler encounters a line that begins with an equal sign and a word, like this <P> <PRE> =head1 Here There Be Pods! </PRE> <P> Then that text and all remaining text up through and including a line beginning with <CODE>=cut</CODE> will be ignored. The format of the intervening text is described in [perlman:perlpod|the perlpod manpage]. 
<P> This allows you to intermix your source code and your documentation text freely, as in <P> <PRE> =item snazzle($) </PRE> <P> <PRE> The snazzle() function will behave in the most spectacular form that you can possibly imagine, not even excepting cybernetic pyrotechnics. </PRE> <P> <PRE> =cut back to the compiler, nuff of this pod stuff! </PRE> <P> <PRE> sub snazzle($) { my $thingie = shift; ......... } </PRE> <P> Note that pod translators should look at only paragraphs beginning with a pod directive (it makes parsing easier), whereas the compiler actually knows to look for pod escapes even in the middle of a paragraph. This means that the following secret stuff will be ignored by both the compiler and the translators. <P> <PRE> $a=3; =secret stuff warn "Neither POD nor CODE!?" =cut back print "got $a\n"; </PRE> <P> You probably shouldn't rely upon the <CODE>warn()</CODE> being podded out forever. Not all pod translators are well-behaved in this regard, and perhaps the compiler will become pickier. <P> One may also use pod directives to quickly comment out a section of code. <P> <HR> <H2><A NAME="Plain_Old_Comments_Not_">Plain Old Comments (Not!)</A></H2> <P> Much like the <FONT SIZE=-1>C</FONT> preprocessor, Perl can process line directives. Using this, one can control Perl's idea of filenames and line numbers in error or warning messages (especially for strings that are processed with [perlfunc:eval|eval()]). The syntax for this mechanism is the same as for most <FONT SIZE=-1>C</FONT> preprocessors: it matches the regular expression <CODE>/^#\s*line\s+(\d+)\s*(?:\s"([^"]*)")?/</CODE> with <CODE>$1</CODE> being the line number for the next line, and <CODE>$2</CODE> being the optional filename (specified within quotes). <P> Here are some examples that you should be able to type into your command shell: <P> <PRE> % perl # line 200 "bzzzt" # the `#' on the previous line must be the first char on line die 'foo'; __END__ foo at bzzzt line 201. </PRE> <P> <PRE> % perl # line 200 "bzzzt" eval qq[\n#line 2001 ""\ndie 'foo']; print $@; __END__ foo at - line 2001. </PRE> <P> <PRE> % perl eval qq[\n#line 200 "foo bar"\ndie 'foo']; print $@; __END__ foo at foo bar line 200. </PRE> <P> <PRE> % perl # line 345 "goop" eval "\n#line " . __LINE__ . ' "' . 
__FILE__ ."\"\ndie 'foo'"; print $@; __END__ foo at goop line 345. </PRE> <HR>
http://www.perlmonks.org/index.pl/jacques?displaytype=xml;node_id=395
This one’s simple to understand: the entire concept of a do-while loop lies in its name. The loop first “does” something, and then checks a condition. This means that the contents of the loop execute at least once before the condition is checked. If the condition then holds true, the loop executes again. The sequence of events is as follows: Execute loop -> Check condition (if true) -> Execute loop -> Check condition (if false) -> Exit loop. What is the syntax of the do-while loop in C? do { statement(s); } while( condition ); How does a do-while loop work, and where should you use it? As explained above, a do-while loop executes its set of statements first and then checks the condition. Control enters the loop, executes the body, and reaches the condition check at the end of the loop. If the condition is false, control transfers to the subsequent statement and the loop is history. If the condition is true, control goes back to the beginning of the loop and its contents execute all over again. The do-while loop in C is used mainly when you want to execute a set of commands at least once, even if the condition is false. What is the difference between a do-while loop and a while loop? A while loop checks the condition first; if the condition is satisfied, control enters the loop and executes the statements within it. In short, the contents of a while loop never execute even once unless the condition is met. In a do-while loop, on the other hand, the body executes at least once before the condition is checked.
Example of a while loop #include <stdio.h> int main() { int a=0; while(a==1) { printf("I am in the loop"); } printf("I am out of the loop"); } Output I am out of the loop Example of a do-while loop #include <stdio.h> int main() { int a=0; do { printf("I am in the loop"); } while(a==1); printf("\nI am out of the loop"); } Output I am in the loop I am out of the loop What is the difference between a do-while loop, a while loop and a for loop? - The main difference between the three is that in for loops and while loops, the condition is checked before control enters the loop, whereas in a do-while loop, the condition is checked after the loop body has executed. - Another main difference is in the syntax. In for and while loops, there is no semicolon after the condition; in a do-while loop, there is. This makes sense because in a for or while loop, a semicolon right after the condition would be parsed as an empty loop body, so the block that follows would no longer belong to the loop. - In a for loop, initializing and updating the loop variable are part of the syntax. In a while and a do-while loop, they aren’t. Example 1: Write a program in C using a do while loop to print something n times. //print number of times the user wants to print something #include<stdio.h> #include<conio.h> void main() { int i, x=0; printf("Enter number of times you want to print hello\n"); scanf("%d",&i); do { printf("Hello\n"); x++; } while(x<i); getch(); } Output Enter number of times you want to print hello 1 Hello Example 2: Write a program in C using a do while loop to take a number from the user and print the sum of its digits.
//take number from the user and print sum of digits #include<stdio.h> #include<conio.h> void main() { int i,temp,a,sum=0; printf("Enter number\n"); scanf("%d",&i); do { temp=i%10; //separate the digits starting from the RHS and store in temp a=temp; //assign separated digit to a i=i/10; //remove the last digit of the number sum=sum+a; //add the separated digits } while(i>0); printf("sum of digits = %d",sum); getch(); } Output Enter number 234 sum of digits = 9 First, a number is taken from the user and stored in a variable. Then we use the modulo operator to get the last digit as the remainder of dividing the number by 10. Next we drop that digit by dividing the number by 10 (integer division). The separated digit is added to the sum in every iteration of the loop. The condition remains true as long as the repeated division has not reduced the number to 0, which happens only once all the digits have been added.
https://www.technobyte.org/do-while-loop-c-explanation-examples-tutorials/
Re: Consolidation Roots vs. Static DNS/WINS entries From: Frances [MSFT] (v-franhe_at_microsoft.com) Date: Tue, 15 Feb 2005 09:30:44 GMT Hello Glenn, Good to hear from you. According to your message, I understand that you want to migrate a Win2K file server to a Win2K3 file server, and that you want more information about some related concepts, such as the DFS consolidation root. Is this correct? What is your current configuration? Do you have a DFS server now? I assume you have no DFS server, since you only have one file server. All the shares are on the Win2K server. If I have misunderstood, please feel free to let me know. As for the comparison, yes, your understanding is partly correct. If you follow the wizards in FSMT to complete the DFS Consolidation Root Wizard and the File Server Migration Wizard, users who try to connect to \\oldserver\share1 will be automatically redirected to \\newserver\share1. In fact, the target will be \\newserver\oldserver\share1, to be exact, because the old server name is appended to the target share name by default. It is addressed in the following article, under the question "Q: In the File Server Migration Wizard, why is the source file server name appended to the target share name and how do I change this?": Frequently Asked Questions About File Server Migration. To have more information about FSMT, please take a careful look at the FSMT white paper Don suggests. Actually, I am not quite sure about your meaning of "In the standard DFS root, \\dfsserver\dfsroot\share1 will be redirected to \\newserver\share1." Do you mean it will occur after you migrate the file server and reconfigure the DFS server? If this is the case, yes, it is true. As for your main concern, yes, DFS consolidation roots are different from adding static WINS/DNS entries.
When you use a DFS consolidation root, it does not only add a static WINS/DNS entry to redirect to the new server, but also makes some configuration changes for nonclustered and clustered servers. To maintain the original UNC paths of files, the DFS Consolidation Root Wizard and Dfsconsolidate.exe use new functionality in Distributed File System (DFS). These tools also modify Domain Name System (DNS) records and Windows Internet Name Service (WINS) records (through registry entries) for nonclustered servers. For server clusters, the tools create a Network Name resource. These modifications enable users who attempt to access the original UNC path of the files to be redirected to the DFS root server that hosts the namespace. The DFS root server sends the client a referral that contains the current UNC path of the files. Whether the files still reside on the source file server (before the files are migrated) or on the target file server (after the files are migrated), users can continue to access the files by using their original UNC path. Using a DFS consolidation root will be a complicated process. To achieve your goal of migrating a file server, it is best to use FSMT to copy files from the Win2K file server to the Win2K3 server and change the new server's name to the name of the Win2K server. The clients will still be able to visit the shares. This method is easier; however, the destination file server is not a DFS server. If you want to have a DFS server, you need to re-configure it. It depends on your choice. In my opinion, DFS is only a way to manage the files. If you don't have many file servers, it is not necessary to use DFS. In addition, I will offer you an article about DFS for your reference: Overview of DFS in Windows 2000 (KB 812487).
http://www.tech-archive.net/Archive/Windows/microsoft.public.windows.server.migration/2005-02/0320.html
To a smiley face. Might deflect the shilt that is sure to come. Myself, I think you and your people done good.. I'm using FF 3.6, if that makes any difference.. You're suggesting we get paid for image loads? Please. You're a lot smarter than that. (But, then again, I did get a good laugh out of it.) I thought there had to be some motivation to have a single choice for user image storage.. It's not like an avatar is a requirement. If one doesn't want to use an avatar, the default silhouette is always an option. I'd still like to see the TR flag as the default instead. TR used to encourage it. They did a points and (reward?) system once to get people to complete profiles and upload an avatar. I know people's avatars as I would recognize faces in the real world. TR once wanted to encourage use of avatars and completing profiles (as many good sites force you to do before posting) as it cuts down on spammers. People won't post spam links and run if it is a more involved registration process, thus you build a better community with fewer 'post and runs'. Now you have a choice: register with a new host, upload and reformat an avatar, welcoming more junk mail and spam, or don't have an avatar. If the latter, why have them at all? If they are all going to be blank avatars then there's simply no point at all in using them to begin with, just list by user name. What could possibly be so hard about just letting us upload a small pic directly from our own PCs? FB does it and so do any number of other, smaller sites. I set up an account and I *might* find a value in it at some time in the future, but I would have liked to take my time thinking it through. There are sites that I use that I don't care to use a picture of me, others where I use a specific avatar for specific reasons, still others that I would prefer a more professional appearing pic. In my experience, one size rarely fits all. Gravatar allows you to use a different picture on a per-site basis.
It's a good system, and I like it, but it's not for everyone and it's not a good thing that TR forces you to use it if you want to use a picture. J.Ja I know they allow per email address, but I didn't see anything about per site. I just now went and checked a webcomic I've been following, and the comments I wrote in the feedback section... and now they have this gravatar next to them!! And it's not like I've posted there through a login or anything, I just provided my email address in the comment form as required! :0 Talk about nebulous... Yahoo provides throwaway email addresses, so I guess you could put one of those up for gravatar. It'll still be accessible through yahoo, but it's hopefully insulated... Such as Windows Live, Picturetrail etc. you now have to download, reformat, upload images to a new server that you have to sign up for and await a flood of spam and junk mail from. It would have been somewhat palatable if the image size remained the same though. Lose-lose from a user's perspective. If the average new visitor has an attention span of 12 seconds to find what they want, this is dead in the water before the champagne bottle breaks on the bow. Some lifers will take time to muddle through the birds nest of links and panels, but it is not at all inviting for new visitors. Well Jason, for starters the sizes are different, which requires you to resize or do without. Then there is the slight issue of needing to join yet another site and spread what were previously Secure E Mail Addresses about. In the Past I was sure that TR wouldn't be sharing my E Mail Address with anyone and they were the only ones to get most of those, which can not be dumped at a moment's notice. Now there is a site, Gravatar, who has 2 of my E Mail Addresses that would be inconvenient to lose; though not Earth Shattering, it would certainly be Inconvenient. The remainder they will never see and any Accounts that are already open or may be opened in the future will remain Blank.
Currently I'm not overly happy with the need to join yet another site just so I can post at TR with the same features that were in place previously. Maybe I'll get used to the FB Look of things, but at the moment I'm debating whether or not to continue using TR as it's become inconvenient, hard to know where posts will end up particularly in Q&A Threads, Not Possible to Subscribe to Questions/Discussions without adding a post, and it seriously looks way too much like Face Book which I refuse to use. I've also just seen that the Contact TR Link that used to be on every page is no longer there, nor is there a link to report Spam Posts, so I'm assuming that this was just an oversight that will get changed when it gets brought to your attention. :0 Col it is now called flag, and it gives you several options as to how to report the post, cool, I like it. It's still there... at the bottom of the post is a link that says "Flag". J.Ja Here's the link to Gravatar's privacy policy: In short, they promise to "never rent, sell, or otherwise distribute or make public your email address." We wouldn't have used them unless they had such an unequivocal policy. ... just ask Google. Sorry, but there's no defense on this one. Gravatar should be *an* option, not *the* option. Personally, I don't mind Gravatar, but I'm standing up for the folks who have every right to not want to use it. Just let people host elsewhere, and offer default icons for folks as well. J.Ja Have a look. This is exactly why I sign up for as few services as possible. This is now my third or fourth Gravatar account, thanks to the fact I don't use the same email address everywhere for purposes of keeping things separated. I am hoping that I can at least choose avatars per-site from the same email address as Justin mentions. I am still really not happy with TR pulling the avatar from my existing Gravatar account automatically.
A note that an existing Gravatar was detected and an offer to use it would have been just fine, if automated social connectivity was what they were shooting for. But the current behavior is <i>just wrong</i>. I never took the time to set up an avatar before, but I was considering it... until now. Why should I fork over my email address(es) and to some extent my online identity, wade through a bunch of incomprehensible legalese which becomes null & void when someone buys out gravatar next year anyway... when I have 3 (yea, count 'em, 3) web servers in my basement on my own static IPs. I have the ability to host my own avatar [and my own email... another reason to not sign up "just anywhere" - extra spam... ] yet it's not an option to host one's own avatar? A lot of the changes to the site I dislike immensely -- mostly because I just don't like change. Any time that an option to host one's own images isn't available sounds like there's an underlying (and underhanded) motive (even if it's just PHB foolishness) for the decision. I'm just sayin'... "Merch" I've considered it more than once: step 1, host avatar image on my own managed server step 2, give TR my avatar URL step 3, collect the logs as all you other TR readers download my avatar image step 4, (uh.. ok.. I got no step four yet) step 5, Profit!! (this is where figuring out step four comes in ) I'm not really sure how much of a concern it is but it does present an option to collect information on fellow TR readers. I like the new profile layout. It's easier to follow contacts, update newsletter subscriptions, and set preferences. However, I can't seem to find an equivalent of the Updates listing from the old Workspace. Where is a list of the discussions I've subscribed to? I used this list to try to keep up with my subscribed discussions and now I can't find it. Am I looking in the wrong place? Or is construction just not finished yet?
Now go and edit your comment and see how well it is displayed without spaces or paragraphs. There is a mess of links in the bottom bar; scroll down past the ads and other junk in the right sidebar and you may find a page that displays your discussions. It's not uniform; each page seems to offer a unique navigation set. I've already run into it, and we've already been told they're working on it. As for the bottom of the page, most of that has always been there. It was just ignored except in rare cases when the targeted ads or recommended links didn't make sense or were hilarious. I took a good look and it actually makes more sense than it used to. The problem I'm having right now is that I've set my default thread view to Expanded, yet this thread is displayed collapsed and I can find no way to expand it. Was mainly redundant as far more logical navigation was available near the top of the page. not nearly as useful as the old Print/View All, primarily because the old way used the entire screen whilst the new way stays to a limited column size. I do like that I can set my default view to expanded, however, and not have to click each comment to open and read. Try setting the default to collapsed, then back to expanded again. If I select an individual post, I get collapsed. If I click on the thread title, I get my default selections. Weird. @NickNielsen: At the top-right of every page is a menu called "My Account." (It's in the blue bar.) Mouse over that menu and select "My Stuff," and you'll go to a page listing all the discussion threads you have participated in. Also available on that page are all your Q&As. Scarily, mine go back to 2006! It was more accessible and sensibly laid out before. It doesn't matter though, TR always asks for feedback, defends all the changes against complaints and then leaves it as is anyway. The same thing happens every time there are changes, whether people like them or not. The mentality is 'you'll get used to it' whether it works or not.
It's really just about 'user issue' feedback and acting on botched script issues, but if the layout was purple and yellow and deemed absolutely repulsive, TR has no intention of changing it; apparently they know what everyone wants. Of course we can't make everyone happy, but we're working on it. We don't pretend to be perfect and all knowing. If you have constructive feedback, we are listening and want you, the community, to take a role in shaping this site. TR didn't change a lot for a long time due to resources, but this is changing and the site will continue to evolve. Here's your chance at being a productive, civil member of the community. It's your choice. David's right, you know. TR is in the impossible position of trying to please everyone. There is a Google Docs link around page six (assuming you view 50 entries per page) that is attempting to gather up the issues in a way that will enable the tech team at TR to address them without things falling off radar. Oh wait! I have the link right here... The last TR update was so many years ago that I was still using a Nokia cell phone that closely resembled a brick. You didn't get thumbs up, you got points for answering questions... if you were lucky. Peter Spande hadn't been replaced by Jason Hiner yet, and we still wrote our own News. In twenty years of working in technology, the only constant I have seen is the mandate, "Change or die". Oz, to be fair, TR has asked NUMEROUS times for feedback on what people want. I have personally started a few posts requesting such information. VERY often, all anyone got was crickets - INCLUDING me. My opinion? They are doing their best. Realistically? They pay for the bandwidth and they assume all the risk. If they re-design my playground every few years, it's a small price to pay. That Google docs link? There is a page JUST for suggestions. Edit for typos and poor sentence construction Appreciate the perspective.
And, just for the record, I was part of the last big redesign in 2006, too -- when we rolled out Workspace and a lot of community changes. Pete Spande was the general manager of TechRepublic at the time and I was the head of editorial. We're organized a little differently now, but today David's role is similar to Pete's back then (only David's role is even more focused on the site itself). Also, the editorial department is now in a separate branch of the organization, so David and I collaborate closely on the big-picture strategy of the TechRepublic site. He's a super-knowledgeable guy with a lot of great experience in UX and we're lucky to have him on the team. Seriously, TR looks like a FB wannabe now. Did you invite a random sample of members to give their opinions? Did you do any mock-ups? I honestly don't think I can stand it. This is so ugly. Wouldn't it be much more user-friendly to have a just plain text theme for serious business users? There really should be a smiley that indicates sarcasm. While it doesn't directly impact me, I do have a few clients who are Color Blind, so maybe I'm more sensitive to this issue, but I can tell you any person that is Color Blind will simply lose most of the links up top when they are active. That is taking into account if they can see them at all, which most will not be able to. Col I'm color blind and don't have any issues with the page color scheme. I do however dislike that I have to view it with IE as Opera 11 doesn't like it. My Opera 11 doesn't complain. Maybe you have some old CSS file in your cache? Try refreshing your cache and/or restarting Opera. (Oh, there was an update to Opera some 12 hours ago. Why not run it...) Unfortunately, the list of discussions is completely useless now. It shows my own comments; it gives me absolutely nothing about most recent comments to discussions.
No matter how much activity goes on in a discussion, nothing changes on my discussions list unless I make a comment myself -- and even then, all it tells me is that I made a comment. Well, duh, I know I did, because I'm the person who made the comment. I have NO WAY of keeping track of the conversations, short of waiting for the email alert, getting an RSS feed (I really hate RSS feeds for busy conversations, it's inbox overload), or hitting refresh and hoping to notice what's new, because the coloring of visited/unvisited doesn't work. J.Ja as a quick fix for what's been seen and unseen. will review the conversation tracking outside of email and RSS. send me any ideas. Even though I, too, convert my feeds to email, I deposit them into a different folder so I can scan them more quickly. They don't mix well with correspondence. thanks for that perspective, will note it for feature improvements. so you expressed what you don't like, so what elements would be useful to display? thread title, who started it, who last commented? and/or contacts that last commented? link to your comment? Discussion, Contribution, Date, Posts, Votes These are good columns. I think they just need to reflect different information. It seems to track comments users have posted into discussions rather than activity in discussions users are interested in. Discussion: show me the discussion or article title. This allows me to track specific discussions rather than my own comments within discussions. Contribution: optional, though I'm not sure how it relates to discussions. I see how it relates to tracking user comments... just not how it relates to tracking discussions. Date: the date/time of the last comment. Date was there in the past; time was missing. With time now indicated, I can see if comments are newer than my last discussion scan even if my browser does not grey out the discussion link as "visited". Posts: number of posts in the discussion is good.
Number of responses to my posts or number of posts since my last post could be handy information also. As is, it indicates how large a discussion is, though, so fair enough. Votes: maybe this indicates how many votes a person has gotten in the discussion, though votes really refers more to a specific post. Alternatively, maybe leave the display table as is, focusing on user posts. The user can see comments and whether they've been voted up/down, and Posts could refer to replies in the subthread versus total posts in the discussion. So, what happens if a post is added but not under one of my comments' subthreads? Currently, I don't see that update. Potentially, you add an additional column, just wide enough for a "new posts" icon. In this case, I'm getting the current information: my comment title, time/date, replies to it, whether it's being voted up or down - which is all good stuff to know. I also get the indication of new discussion comments outside of my own and subthreads. My other example would be OSNews. The display is by comment, not by discussion, similar to the new TR layout versus the old. Comments are in chronological order, newest to oldest. With each comment, I see votes up/down and number of replies. Links provide direct access to the sub-thread starting with my own comment. What the OSNews system lacks is the overall discussion indicator. One must go back and scan down the discussion hoping to spot new comments in amongst the already-read stuff. Others should chime in, too, but here's what I would like ideally: One page, on which I can find, organized by discussion, an individual link to every comment *that I haven't visited yet* on every discussion in which I have participated. Added bonus: a way to mark one of those comments as "read" without visiting it. That's the key problem that's alienating me; I can't track discussions I'm interested in. Lurking the "Hot discussions" list gives some indication when it's time to check back on chatter but it's not ideal.
Hopefully it's enough of a crutch to keep up while the TR devs finish polishing. That page now only carries my last 100 posts. It doesn't have thread titles, it only updates if I post. I want to know when people post to discussions I've subscribed to. ... mine only go back to 12/26/2010 -- there must be some maximum on the number listed. I've been a member since the previous millennium. for the few contacts that still show in my contacts. Four to five in each category, and categories aren't clickable to display a full list of posts. Again, no indication per post what thread was posted in. I had a totally acerbic reply, but I was cast into some netherworld, now having to log in again. Is this an anomaly, or is it designed? several times so far. Good question. Can't seem to edit that last one... I'd really like to see who made the last post in a discussion or question. Frankly, seeing the same new name ruling the 'last post' column was a dead giveaway that the newbie was a spammer and made it much easier to notice them, as well as find their profile and get rid of their crud. I've also noticed that when viewing a profile and clicking on a spammer's post, I'm taken to the head of the thread, not the post I want to mark. In lengthy discussions/questions, this is going to make it a huge PITA to find and get rid of spam posts. I'll have more in a few days. Pretty sure you can bet on that. I can just see people posting lengthy dialogue, properly spaced for easy reading, and then going in to fix a typo and all paragraph formatting is lost. Now that's a PITA! Since a tweak from way back, editing a comment with HTML entities has required re-typing the entity codes. This may have occurred with some other HTML formatting as well. Haven't even tried with this forum software yet. Voting kicks me out and makes me have to log back in. That's really odd; it seems like this kicked on/off thing is only happening to some people?
J.Ja On the old TR, this one, and other sites when I swapped to IE8 and tightened security and changed some cookie settings. I don't know if that's what would be causing it for others, but that's what causes it for me. It's an irritation, but I'll put up with it in the interest of a little bit tighter security. There's probably some specific cookie you need to accept to prevent that behavior -- something that registers as a "third party site" origin for the cookie according to your browser, perhaps, even though it is likely to be some TechRepublic-associated property (just with a different domain name). It seems that nobody designs Websites to make things easy for people who care about the technologies they use. as some have noticed we changed domains with the redesign, from tr.com.com to tr.com, which may be causing some issues. or you have a legit bug! Try clearing your cookies and let us know if you still see login problems. please let us know your browser, OS and steps to reproduce. thanks!! ... after the content issues are sorted out. First and foremost, I know how much work goes into something like this... you guys know that I'm real familiar with what these kinds of projects are like. I don't expect them to EVER be perfect or even go smoothly. The redesigned site I rolled out a few days ago was a year of work, and I'm very glad that my name wasn't attached to most of the project because I'm not thrilled with it... The overall job done here is something to be proud of, and a lot of the feedback folks have been giving was obviously taken into account, and it's appreciated. So think of this as a "punch list", not as a set of "wow, this sucks!" criticisms. One of the things that's really harsh on the eyes is this embedded effect on the text for all of the graphics. It's adding a slight drop shadow on the inside of the "bezel" that makes it hard to see. Throughout, many of the color choices are making it VERY hard to read.
White numbers on a light green button, for example (the comment counts), or a light green on a light grey background (like right above the box I'm typing in right now). White text on light blue buttons. All of these buttons need a heavier weight font so that they are readable. The overall issue is one of contrast. The left arrows on the "Media Gallery" to the right are invisible for all intents and purposes... and I'm using a very high end monitor! On the other hand, I like the increased font size very much! The content areas are, for all intents and purposes, SMALLER. Why? Because the font size is cranked to a readable level (finally). I have no clue how I am supposed to include code samples in articles or comments when there is less than 100 characters' worth of width to work with. At least I use C# now, where the whitespace doesn't matter, but for Ruby, Python, VB.NET, etc. code... let's just say it's not going to encourage folks to post sample code. The top few navigation bars are totally illegible. The highlighted item in the "Blogs/Discussions" nav covers up the pipe character to the right of the button. I like the "Collapse" on the giant buttons over the article. I never used them, and it's nice to be able to get those 150 pixels back. The navigation, as is, is very poor. It will be extremely improved if the "blog" content goes into the main bins ("Mobile Development", "Leadership", etc.). J.Ja I had to do a log out/log in to post. Maybe because I hadn't logged into the new system yet. J.Ja It looks like, at least in the blog posts, that code samples are actually displaying fairly nicely (in a colored box in a different font). That's appreciated! Is there a way to tag content in a comment to be code too, like on Stack Overflow and MSDN? J.Ja I am not a coder. I am not a coder. I am not a coder.
But I know how to copy and paste, and use the "samp" tag.

<samp>
namespace Microsoft.Crm.Sdk.Utility {
    public class CrmServiceUtility {
        public static CrmService GetCrmService() {
            return GetCrmService(null, null);
        }
        public static CrmService GetCrmService(string organizationName) {
            return GetCrmService(null, organizationName);
        }
        /// <summary>
        /// Set up the CRM Service.
        /// </summary>
        /// <param name="organizationName">My Organization</param>
        /// <returns>CrmService configured with AD Authentication</returns>
        public static CrmService GetCrmService(string crmServerUrl, string organizationName) {
</samp>

Looks good. How are we supposed to know about this? There needs to be a link near the comments box for formatting help. I've never used the formatting because I have no way of finding out how to do the formatting. J.Ja <s: </samp> Fail ... it is completely unacceptable. It cuts off long lines. J.Ja : </pre> <code>: </code> Neither "pre" nor "code" worked; both lost the formatting. What else can we do? Every site I've been to has a way of putting in sample code that does nice formatting and coloring and such. Just use that system, whatever it is, please. J.Ja Every site has their own list of what tags do and do not work. Some sites don't allow HTML, but prefer their own markup language (like Markdown). You might know about using pre, but not everyone else does. And the pre tag formats them wrong anyway... look what happens to long lines, they run off the edge. Pre is truly the wrong tag. The *right* tag for code samples would do the following:
* Maintain spaces, indentation, tabs, etc. as typed.
* Wrap overflow text.
* Use a fixed-width font.
* Set the block in some formatting that makes it obviously separate from the main text.
J.Ja Go to StackOverflow or MSDN and see how they do it. It's not just a matter of what we, the users, should be doing. It's a matter of what TR needs to do.
They need to provide a facility to post code samples properly, and they need to provide explanatory text. I'm not avoiding your question, I already answered it above. :) J.Ja I actually kinda like non-wrapping, as long as there's a 100-column-width box to use (or more), guaranteed for any reasonable display resolution (800x600 and up) and any reasonable screen zoom (150%), because it encourages people to write code with lines of a reasonable length. Then again, I don't care much about Java, C#, and VB.NET -- and I know line lengths tend to need to be <strong>much</strong> longer in those languages. I guess I'm just being a little self-centered when I say "So what?" about line wrapping. ... I hope they'll publish detailed instructions in an obvious location on how to post code. It would be really cool if they adopted an approach like the SyntaxHighlighterEvolved plugin, where you specify a code section with an optional language attribute, and it gets syntax-colored as well. The only thing I don't like about that plugin is that it relies on JavaScript being enabled. Seems like the processing could be done server-side instead. Ideally, it'd be something that allows dynamic updates (JavaScript driven, in other words) but degrades gracefully so that a page reload will achieve the same effect if JavaScript is restricted or unavailable for some reason. Unfortunately, it seems like almost everybody in the world has forgotten about the concept of graceful degradation of interfaces. The full selection of display preferences and third party tracking options appear only when I enable scripts. It looks like most of the site runs fine with full NoScript in effect though. @SinisterSlay - your code here is formatted correctly with the pre tag! It's cut off at the ends. It needs to wrap. In other testing, Chip's shown that pre DOESN'T work the way we need it to. J.Ja In the old TR theme, scroll bars were automatically added with the pre tag to fix the chopping problem.
The votes shown in "My Stuff" seem to have no relationship to actual activity. My picture in my bio box on my articles is missing. In the bio box for authors, the normal right-click menu is broken. J.Ja Appear to be the sum of votes for the entire thread, not the votes for a particular post. It still makes no sense to the user why it would work that way... the only metric that anyone would care about is the number of votes on the post, not the thread. J.Ja I can down vote my own comment. I can up vote my own comment. Both are broken. Down voting a comment when it is at "0" seems to not update the display to say "-1", but upvoting does. J.Ja I haven't seen any activity with the "up" and the "down" vote. We could all go down in flames when it starts working. The apostrophes are still messed up in many places. This is a Unicode issue, the system needs to replace the various apostrophe characters with the single one that HTML likes. A good example is the header on the poll here: J.Ja The problem is all those asinine "smart quotes" on the Web. Jeebus cries, people, this is a textual medium perused by technically oriented people; we need characters that can actually be trivially duplicated from a standard QWERTY keyboard whenever at all reasonable. Copying text from articles and pasting them in comments has been kind of a problem because of auto-replacement of totally printable characters with totally unprintable characters off and on here at TR, and it drives me up the wall. People are bending over backwards to do things that shouldn't be done in the first place, putting ungodly amounts of effort into implementing systems to automatically change text so it's less usable. It's absurd. <a href=""><em>Smart Quotes Considered Harmful</em></a> Read it for a start on understanding the problem, if you don't already get it. It's especially important in the P&D section to not use smart quotes, because you need to be able to copy/paste into code and have it work. 
Many languages have a huge distinction between the apostrophe character and the backtick, which is often what comes out when copying smart quotes and smart apostrophes. This is even more important for *Nix admins, now that I think about it... J.Ja Furthermore, the font used for rendering code samples needs to make clear distinctions between ' and `, as well as between all other characters -- especially between members of the following sets: <code>0O</code> <code>|l1</code> <code>S$</code> <code>.,</code> <code>;:</code> Personally, I prefer when sites leave that up to the browser's interpretation of the <code> tag -- I can set that to Deja Vu Sans Mono and be happy. That would involve just setting it to <tt>monospace</tt> in the CSS, and not setting any other options. Unfortunately, most users just use whatever defaults came with the browser, which in many cases is downright hideous -- and would totally break up the look and feel of a Website. In general, though, that's much less a problem for code blocks than it is for other fonts being set to <tt>sans-serif</tt> or <tt>serif</tt>. In fact, I'd say that usually serif fonts are the biggest problem, because of how many people go with the default, and on MS Windows that's generally Times New Roman (a downright awful font for digital display). It's great for narrow-column justified text in black-on-white print media, though.
Alex DiFazio - 579 Points

sys.exit code not working?

Can anyone give me a tip on why this code isn't working? From the guidelines, I feel like I am giving exactly what is needed. What am I missing?

import sys

def start_movie():
    input('Do you want to start the movie? Y/n').lower()
    if input == 'n':
        sys.exit()
    else:
        print("Enjoy the show!")

1 Answer

behar - 10,781 Points

Hey Alex! You're close, but you have a couple of issues. One is that you don't need to define this as a function; just write the code. Secondly, input will return the value that it received, so you can't say:

input("Input something")
if input == "whatever"

input is never the same as something; instead you should store the returned value from input in a variable, and then check if that variable is the same as "whatever". So your final code should look something like:

import sys

returned_value = input('Do you want to start the movie? Y/n').lower()  # Added a variable for whatever input returns
if returned_value == 'n':
    sys.exit()
else:
    print("Enjoy the show!")

Hope this helps!
From Documentation

Purpose

In the previous small talks (Integrating Google Maps and Integrating FCKeditor), we introduced how to add a new component to ZK. In this article, we'll describe how to add a new component for Yui-Ext and how the ZK client engine interacts with the Ext engine.

Live Demo

A few days ago, we wrote an article regarding the demo example of the yuiextz component, which can be found in the smalltalk New Features of yuiextz 0.5.1.

Introduction

To integrate Ext Grid, we need to prepare a data format supported by Ext Grid, such as an HTML table, XML data, a data array, and so forth. So far, we only render the output of the Y-Grid component into an HTML table according to the format of Ext Grid. The output is generated by the child components of Y-Grid, including grid, columns, column, rows, and row. Thus, we have to prepare corresponding DSP (as JSP) files for these components to generate the required HTML tags.

Architecture

The interaction between Ext JS and the ZK client engine is described as follows. In this example, we should not interact with the DOM tree directly, to avoid conflict between the ZK client engine and the Ext engine. Thus, we should only interact with the Ext engine instead of the DOM tree. Now we'll explain it to you step by step.

The difference between ZK client engine and Ext engine

- ZK Client Engine: The key part of the ZK client engine uses the uuid of each component to identify the ZK component at the client side. When the visited page is loading, the ZK server engine uses the DSP language to produce the raw HTML text and puts the uuid of each component into the corresponding HTML tag as its id (one to one). Therefore, the ZK client engine uses the HTML id to manipulate the JavaScript object, and uses that same id to notify the corresponding server-side object from browser to server.
- Ext JS: Ext JS uses JavaScript objects to store its data model.
In this case, it uses the Ext.grid.Grid object to operate the grid component in JavaScript. In this grid object, it uses the Ext.data.Record object to cache the row data and the Ext.grid.ColumnModel object to store the header information.

The four steps to register a new component

To implement a new ZK component, we need to prepare four types of files:

- The lang-addon.xml file: Register the new ZK component.
- The Java files (*.java): Component as Java objects.
- The template files (*.dsp): Template to generate final HTML tags.
- The JavaScript files (*.js): Glue logic to link the JavaScript component to the Java class.

Register a new component in the lang-addon.xml file

We have to register the new component in the lang-addon.xml file so that the ZK loader will know how to identify it. In the file we define the following elements to describe the new component.

...
<javascript-module ...
<component>
    <component-name>grid</component-name>
    <component-class>org.zkforge.yuiext.grid.Grid</component-class>
    <mold>
        <mold-name>default</mold-name>
        <mold-uri>~./yuiextz/grid.dsp</mold-uri>
    </mold>
</component>
...

- javascript-module: Specifies the dynamic JavaScript source (load-on-demand). This JavaScript source is used by the Y-Grid component.
- component-name: Defines the component name as grid.
- component-class: Defines the component class to be used.

Note: You could define as many components as you want.

A Java object of Component: Grid.java

The Java object interacts with the Ext JS Grid at the client side via the smartUpdate() method, which manipulates the JavaScript grid object. Here we choose one function as our example: the track-mouse-over function.
public boolean isTrackMouseOver() {
    return _trackMouseOver;
}

public void setTrackMouseOver(boolean trackMouseOver) {
    if (isEditable() && trackMouseOver)
        throw new UiException("Appliable only to the uneditable type: " + this);
    if (_trackMouseOver != trackMouseOver) {
        _trackMouseOver = trackMouseOver;
        smartUpdate("z.trackMouseOver", String.valueOf(trackMouseOver));
    }
}

In the ZK framework, we usually use the smartUpdate() method to send a command from the server to the browser side. It is used by the ZK server to send commands back to the browser in an Ajax response. The first argument of smartUpdate is the attribute name or the command name. The second argument is a String that is the new value of the attribute or the associated data of the command to be sent to the browser. E.g., smartUpdate("z.trackMouseOver", "true") will send the z.trackMouseOver command along with the boolean string ("true") to the browser and tell the Ext Grid at the browser side to change its status to true. Notice that for each HTTP request, a command with the same name will be sent only once. Thus, within one event handling (one XMLHttpRequest), if the Java API calls smartUpdate with the same command name more than once, only the last one is sent.
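The "one command per name per request" rule is easy to picture with a small, self-contained sketch. This is only an illustration of the behavior described above, not ZK's actual implementation; createUpdateBuffer and flush are hypothetical names:

```javascript
// Illustrative sketch of smartUpdate's per-request de-duplication:
// commands are keyed by name, so within one response cycle a later
// call with the same name overwrites the earlier value, and only the
// last one is "sent" to the browser when the buffer is flushed.
function createUpdateBuffer() {
  const pending = new Map();
  return {
    smartUpdate(name, value) {
      pending.set(name, value); // same name: last write wins
    },
    flush() {
      // simulate sending the Ajax response, then reset the buffer
      const out = Object.fromEntries(pending);
      pending.clear();
      return out;
    },
  };
}

const buf = createUpdateBuffer();
buf.smartUpdate("z.trackMouseOver", "false");
buf.smartUpdate("z.trackMouseOver", "true"); // overwrites the first call
console.log(buf.flush()); // { 'z.trackMouseOver': 'true' }
```

The point of the sketch is only the keying by command name: however many times setTrackMouseOver flips during one event handling, the browser sees a single z.trackMouseOver command carrying the final value.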
- z.type: names the JavaScript object using the pattern "PackageName.FileName.JavascriptObjectName".

Note: the name of the JavaScript object must carry a zk prefix, such as zkExtGrid, and yuiextz.grid is used to locate the js file under /web/js/yuiextz/grid.js via the Java classpath.

The catcher of the smartUpdate command

As mentioned for Grid.java, we use the smartUpdate() method to control the behavior of the grid. The ZK client engine uses the JavaScript object predefined in the z.type property to catch smartUpdate commands sent from the server to the browser, via the setAttr function shown below.

zkExtGrid.setAttr = function (cmp, name, value) {
    if (name == "z.trackMouseOver") {
        var grid = zkau.getMeta($real(cmp)).grid;
        if (value == "true") {
            grid.on("mouseover", grid.view.onRowOver, grid.view);
            grid.on("mouseout", grid.view.onRowOut, grid.view);
        } else {
            grid.un("mouseover", grid.view.onRowOver, grid.view);
            grid.un("mouseout", grid.view.onRowOut, grid.view);
        }
        return true;
    }
    ...

In the setAttr() function, we use an if/else statement to check which command name we need to handle. In this example, we catch the name "z.trackMouseOver" and call the Ext Grid API to register or unregister the event listeners.

The JavaScript component life cycle: init and cleanup

The HTML tags generated by the ZK dsp language are not recognized by Ext Grid, so we need to reorganize the generated result during initialization of the ZK JavaScript component life cycle: the ZK client engine invokes the component's init() function, where we rebuild the Ext Grid HTML structure.

zk.addModuleInit(function () {
    zkExtGrid.init = function (cmp) {
        ...
        var grid = getZKAttr(cmp, "gridType") == "tableGrid" ?
            new zkExtGrid._tableGrid(cmp, zkExtGrid._config(cmp)) :
            new zkExtGrid._editorGrid(cmp, zkExtGrid._config(cmp));
        grid.container.id = cmp.id;
        grid.container.dom.id = cmp.id;
        grid.node = cmp; // store the first HTML DOM node of the grid component.
        grid.view = new zkExtGridView(); // reorganize the data structure of the HTML
        cmp.grid = grid;
        zkau.setMeta($real(cmp), cmp); // store the relation of cmp and grid
        ...
    };
});

In the code above, you can see that we use the zkExtGridView() function to reorganize the Ext Grid HTML structure. When the component is about to be destroyed, the ZK client engine invokes its cleanup() function to clean up the component, as below:

zkExtGrid.cleanup = function (cmp) {
    var gcmp = zkau.getMeta($real(cmp));
    if (gcmp) gcmp.grid.destroy();
};

In this sample, we use the destroy() function of Ext Grid to clean up the component entity.

Attach the UUID of the ZK component to Ext Grid

The following code fragment is from the zkExtGridView() function.

tpls.master = new Ext.Template(
    '<div class="x-grid" hidefocus="true">',
    '<div id="' + grid.container.id + '!topbar" class="x-grid-topbar"></div>',
    '<div id="' + grid.container.id + '!scroller" class="x-grid-scroller"><div></div></div>',
    '<div id="' + grid.container.id + '!locked" class="x-grid-locked">',
    '<div class="x-grid-header">{lockedHeader}</div>',
    '<div class="x-grid-body">{lockedBody}</div>',
    "</div>",
    '<div id="' + grid.container.id + '!real" class="x-grid-viewport ' + attrs.cls + '"' + attrs.attrs + ' ' + attrs.style + ' >',
    '<div class="x-grid-header">{header}</div>',
    '<div class="x-grid-body">{body}</div>',
    "</div>",
    '<div id="' + grid.container.id + '!bottombar" class="x-grid-bottombar"></div>',
    '<a href="#" class="x-grid-focus" tabIndex="-1"></a>',
    '<div id="' + grid.container.id + '!proxy" class="x-grid-resize-proxy"> </div>',
    "</div>"
);

As you can see, we manually glue the id of the ZK component into Ext Grid's template. The ZK client engine can use this id to find the ZK client component, and doing it this way does not break the behavior of Ext Grid.

Send a command from browser to server: the zkau.send() method

The event registration code is done in the zkExtGrid.init method.

zk.addModuleInit(function () {
    zkExtGrid.init = function (cmp) {
        ...
        grid.colModel.on('columnmoved', zkExtGrid.onColumnMoved, grid.colModel);
        ...
    };
});

We registered a columnmoved event in the example above. Ext Grid fires this event when the user moves a column on the grid, and we then use the ZK client engine to send the event to the ZK server component as follows:

zkExtGrid.onColumnMoved = function (cm, oldIndex, newIndex) {
    var cmp = cm.cmp;
    zkau.send({uuid: cmp.id, cmd: "onColumnMoved",
        data: [cm.getDataIndex(newIndex), oldIndex, newIndex]},
        zkau.asapTimeout(cmp, "onColumnMoved", 10));
};

The zkau.send function sends a command, with its data array, from the browser to the server via an Ajax request. It takes two arguments. The first is an object containing uuid, cmd, and data: uuid is the id of the corresponding JavaScript component, cmd is the name of the command on the server side, and data is the data array needed for this event. The second argument is a number of milliseconds specifying how soon the command should be sent to the server.

The catcher of the onColumnMoved command on the server side

The catcher of the command on the server side (the ZK update engine) first wraps the command into an AuRequest object and then, based on its command name (in this case, onColumnMoved), delegates to the specific command processor. Here, to process the passed-back onColumnMoved command, we need a ColumnMovedCommand processor to update the order of the moved column.

public class ColumnMovedCommand extends Command {
    ...
    protected void process(AuRequest request) {
        final Component comp = request.getComponent();
        final String[] data = request.getData();
        ...
        final Grid grid = (Grid) comp;
        final Columns cols = grid.getColumns();
        final Rows rows = grid.getRows();
        final Desktop desktop = request.getDesktop();
        final Column col = (Column) desktop.getComponentByUuid(data[0]);
        final int oldIndex = Integer.parseInt(data[1]);
        final int newIndex = Integer.parseInt(data[2]);
        try {
            ((Updatable) (cols).getExtraCtrl()).setResult(Boolean.TRUE);
            if (newIndex == cols.getChildren().size() - 1) {
                cols.insertBefore(col, null);
            } else {
                cols.insertBefore(col, (Column) cols.getChildren().get(
                    oldIndex < newIndex ? newIndex + 1 : newIndex));
            }
        } finally {
            ((Updatable) (cols).getExtraCtrl()).setResult(Boolean.FALSE);
        }
        ...
        Events.postEvent(new ColumnMovedEvent(getId(), comp, col,
            Integer.parseInt(data[1]), Integer.parseInt(data[2])));
    }
}

This class extends Command and overrides the process() method to handle the passed-in AuRequest. As you can see, the target component has already been fetched back by its uuid when the command was wrapped into an AuRequest object. The data is converted into a String array, in the same form it was prepared in the zkau.send() JavaScript method. After updating the order of the moved column, it posts a ColumnMovedEvent to the ZK event queue so application developers can register for the event and handle it.

Notice that to make the ZK update engine aware of a new command processor, you have to register it in ZK's command-processing map. The onXxx command name is what ties everything together.

public class Grid extends XulElement {
    ...
    static {
        new ColumnMovedCommand("onColumnMoved", 0);
    }
    ...
}

Summary

We welcome your contributions to integrate more Ext JS components into the ZK framework. If you run into any problems, feel free to ask on the ZK forum.
http://books.zkoss.org/wiki/Small%20Talks/2007/July/Behind%20The%20Scene:%20Integrating%20Ext%20Grid
Inheriting views in custom module

I'm making a custom module that requires modified email templates, so I'm starting off by inheriting from the email_template module. I've created a view spec and it's now adding my new template into the sales menu (a temporary location), but when I try to create a new entry it doesn't use the form I've created/inherited for it. It creates what seems to be a default form.

The steps I've taken:

- I've made a new model that inherits from `email.template`
- I've made an "act_window" that references my new model and the original tree and search view for `email_template`
- I've created a new form for my own model which inherits from `email_template.email_template_form`, set to priority 5

I've tested and I can see that the form is used when I reference it directly in my `act_window` instead of the tree view. But then I get a default tree view instead, as well as entering at create-a-new from the menu instead of the listing. Is it not possible to inherit things in the way I've been thinking, or am I missing something obvious here? :)

I'm developing this off the 8.0 branch on GitHub and I'm learning to develop from the documentation for the 8.0 branch as well.
(No training in my part of the world until end September.) Included below is the code I'm using for my tests; please say if I should paste anything more.

My model:

```
from openerp.osv import osv

class sale_order_email_template(osv.osv):
    _name = 'sale.order_email_template'
    _inherit = 'email.template'
    _defaults = {}
```

My view:

```
<?xml version="1.0"?>
<openerp>
    <data>
        <record model="ir.ui.view" id="email_template_form">
            <field name="name">sale.order_email_template.form</field>
            <field name="model">sale.order_email_template</field>
            <field name="priority" eval="5"/>
            <field name="view_type">form</field>
            <field name="inherit_id" ref="email_template.email_template_form"/>
            <field name="arch" type="xml">
                <xpath expr="//page[@string='Email Configuration']" position="replace"/>
            </field>
        </record>

        <record model="ir.actions.act_window" id="action_email_template_tree_all">
            <field name="name">Templates</field>
            <field name="res_model">sale.order_email_template</field>
            <field name="view_type">form</field>
            <field name="view_mode">form,tree</field>
            <field name="view_id" ref="email_template.email_template_tree" />
            <field name="search_view_id" ref="email_template.view_email_template_search"/>
        </record>

        <menuitem id="menu_sale_order_email_templates" parent="base.menu_sales" action="action_email_template_tree_all" sequence="20"/>
    </data>
</openerp>
```
https://www.odoo.com/forum/help-1/question/inheriting-views-in-custom-module-59195
12-A.2: chroot Jail

What is a chroot Jail?

A chroot on Linux is operating-system-level virtualization and is often used instead of virtual machines to create multiple isolated instances of the host OS. This is kernel-level virtualization and has practically no overhead compared to virtual machines, which are application-layer virtualization. As a result it provides a very good method for creating multiple isolated instances on the same hardware. A virtual machine (VM), by contrast, is a software implementation of a machine; VMs often exploit what is known as hardware virtualization to render virtual images of a working operating system.

How to use chroot

The basic command is chroot <newroot> <command>, which runs the given command with its root directory changed to <newroot>. The steps below create a mini-jail for the 'bash' and 'ls' commands:

1. Create a directory which will act as the root of the command.

pbmac@pbmac-server $ mkdir newroot
pbmac@pbmac-server $ cd newroot

2. Create all the essential directories for the command to run. Depending on your operating system, the required directories may change. Logically, we create these directories to hold a copy of the required libraries. To see which directories are required, see Step 4.

pbmac@pbmac-server $ mkdir -p bin lib64/x86_64-linux-gnu lib/x86_64-linux-gnu

3. Run the 'which' command to locate the binaries, and copy them into the jail's bin directory:

pbmac@pbmac-server $ unalias ls   # Required only if you have aliased the ls command
pbmac@pbmac-server $ unalias bash # Required only if you have aliased the bash command
pbmac@pbmac-server $ cp $(which ls) ./bin/
pbmac@pbmac-server $ cp $(which bash) ./bin/

4. Copy the appropriate libraries/objects: for the executables in our newroot directory to work, we need to copy the appropriate libraries/objects into the JAILED directory. By default, the executable looks for them at locations starting with '/'.
To find the dependencies we use the 'ldd' command:

pbmac@pbmac-server $ ldd bin/bash

Then copy the libraries it lists:

pbmac@pbmac-server $ cp /lib/x86_64-linux-gnu/libtinfo.so.5 lib/x86_64-linux-gnu/
pbmac@pbmac-server $ cp /lib/x86_64-linux-gnu/libdl.so.2 lib/x86_64-linux-gnu/
pbmac@pbmac-server $ cp /lib/x86_64-linux-gnu/libc.so.6 lib/x86_64-linux-gnu/
pbmac@pbmac-server $ cp /lib64/ld-linux-x86-64.so.2 lib64/

So ALL of the necessary files are copied into the new location to be part of the chrooted environment. This is necessary because a user running in the chrooted environment cannot access the file system outside of the newroot folder.

5. Sudo chroot: run this command to change the root to the JAILED directory, along with the path to the shell. By default it will try to load the '/bin/sh' shell.

pbmac@pbmac-server $ cd ..
pbmac@pbmac-server $ sudo chroot newroot /bin/bash

You might face this error while running the chroot command:

chroot: failed to run command `/bin/bash': No such file or directory

This can happen for two reasons: either the file does not exist (which is obvious), or a library it needs fails to load or is missing. Double-check that the libraries are in the correct location.

6. A new shell should pop up: it's our newroot bash. We currently have only two commands installed, bash and ls. Fortunately cd and pwd are built-in commands of the bash shell, so you can use them as well. Roam around the directory; try accessing 'cd /../' or something similar. Try to break out of the jail; you probably won't be able to.

To exit from the jail:

pbmac@pbmac-server $ exit

The most important and interesting part is that when you run

pbmac@pbmac-server $ ps aux

and find the process, you'll see that there is only one process:

root 24958 … 03:21 0:00 /usr/bin/sudo -E chroot newroot/ /bin/bash

Interestingly, processes in the newroot shell run as simple child processes of this shell.
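The copy steps above (steps 3 and 4) can be automated. The following Python sketch is not from the original article; the function names (parse_ldd, copy_into_jail, build_jail) are my own, and it assumes a GNU/Linux system where ldd is available:

```python
import re
import shutil
import subprocess
from pathlib import Path

def parse_ldd(output):
    """Extract absolute library paths from the text output of `ldd`."""
    paths = []
    for line in output.splitlines():
        line = line.strip()
        m = re.search(r'=>\s*(/\S+)', line)
        if m:
            # e.g. "libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)"
            paths.append(m.group(1))
        elif line.startswith('/'):
            # e.g. "/lib64/ld-linux-x86-64.so.2 (0x...)"
            paths.append(line.split()[0])
    return paths

def copy_into_jail(src, newroot):
    """Copy src to the same relative path under the jail root."""
    dest = Path(newroot) / Path(src).relative_to('/')
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest)

def build_jail(binary, newroot):
    """Copy a binary plus every shared library ldd reports into newroot."""
    copy_into_jail(binary, newroot)
    out = subprocess.run(['ldd', binary], capture_output=True, text=True).stdout
    for lib in parse_ldd(out):
        copy_into_jail(lib, newroot)
```

Calling build_jail('/bin/bash', 'newroot') would mirror steps 3 and 4 in one go; entering the jail still requires the sudo chroot step.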
All the processes inside the JAILED environment are just simple user-level processes in the host OS, isolated by the namespaces provided by the kernel. Thus there is minimal overhead, and as an added benefit we get isolation.

Adapted from: "Linux Virtualization – Chroot Jail" by Pinkesh Badjatiya, Geeks for Geeks, licensed under CC BY-SA 4.0.
https://eng.libretexts.org/Bookshelves/Computer_Science/Operating_Systems/Linux_-_The_Penguin_Marches_On_(McClanahan)/12%3A_Linux_Systems_Security/1.02%3A_chroot_Jail
I have a List of objects. I want to iterate through this list of objects, process some subset of objects based on a condition, and finally create a new list ...

I have a Candidates class that holds Candidate objects, as follows:

import java.util.*;
public class Candidates<Candidate> extends ArrayList<Candidate> {
    public int getTotalVotesCount() {
        Iterator it = this.iterator();
        int ...

I'm iterating an ArrayList of clients named clientList that contains clients from the class Client (user, pass):

ArrayList<Client> clientList = new ArrayList<Client>();

I am trying to populate an ArrayList with SQL update statements to pass to an update object while still adding statements to the ArrayList. Is this possible?

A little background for this program: it will have five different versions all doing nearly the same thing. I have decided threading each variation would speed things up a bit. Each variation will get ...

Hi all, I am creating a tree structure using a JSON object. I have a class (a bean class called Tree) with 3 attributes, namely id, parent id, and title. In another Java class I connect to a database, retrieve these values and store them in an ArrayList. Now I have to assign individual attributes like id, parent ...
http://www.java2s.com/Questions_And_Answers/Java-Collection/ArrayList/Iterate.htm
Suppose we have a string s containing only digits. We have to check whether we can split s into two or more non-empty substrings such that the numerical values of those substrings form a non-increasing sequence and the difference between the numerical values of every two adjacent substrings is 1. For example, if the string is s = "0080079" we can split it into ["0080", "079"] with numerical values [80, 79]. The values are in descending order and adjacent values differ by 1, so this split is valid. We have to check whether it is possible to split s as described above or not.

So, if the input is like s = "080076", then the output will be True because we can split it like ["08", "007", "6"], giving the numeric values [8, 7, 6].

To solve this, we will follow these steps −

- Define a function dfs(). This will take s, pre, idx, n
  - if pre is not -1 and the integer value of s[from index idx to end] is the same as pre - 1, then return True
  - for i in range 1 to n-idx-1, do
    - curs := substring of s [from index idx to idx+i-1]
    - cur := curs as a number
    - if pre is the same as -1, then
      - if dfs(s, cur, idx+i, n) is true, then return True
    - otherwise, if cur is the same as pre - 1 and dfs(s, cur, idx+i, n) is true, then return True
  - return False
- From the main method, do the following:
  - n := size of s
  - if n <= 1, then return False
  - return dfs(s, -1, 0, n)

Let us see the following implementation to get a better understanding −

def dfs(s, pre, idx, n):
    if pre != -1 and int(s[idx:]) == pre - 1:
        return True
    for i in range(1, n-idx):
        curs = s[idx: idx+i]
        cur = int(curs)
        if pre == -1:
            if dfs(s, cur, idx+i, n):
                return True
        else:
            if cur == pre - 1 and dfs(s, cur, idx+i, n):
                return True
    return False

def solve(s):
    n = len(s)
    if n <= 1:
        return False
    return dfs(s, -1, 0, n)

s = "080076"
print(solve(s))

Input: "080076"
Output: True
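As a cross-check on the recursive solution, the same question can be answered by brute force over every way of cutting the string, which is fine for short inputs. This alternative is my own addition, not part of the original solution:

```python
from itertools import combinations

def solve_bruteforce(s):
    # Try every set of cut positions; accept if some split produces
    # at least two parts whose values each decrease by exactly 1.
    n = len(s)
    for k in range(1, n):  # k cuts give k + 1 parts
        for cuts in combinations(range(1, n), k):
            bounds = [0, *cuts, n]
            parts = [int(s[a:b]) for a, b in zip(bounds, bounds[1:])]
            if all(a - b == 1 for a, b in zip(parts, parts[1:])):
                return True
    return False

print(solve_bruteforce("080076"))  # True, via the split 08 | 007 | 6
```

Both functions should agree; for example, both return True for "0080079" and False for "1234".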
https://www.tutorialspoint.com/program-to-check-whether-we-can-split-a-string-into-descending-consecutive-values-in-python
Declaring the Bean class in the faces-config.xml file

Congratulations (maocb, November 15, 2011 at 3:21 AM): Excellent, very easy. Explain, how get in Bean in FacesConfig.
http://www.roseindia.net/discussion/24007-Declaring-the-Bean-class-in-the-faces-config.xml-file.html
On 07/03/2018 17:08, Ash Gokhale wrote:
> On Wed, Mar 7, 2018 at 6:51 AM, Kurt Jaeger <p...@opsec.eu> wrote:
>> Hi!
>>>> I've made peace with poudriere in 10.4, 11.1 and 12-current jails with
>>>> USES= uidfix, and also fixed the spurious pthreads cast that was
>>>> choking gcc. Would you all try it again please?
>>> I've tested it on 10.3-i386, same build error as before. Maybe
>>> the problem happens with 10.3, but not with 10.4. I'll retest with 10.4.
>> Yes, builds with 10.4, fails with 10.3.
>
> mat@'s feedback accepted.
>
> This is the error from a 10.3-release poudriere jail:
>
> =======================<phase: build>============================
> ===> Building for viamillipede-0.7
> make[1]: "/usr/share/mk/bsd.own.mk" line 505: MK_DEBUG_FILES can't be set
> by a user.
> *** Error code 1
>
> I'm not sure where to go from this; can I fence the port to build only on
> 10.4+?
>
> I'll poke around in the jail for clues.

It's been a while, but I think it's something like this in the port's Makefile:

.if ${OSVERSION} < 1004000
BROKEN= Needs features from at least 10.4
.endif

Vince
https://www.mail-archive.com/freebsd-ports@freebsd.org/msg78312.html
Hello together, i have a question regarding the config file. I want to write some int values to it, but the value has to be a const char*. I'm working with ALLEGRO 5.0.4. I think there are two solutions for it:
1. convert the int to a const char* (don't know how)
2. there is a function to write an int value to the config file (don't know of one)

Maybe anyone can explain me the "right" way with a short example?

With best regards
MacGyver

Write yourself a function

int write_int_to_config(int i)
{
  char s[30];
  // convert i to string - that's a tricky job but not too hard.
  // use al_write_to_config( (const char *)s, .... ) or whatever the function is
}

snprintf()
strtol()

My solution

Is there a better solution for it?

Click on the links gnolam posted.

Hello Arthur Kalliokoski, and what's wrong with my posted solution?

Greets, MacGyver

For a C++ solution, use stringstreams.

[EDIT] Two major things from a quick skim:
- Won't handle negative integers
- returnvalue goes out of scope, so the pointer that .c_str() returns becomes invalid as soon as that function returns.

and what's wrong with my posted solution?

I didn't check it at all, but reinventing the wheel is usually a waste of time and often leads to bugs.

Hello, thanks for that quick response.

> Won't handle negative integers
> returnvalue goes out of scope, so the pointer that .c_str() returns becomes invalid as soon as that function returns.

Negative integers are not given, but i understand you. I'm not so familiar with snprintf() and strtol(); can anyone explain them with a little example?

> can anyone explain them with a little example?

Those are C functions. Since you are using C++, you should use stringstreams as gnolam said. Something like:

#include <sstream>
...
stringstream str;
str << int_value_I_want_to_convert;
al_set_config_value(config_file, section, key, (str.str()).c_str());
str.str("");

Ok, works for me. Thanks.

Part of being a newb (I know this from experience) is that you find yourself doing things the hard way due to lack of experience. Then you're reluctant to do things the easy way because it means all your previous hard work was a waste of time. Anyway, here's the easy way: I should have posted what Gnolam posted.

> you find yourself doing things the hard way due to lack of experience.

Been there, done that.
https://www.allegro.cc/forums/thread/609795/950338
This is a general question about the logic behind the naming of the assemblies in a ServiceStack solution. Please excuse the directness of this question; I am a big fan of the framework but this has been puzzling me for some time, so I wanted to ask and better understand the reasoning behind the naming.

When creating an ASP.NET Service Host solution named X, I start with the following assemblies:

- X.ServiceInterface
- X.ServiceModel
- X.Tests

With consideration to the pure MVC naming convention (pure MVC, not ASP.NET MVC), I am used to sharing the Model and View with a consuming application so that it knows how to consume the service, avoiding the need to recode the same classes for the model and service proxy, and leaving the control code to be consumed by the service provider host. In doing so, the consumer has access to the model and interface (in this case the service method signatures) without gaining access to the implementation code, which may contain sensitive IP.

In the ServiceStack structure, the model and method signatures (which I would normally refer to as the service interface) are contained in the ServiceModel assembly, while the implementation code of the service is contained in the ServiceInterface assembly. To me, a better naming convention would be:

- X.ServiceImplementation (contains the implementation code, as ServiceInterface does now)
- X.ServiceInterface (contains the model and method signatures, routes etc., as ServiceModel does now)
- X.Tests

Just a thought; I was interested to get some feedback on this.

It's the nomenclature used in the Service Gateway pattern:

Service Gateway -> [Service Model] -> Service Interface

The ServiceModel project contains your Data Transfer Objects (DTOs), i.e. "Models for your Service".

Yes this makes sense, it's just that when you provide objects to be consumed you want to provide the method signature and the DTOs.
I don't really have a problem with calling this model; I mostly have a problem with calling the other assembly "interface", when in fact the interface is being defined in the model assembly. In any case this is nitpicking, the framework is awesome.

Best Regards

The "Service Interface" nomenclature refers to it being the "facade" or external-facing interface into your System. For many small/medium-sized projects, the ServiceInterface project contains both the Service entry points and the implementation, but for larger Systems it may contain only the entry points, which delegate to app logic spread over multiple project subsystems. In either case the "ServiceInterface" project contains the external interface into your system.

Hi Mythz, yes, not that I'm wanting to start a debate on this, but from a purist perspective I think it's interesting. Within ServiceStack the "Service Interface" is defined by code like the following:

[Route("/applications/{id}", "GET")]
public class GetApplication : IReturn<Application>
{
    public Guid ID { get; set; }
    public Guid AccountID { get; set; }
}

[Route("/applications", "GET")]
public class GetApplications : IReturn<List<Application>>
{
    public Guid AccountID { get; set; }
}

To me, this code alone defines the "interface" of the service and how an external service will "interface" with it. It defines the request and response types, names the method and provides the route. This code currently sits in the Model assembly. Meanwhile, this code:

public object Any(GetApplication request)
{
    Application response;
    using (var db = DbFactory.Open())
    {
        CheckTableExistsAndInitialize<Application>(db);
        response = db.SingleById<Application>(request.ID);
    }
    return response;
}

does not define the interface; it only provides an implementation of the interface that has been defined in the model. An external consuming application needs the interface definition and the message models in order to know how to connect (e.g. to proxy etc.).
In SOAP this is provided by the WSDL; in ServiceStack, by your Model assembly. As I mentioned I have no problem with the assembly being named Model, but to me the Interface assembly is named incorrectly, as it does not define the interface, it implements it. One key point is that the consuming application does not require the Interface assembly in order to work against the service. That is why in my mind the naming is not quite right on that assembly, as in pure OO the consumer does need to know the provider's interface.

The Request DTOs define the "Service Contract" containing the Request/Response messages sent from the "Client Gateway" to the "Service Interface", i.e. the facade and the entry point into your system. A visual of what this looks like with respect to your entire system is in Roles of DTOs. So on one side is the "Client Gateway", which is the implementation that sends Request DTOs to your Service. The implementation for this is realized by the "Service Interface", i.e. the implementation which accepts the Request DTOs sent by the Client Gateway. This is the nomenclature used to define the Service Gateway pattern, with the "Service Interface" being the server implementation which accepts the Request DTOs; it's not to be confused with an abstract C# interface. The "Interface" means the entry point into your system; it's a Services concept, not a code one.

Hi Mythz, firstly thanks so much for taking the time to respond. I wish again to state that I love this framework; it is a beautiful thing. In my posts I was not referring to an interface in terms of coding, but rather the "View" of a piece of software. For a Windows Forms app, the interface would be the forms; for a service application, the interface is the contract. The interface defines how the outside world interacts with the software. Application logic should not be found in an interface assembly.
This is because the consuming application should consume the interface (so it knows how to use the service), and sharing of application logic between tiers is not normally a good thing. You shared above a link to the Service Gateway pattern; that page links to another, the Service Interface pattern. You will see there in Figure 1 that the contract is part of the interface and that the implementation (or application logic) is separate. It makes sense if you think about it, because the consuming application must know about two things: the interface and the models (or Model and View in MVC terms). In the context of service architecture, the View in my mind is the interface. At present we have the Interface as the Control, and that does not seem right to me. Perhaps in my mind I am expecting the Gateway pattern to daisy-chain to the Interface pattern, and perhaps you are saying that it's one or the other. I'm more than happy to agree to disagree; again, I'm in awe of what you have created, I'm a big fan. Best Regards.
https://forums.servicestack.net/t/assembly-naming/5135
I have two interfaces, MyapiA and MyapiB, with 2 methods each, and two classes: SortA does all the sorting routines and SortB defines the arrays and generates random numbers. In the main class, Sort, I have the following code, which produces the expected results using a default argument of 6 for the "x length", or number of numbers to sort. This all works well; where I am a bit stuck is getting it to accept user input or use the default value. I could hand the assignment in without this, but I really want to learn how this can be done. I have played a bit with the Scanner object but am not really sure how to proceed from this point. Here is the code from my Sort class, which is very simple:

public class Sort extends SortB {
    // start the main method
    static public void main(String[] args) {
        SortB mySortB = new SortB();
        mySortB.sortIncrease(6);
        mySortB.sortDecrease(6);
    }
}

If anyone can offer some advice on how I might introduce a small routine to have it accept user input or otherwise take the default, it would be greatly appreciated. Thanks very much for any advice.
https://www.daniweb.com/programming/software-development/threads/226559/java-program-user-input-and-default-argument
Created on 2019-01-04 11:04 by Petter S, last changed 2019-01-25 23:30 by FR4NKESTI3N.

The ``ANY`` object in ``unittest.mock`` is also pretty useful when verifying dicts in tests:

    self.assertEqual(result, {
        "message": "Hi!",
        "code": 0,
        "id": mock.ANY
    })

Then it does not matter what the (presumably randomly generated) id is. For the same use cases, objects like ``APPROXIMATE`` (for approximate floating-point matching) and ``MATCHES`` (taking a boolean lambda) would be pretty useful, I think.

Looking at the code, ANY is simply implemented with __eq__ always returning True. I am not sure how APPROXIMATE can be implemented in terms of floating point, like rounding up or down? Do you have any examples in mind of a sample implementation, or examples of how they should behave in various scenarios?

Yes, something like this:

    class APPROXIMATE:
        """Takes a floating point number and implements approximate equality."""

        def __init__(self, value):
            self.value = value

        def __eq__(self, other):
            return abs(self.value - other) / (abs(self.value) + abs(other)) < 1e-6

        def __repr__(self):
            return f"APPROXIMATE({self.value})"

Then the following would hold:

    got = {
        "name": "Petter",
        "length": 1.900001
    }
    expected = {
        "name": "Petter",
        "length": APPROXIMATE(1.9)
    }
    assert got == expected

But not:

    got["length"] = 1.8
    assert got == expected

I feel it would be better to have tolerance as an argument.

Agreed! The code above was a quick example. There are also functions in the standard library for approximate float matching that the "real" code would use.

APPROXIMATE feels like it might lead to code smell to me; if I know roughly what the float should be, why would I not want to test it for exactness? It could end up hiding inconsistencies the tests should be catching.

I am of the opposite opinion. :-)

> if I know roughly what the float should be why would I not want to test it for exactness?
When testing algorithms, it is often the case that the answer should be mathematically exactly 2, but due to floating-point inexactness it becomes, say, 1.9999999997 in practice. If I then test for exactly 1.9999999997 the test becomes very brittle and sensitive to e.g. the order of multiplications. Testing floating point numbers with a relative error is essential in many applications.

> due to floating-point inexactness

+1. It's not easy to predict when a calculated value will not be equal to the expected theoretical value. For example, math.cos(radians(90)) is something like 6e-17 rather than 0. Testing for this exact value would be just awkward. assertAlmostEqual() is already there in unittest for such comparisons, so it wouldn't be completely nonsensical to have something like APPROX
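Picking up the "tolerance as an argument" suggestion from the thread, one possible sketch (my own illustration, not a patch attached to this issue) delegates the comparison to math.isclose from the standard library:

```python
import math


class APPROXIMATE:
    """Matches any number within a relative tolerance of `value`."""

    def __init__(self, value, rel_tol=1e-6):
        self.value = value
        self.rel_tol = rel_tol

    def __eq__(self, other):
        try:
            return math.isclose(self.value, other, rel_tol=self.rel_tol)
        except TypeError:
            # `other` is not a number: let the other operand decide.
            return NotImplemented

    def __ne__(self, other):
        result = self.__eq__(other)
        return result if result is NotImplemented else not result

    def __repr__(self):
        return f"APPROXIMATE({self.value!r}, rel_tol={self.rel_tol!r})"


# The brittle value from the comment above compares equal...
assert {"answer": 1.9999999997} == {"answer": APPROXIMATE(2.0)}
# ...while a genuinely different value does not.
assert {"answer": 1.8} != {"answer": APPROXIMATE(2.0)}
```

Returning NotImplemented for non-numeric operands keeps the object well-behaved inside dict comparisons, mirroring how mock.ANY is used above.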
https://bugs.python.org/issue35656
Hey, I am currently working with Java on an assignment. I don't know how to call a form from the menu bar to open a new form, so can anyone help?

What part of it do you need help with... how to respond to a menu selection? How to open a new form? Something else...?

Oh, I have made a form using this code:

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package mainmenu;

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

/**
 *
 * @author new
 */
public class MainMenu extends JFrame {

    public MainMenu() {
        super("Main menu");
        JMenuBar menuBar = new JMenuBar();
        setJMenuBar(menuBar);
        JMenu file = new JMenu("File");
        JMenu about = new JMenu("About");
        menuBar.add(file);
        menuBar.add(about);
        JMenuItem detail = new JMenuItem("Enter details");
        JMenu query = new JMenu("Query");
        JMenuItem exit = new JMenuItem("Exit");
        file.add(detail);
        file.add(query);
        file.addSeparator();
        file.add(exit);
        JMenuItem detailofallpieces = new JMenuItem("Details Of All pieces");
        JMenuItem showlocation = new JMenuItem("Show Location Of All pieces");
        query.add(detailofallpieces);
        query.add(showlocation);
    }

    public static void main(String[] args) {
        JFrame f = new MainMenu();
        f.setVisible(true);
        f.setSize(800, 700);
    }
}

Now I want to add a form named detailedform, opened by clicking the sub-menu item adddetailed.... please help.

The first thing you need to learn is how to respond to a menu selection. Here's the tutorial: you can skip directly to the part titled "Handling Events from Menu Items"
https://www.daniweb.com/programming/software-development/threads/390898/calling-form-from-menu
« Return to documentation listing

NAME
       MPI_Group_difference - Makes a group from the difference of two groups.

C SYNTAX
       #include <mpi.h>
       int MPI_Group_difference(MPI_Group group1, MPI_Group group2,
           MPI_Group *newgroup)

FORTRAN SYNTAX
       INCLUDE 'mpif.h'
       MPI_GROUP_DIFFERENCE(GROUP1, GROUP2, NEWGROUP, IERROR)
           INTEGER GROUP1, GROUP2, NEWGROUP, IERROR

C++ SYNTAX
       #include <mpi.h>
       static Group Group::Difference(const Group& group1, const Group& group2)

INPUT PARAMETERS
       group1    First group (handle).
       group2    Second group (handle).

OUTPUT PARAMETERS
       newgroup  Difference group (handle).
       IERROR    Fortran only: Error status (integer).

DESCRIPTION
       The set-like operations are defined as follows:

       o union -- All elements of the first group (group1), followed by all
         elements of the second group (group2) that are not in the first group.

       o intersect -- All elements of the first group that are also in the
         second group, ordered as in the first group.

       o difference -- All elements of the first group that are not in the
         second group, ordered as in the first group.

       Note that for these operations the order of processes in the output
       group is determined primarily by order in the first group (if possible)
       and then, if necessary, by order in the second group. Neither union nor
       intersection are commutative, but both are associative. The new group
       can be empty, that is, equal to MPI_GROUP_EMPTY.

ERRORS
       Almost all MPI routines return an error value; C routines as the value
       of the function and Fortran routines in the last argument. The error
       handler may be changed with MPI_Comm_set_errhandler; the predefined
       error handler MPI_ERRORS_RETURN may be used to cause error values to be
       returned. Note that MPI does not guarantee that an MPI program can
       continue past an error.

SEE ALSO
       MPI_Group_free

1.3.4                          Nov 11, 2009          MPI_Group_difference(3)
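The ordering rules above are easy to mis-remember. As a memory aid, the following Python sketch (my own illustration of the semantics described in this page, not MPI code) models groups as ordered lists of process ranks:

```python
def group_union(g1, g2):
    """All elements of g1, then elements of g2 not already in g1."""
    return list(g1) + [p for p in g2 if p not in g1]


def group_intersection(g1, g2):
    """Elements of g1 that are also in g2, ordered as in g1."""
    return [p for p in g1 if p in g2]


def group_difference(g1, g2):
    """Elements of g1 that are not in g2, ordered as in g1."""
    return [p for p in g1 if p not in g2]


g1, g2 = [0, 1, 2, 3], [2, 3, 4]
assert group_difference(g1, g2) == [0, 1]
assert group_intersection(g1, g2) == [2, 3]
assert group_union(g1, g2) == [0, 1, 2, 3, 4]
# Union is commutative only as a set: the order of the result differs.
assert group_union(g2, g1) == [2, 3, 4, 0, 1]
# The difference can be empty, mirroring MPI_GROUP_EMPTY.
assert group_difference(g1, g1) == []
```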
http://icl.cs.utk.edu/open-mpi/doc/v1.3/man3/MPI_Group_difference.3.php
A few months ago a client asked me to import some XML data into a spreadsheet. The data was for managers who couldn't handle raw XML. I was happy that the company had standardized on the popular office suite OpenOffice.org (popularly abbreviated "OOo"), and I thought this would make the task trivial. After all, OOo is one of the most XML-savvy projects of its size. As I discussed in a recent developerWorks article (see Resources), OOo uses XML for its file format. Once I set to the task, however, I was surprised to find that none of the built-in or readily available third-party importers handled importing generic XML into the spreadsheet component, OOo Calc. OOo Calc can import from some specialized XML formats, such as XHTML tables, and several existing tools can import general XML into OOo Writer (the word processor component). But when it comes to importing generic XML into OOo Calc, the user is on his own. I can understand this limitation: it's not usually easy to superimpose a regular lattice of information upon an XML document; such is the mismatch between extensible hierarchies and rigid tables. In this article I discuss a technique I used to solve the practical problem of converting XML into a form that's easily digestible by OOo Calc. Please make sure you are familiar with XSLT and XPath, and that you understand HTML and XHTML table structures before reading on. I ended up writing a quick XSLT program to convert the XML to an XHTML table structure, which unmistakably signals how the information structure should be worked into the spreadsheet. The technique I offer in this article extends this idea, making it useful for arbitrary XML formats, with just a bit of extra work from the user. Listing 1 (labels.xml) is a sample XML source file that I want to import into a spreadsheet. (To download this code, see Download.) The XML code is quite regular, but I still need to process it before I can easily import it. Listing 1. 
Sample XML to be imported into a spreadsheet (labels.xml)

Listing 2 (quick-xml2table.xslt) shows the XSLT tool to create an XHTML table. (To download this code, see Download.)

Listing 2. XSLT module to convert arbitrary XML to a regular XHTML table grid (quick-xml2table.xslt)

The key to this technique is the table-spec-source parameter, which is passed to the XSLT processor. I set this parameter to a node set that gives an XML structure in the namespace, which outlines how to construct the output table. The top-level table-spec element contains one or more row-spec elements, each with an XPath in the select attribute that determines what main elements in the source document are the basis for table rows in the output. These elements contain col-spec elements whose select attribute specifies, relative to the current row element from the XML source, what expressions are used for each row value. A default value for table-spec-source tells the processor to look for the table information right in the XSLT itself, as a top-level element. This default spec in Listing 1 is a good overall example of what table specs look like, so I highlight it in Listing 3.

Listing 3. The sample table spec for turning Listing 1 into an XHTML table for OOo Calc

Within each row are four columns, which come from:

- The label element's id attribute
- The label element's name child element
- The result of applying the normalize-space function on the address element child of the label element
- The label element's added attribute

The power of dynamic XSLT

The main template uses row-spec to gather all the elements that serve as the basis of the rows. Since the XPath it uses is not actually written into the XSLT instructions, but is given at run-time as a result of evaluating $row-query/@select, I must use the EXSLT extension function call dyn:evaluate($row-query/@select). EXSLT is a set of XSLT extensions that are developed and sanctioned by the XSLT community, and supported in most XSLT processors.
The dyn:evaluate function converts its argument to a string and dynamically evaluates that string as an XPath expression, returning the result of this evaluation. Note: Remember that the row elements are computed as XPaths, not as XSLT patterns, so you have to express the full path from the document root to the elements in question. In other words, it would not work for you to replace /labels/label with label in the table specs. To handle each row element, apply templates in process-cols mode, which loops over all the col-spec elements and again uses dyn:evaluate to compute the text that is output for each column. To demonstrate the XSLT utility, I ran the transform using the open source XSLT tool 4XSLT, part of 4Suite (see Resources) as follows: The -o option redirects the output to a file with the given name, labels-table-output.html. Listing 4 shows the output. Listing 4. Output of Listing 2 run against Listing 1 (labels-table-output.html) A key feature of the OOo preparation utility is your ability to substitute different table specs for the output. Suppose you only want to export the ID, name, and province for each label. Use a table specification file like that in Listing 5 (tspec.xml). (To download this code, see Download.) Listing 5. Alternative output table specification in a separate XML file (tspec.xml) To instruct the XSLT utility to use this alternate set of output table specifications, override the table-spec-source parameter in your XSLT processor of choice. With 4XSLT, do this using the -D command line parameter: This time the output is different. See Listing 6. Listing 6. Output from alternative table specs Once you prepare the import file, actually performing the import is a snap. I performed the following tasks with my OOo 1.1.3 installation on Fedora Core 3 Linux. I selected the menu entry Insert > External Data. In the field URL of external data source, I entered the output file from the XSLT transform (Listing 4 in this case). 
In the next dialog box (see Figure 1), I made the indicated selection. Figure 1. The OOo Calc "Select Filter" dialog box I selected HTML_tables from the resulting dialog. OOo Calc then performed the import, and Figure 2 shows the resulting data in the spreadsheet. You will probably want to tweak the formatting of the results of such imports. For example, you might want to change font type or color, enable word wrapping of text fields, or set a specific formatting for numerical fields. Figure 2. Result of import into OOo Calc I hope that OOo develops more sophisticated import facilities for XML, or that contributors develop such tools for people unable to go to such sophisticated lengths in XSLT. For example, I can imagine a dialog box where one enters a series of XPaths to provide column information. Until such a time, though, I think that some variation on the XSLT technique I've presented here will be useful whenever you need to pull some XML data into OOo Calc. Information about download methods - Download the code listings presented in this article. - Learn more about OpenOffice.org at the home page, and especially the page dedicated to the role of XML in the project. Also, look at several third-party resources for OpenOffice.org, including the OpenOffice.org Utility Library and OOoMacros.org. - Read about the XML file format for OpenOffice.org in Uche Ogbuji's article "The open office file format" (developerWorks, January 2003), an installment in his Thinking XML column. - For more on EXSLT, try these resources: - Read Uche Ogbuji's article "EXSLT by example" (developerWorks, February 2003). - Visit the EXSLT home page for details on all the modules, elements, and functions. In particular, see the EXSLT dynamic module. - Learn how to contribute your own additions to EXSLT, if you'd like. - For discussion of EXSLT, join the EXSLT mailing list. - Check out 4XSLT, the stylesheet processor used to test the examples. 
4XSLT supports EXSLT and is part of 4Suite, which is co-developed by Uche Ogbuji.
- Find more XML resources on the developerWorks XML zone.
- Browse for books on these and other technical topics.
- Find out how you can become an IBM Certified Developer in XML and related technologies.
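For readers who want to experiment without an XSLT processor, the row-spec/col-spec idea in this article can be roughly approximated with the Python standard library's limited XPath support. This is my own sketch of the technique, not the article's quick-xml2table.xslt: the helper names (xml_to_rows, rows_to_html) are made up, the labels data is invented in the spirit of Listing 1 rather than the article's actual labels.xml, and column specs here are just "@attribute" or a relative element path, not full XPath.

```python
import xml.etree.ElementTree as ET

# Made-up sample data in the spirit of the article's labels.xml.
LABELS = """<labels>
  <label id="tse" added="2003-06-10">
    <name>Thomas Eliot</name>
    <address><city>Stamford</city><province>CT</province></address>
  </label>
  <label id="ep" added="2003-06-20">
    <name>Ezra Pound</name>
    <address><city>Hailey</city><province>ID</province></address>
  </label>
</labels>"""


def xml_to_rows(doc, row_path, col_specs):
    """For each element matching row_path, evaluate each column spec:
    an attribute name prefixed with '@', or a relative element path."""
    rows = []
    for elem in doc.findall(row_path):
        row = []
        for spec in col_specs:
            if spec.startswith("@"):
                row.append(elem.get(spec[1:], ""))
            else:
                row.append(elem.findtext(spec, "").strip())
        rows.append(row)
    return rows


def rows_to_html(rows):
    # No HTML escaping here; a real version would escape cell text.
    cells = "".join(
        "<tr>" + "".join(f"<td>{c}</td>" for c in row) + "</tr>" for row in rows
    )
    return f"<table>{cells}</table>"


doc = ET.fromstring(LABELS)
rows = xml_to_rows(doc, "label", ["@id", "name", "address/province"])
print(rows_to_html(rows))
```

The resulting single-table HTML file can then be pulled into OOo Calc with Insert > External Data, exactly as described in the article.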
http://www.ibm.com/developerworks/xml/library/x-oocalc/
The latest version of Java, as of today (September 1, 2013), is Java 7, released on July 28, 2011 by Oracle Corporation. The latest update to Java 7 is update 25, released on June 18, 2013. Two ideas basically remain in the Java designers' minds while designing the language and while adding new features to every new Java version released:

- How to make the language simpler for the developer to use, by decreasing the coding part?
- How to increase the performance?

JDK 1.7 Features

Following is the list of new features of daily importance:

1. Strings in switch expressions
2. Underscores between digits in numeric literals
3. Integral types as binary literals
4. Handling multiple exceptions in a single catch block
5. Try-with-resources statement
6. Automatic type inference in generic object instantiation

Only the first feature is shown here, as it is very interesting for everyone, even for C/C++ programmers. Generally switch takes a parameter of int data type or one convertible to int (remember, it does not take the long data type). But one of the JDK 1.7 features is that switch accepts a string as a parameter. Following is an example of switch with a string parameter:

public class SwitchStrings {
    public static void main(String args[]) {
        StringBuffer sb = new StringBuffer();
        for (String str : args) {
            switch (str) {
                case "Idly":
                    sb.append(str + ", ");
                    break;
                case "Dosa":
                    sb.append(str + ", ");
                    break;
                case "Puri":
                    sb.append(str + ", ");
                    break;
                case "Vada":
                    sb.append(str);
                    break;
                default:
                    sb.append("No breakfast menu");
            }
        }
        System.out.println("Your breakfast menu: " + sb);
    }
}

Switch does case-sensitive comparison with the case statements. Switch gives more efficient and cleaner code than an if-else if-else chain. All the above features, with full example programs, are discussed elaborately at length in JDK 1.7 Features.
https://way2java.com/java-versions-2/latest-java-7-jdk-1-7-new-features-added/
Juniata Jlfittiitcl. ( PF I 4 A of tale, a.,,1 a vuiun oflamh, j A ...'..a oMrrf,,a),.( ,u:n A ! the Am.ri- .in Vml.n furevrrl r.e,.y M.t.t.t. X.,r,bVr 7, 161. r--r- . - FEARLESS AM FREE." - ' " " r. A. I., t'.l s ic W. H. 1 l is, Fditors. v. 10 : ?i. rrerlaiia Liberty -RT Ttirnn:hnt thr l.aad "TVt t ALL the uyz 7-C Ibkitl.itants Thrriof. -rij fr tva: y i ma ta sextixei. - liaj liic Lmjft C.icaUIi- of nv paper ub-li.-lied in tliU C'uiidijt. I! in therefore tbe t...vl, ai.iy e-inJucietl, a fir.u ci toealist. aul well wirtlir of the :ronage of every; 1 i-Vir.r Heaps. The Copj rhesds have ali joincii the src liea'.'. A sicker trt of bur you never saw. II"W ARK V'V lt Tr? .V tliTehailt iii this tou 'ire a man a J air of b'M'ts, ju .-l"rn, bau Jkfi '1 i and hat to vote i'.r Met'klljB 1 ik u l-ue intox icated auj told I- . tenant iuvrstaicnt to flink ..f? !..T Oi. Tii-lv evenin? at 7 o'- rl.iek "old JjVe's ehance Tr the Toxt Of- ' vote of five persons, all Uuion men, fi-'n ' ' Any iufornutjnn io regard tostorc ! were rejected. They were all good citi ik1i boxci, f. uiuleirt votes, false and de- reus their only crime was that they were ticicnt a-wssincnti, or other effieieut dem tieratiu measures which would revive the above rhancos will be gratefully received l y the chairman of ye verdigris kounty komuiiUec. ... . , r!ii..li lVe W li'wtun Irt i...rfrvut ntfi 0 " a few poor creatures v-ho lately went wcr to the Copperheads. We hope they are xnntcntcd with thc:r new companions. They are known aad spotted in their sev eral districts, just as all who desert in the trying hour ought to be marked and re membered. They went out from us be cause they were not ' of us. Let them tay. They "waited to sec a change," bul they changed uothing but themselves". Wreaths. On Saturday sonic Union boys in a wagon going to rerrysville pul led down a wrcaih of spruce and flowers stretched from tae house of Lieut. Wm. llecs to that of Gou. George B. McJordan, in Patterson. 
AVe have bceu asked what we thought of that. We auswer freely thnt it was wrong and ought not to have been done. At the same time the wreath had subserved its purpose and ought to have been take 1 down by its owners. In I'attcrsca they falsely charged Uni on girls with stealing their copper polluted wreath?, which resulted in inyal lady chas ing three verdegris women across the commons. The Last Arou.ment On Tuesday a wouuded democrat soldier who wanted to vote for Lincoln, was taken by his vrFE(?) who was hired by well known democrats in town, and furnished with a buggy, whiskey and 55,00 in money, to take her husband and prepare him to wtcfortbe Gun Boat General. When they returned he was forced up to the polls, and against his wishes and repeated cheers for LINCOLN, Mechanically voted for McClellan, his wife standing by aud be fore a large crowd declaring that if he vo. tod for Lincoln he should never come In ber house, and, ilic irvuM ncrer ?rl htm rkrp with Iter again..' Truly oar demo, crats have novel arguments. . Still another Argument. A dem ocrat adjoining town told hitcnaut, "I'll heG d d d if you can't 'rote as T.do,1-you must 'leave my place. We ticar of several other such threats. TIiun-iiAXDED Game. We arc told that a certain Knight of the Golden Cir cle and retailer - of treason and liquid damnation at the Lock in Mexico, got up subscription, on eloction day to raise mon 'iry to free. Walker township from the next oral I. which he said was to come off soon. Tbe pseudo-subscription was freely sign-r-d by dc-mwats and presented na a fare to thc"wflvrring young men subject to liniTr. '.We do tint suppose the devil ever a(tr a uit-nn, d'rty plot which ho rould in.t.p,l ti'ppe.rLc.'id.). 'tu carry cut .for Great Excitement in McAlisteraville: A Soldier Attacked bv the Enemy The Soldier Charge Face Spoiled liar Ieraolihed Stove I pset Mas terly Retreat, dee. dec. . , At the election yesterday in McAlister- vilie is Gen. 
David Dunn stopped upon I :ortns pon n, ne was assaulted oy a ; Cotniuwioner and Knight of the Co! Jet) Circle and thrust off the porch. Here upon followed a scene of the richest char- ! actcr since the war began. Gen. Dunn charged upon his assailant, who retreated into th bar-room in a most precipitous manner. Gen. Dunn then ordered his right and left to advance which they im mediately did, takiDg the enemy over the tinder the cya which promptly checked hiui. f ..,, The copperheads tried to nut nff the tlrijrral's retreat hv loekiner . , . . - . . . . . " liini;ir, .'though t.uly hnlf thcjwe.ght of hi. antagonist. Gen. Dunn then flank- led the cnomy, pouring in an enfilading j fire tmaahin- np the bar, upsetting the st0ve, spreading consternation among the ! enemy generally. Gen. Dunn next, ad- I rancc l his foot force titling him in the rcar C4U.siug d d.im.;. and causing dis astrous rr'rusi. i'J uus unit: iuc uuiuu . force outride had penetrated the door in ' a manner which left it look ubont as well !pliuterrj as McCIcllau's party. They found the battle over, the victory won, and the enemy leaving in a southerly di- I rcction. Although no whiped JKnouse vet cjuM 1 not withstand the gallant Gen. l)uu0. The w.ion reinforcements cheered and the General held the camp of the ; enemy. A nanason c reward ns ieen 1 raUcd fur Gen. Dunn by the Union forces ctieamjicJ iu tba.t section. FitAt l ANI DKI'RAUIilNit jn Mii. ii!'.D. t)ne of the most high handed gauici of fraud nnd dishonesty was prac ticed at the election in Milford yesterd.iy. It was decided that the old assessor's time expired with the last State election and that the new assessor did not come into ! office until after thin election. Thus the assessed between the elcctious and hence could not vote. These copperheads well knew who was the assessor, snd it can ' only be looked upon as a regnlnr plot to ', defraud legal voters. In fact the dcnio- crats have for years practiced cheating ' . . ... ..... . 
, i mon vt)tcrs lD tins towuslnpl'y detective assessments. It is nowust high time that our people kno their rights and knowing dare to maintain. - Wn call m the Union men iar 5!iWwd and elsewhere to bring the guil'.y scoundrels to justice. If we can not have justice ina de'jiocrstic court lot Ui show that wc have te tiorex now to set one on the judicial throne who will give us justice. Forbesrancc is no longer a virtue with our Union citizens. Besides this the board refused the proxy vote of J. Wcstley Jones, a soldier, ; in plain violation of law. And to top the climax there were two more votes in the bos than voters. And there arc just as many men in Milford who are willing to testify th.-t they voted for Lincoln aa Lin coln had votes, thus proving that d Mc Clellan defiaudcrs also stuffed the ballot box- Now let the law be enforced, and show that it is ordained of God to be a terror to evil doers. - The old "unjust judge'' and his click ought ta blow about gaining five in face of the above record. They were mean and dishonest to no purpose. - Another Soldier Gone. Wc regret to learn that Mohts Dhessler of Sus quehanna township aged 17 years, 9 mouth, and 25 days, diod at General Hos pital, Little Bock, Arkansas., S'cptcmLer 29th, 18G4. lie eulibted last fall in Company C. 3rd U. S. Cavalry for the term of 5 years years; but it pleased the Great Kulcr of the universe to iciuovc him from time to eternity. Ilc.aicd of lever. Thus another of our citizens has ufferod up his life a sacrifice, on the alters of hi coun try. Two of bis brothers arc out for three years," one fur one year, one bus served nine months, and one was drafted and paid commutation. Truly thi family has done something for the. country. .. Wc understand that S. G. Dressier, Esq., intends to leave for Little Boek iu a few days to recover the remains of his brother and bring his body home. Deserved PROiroTiof? -It gives us great pleasure to announce that T. T. Da vis, late a" Sergeant in Co.' I." 
53rd Begt- Pa. Vol.. and a brother of the Tunior of this paper, and who has been in the ser vice since the outbreak' of the rebellion, longer than any man in the county', "has been appointed Quarter Master of the 53rd with the rank of 1st Lieutenant. No promotion" cotild' have been more dC' bcrvcd. i' Three tnings tbj Democrats don't ke Trails SnrV.'WtrAtR'g rircu iKivftiKf the m'-f "-s h Klpvti-ntr ' TRAIN IN PATTERSON. HE SPEAKS FOR LINOOLN WHILE T -r.i. THE TRAIN STOPS. ,!e importance ofthegreat political strug- . . - Lfe in which we were involved, asd then The f'om.erlen.l nf 1fiW; .J r ... ....... . . . . . . -f' y Ulier' ton JliUitrate whut they mean lu "Free Sptcch.". ? .They Prevent Him from Speaking by Dvllowing Bqwliujr and Cheers for 4 McClellan. MOST OI'TRAOr.Or Alt ujftp Mun s ai-u isiii U.Ml XT. , On laRt Monday it was announced t' George Francis Train, ' the great spt maker for the Union in England and ' cently a oerefiate to the . Vy f?-. tion. wonl.l n.k f,r I.;n.l .n.l ,1,. Union while the Mail Train Eastward stopped in Patterson. Accordingly a large lot of people gathered at tie depot. Everybody was anxious to hear t man who had a world wide ropntation. In the crowd there were 'juite a numher of our leading copjerhcau3. We heud it whis pered beforehand that a certain person was raising a crowd to go over and disturb the meeting, but we read in the Chicago Platform that the supprci-sion of freedom of speech was calculated to prevent a res toration of the Union, and wo then felt sure these 'hum men (?) would not thus destroy the cause they love. However, as the crowd returned we learned to our as toniehmcut that the copperheads bellowed blowed, suorted, cheered, rarcd, and bawl ed in such a demoniacal style tn to prevent the speaker from talking to the trowd. A persou can hardly realize that these men would have engaged in such a thins Yet it is so and they were onr leading copperheads. 
Prominent among whom was seen the man who fathered the lies agniost Mr. Lyons, who also wants the Post Office, w ho kept bawling ''bah ! bah ! bah ! bidi !'' uulil his moiilh was etrctched opeu as wide as the ears of a jackass arc long. And there was the smooth-tongued lawyer, differing from the above only be cause if possible he is a more arrant and consuinutc hypocrite, whose snorting showed that be is as much opposed to Re publicans using freedom of speech as he is to that of the press. The infernal, damning malignity anil hypocracy of this wretch no pen can depict or words do suribc. Aud there was the old spluttering doctor blowing snd yelling until the slob bers ran down from both corners of his mouth. And there was the man who traveled GG luilcs during the battle of Crosn Keys, who of course proposed tbe ropat44 ehera lor- MoOlellawf ai2 W was only dried up when Mr. Train asked him if he had not been dishonorably dis charged from the army. . Tbeisauwho i i sjwouiuUid oh tl.e fund.v of the county was there, the man who perjured his soul to criminate a man was there, the man who is going to march on Washintnn to hurl old Abe from his scat was there with bia big month stretched from ear to ear j tho shave shop was present, also, and had his mouth slapt fso it is reported) by a loyal lady for hollowing 'McClellan in her face snd scattering his spittle on her per son the monkeys and bed bugs were there in short every dirty loud mouthed cop perhead that could be raked up was there aud they were all engaged in the same itngenllimanh bvtiaexs rf bellowing in onlcr to prevent the tpeaker from being iieartl. . Now talk to as about freedom of speech, will you ? You dis turbers of good order aud common decen cy, will you have the impudence again to charge us with your crimes. Oh you, dirty, hypocritical, hollow, canting, Phar isaical, unfair, dishonest, uncandid, unhal owed, bigoted, unsanctificd, . 
ill-begotten, ill-natured, unrelenting, cold-hearted, treasonable, malignant, cheating, diabolical, infernal, devilish, mob-and-riot, soldier-defrauding, traitor-sympathizing, hell-born, hell-bound copperhead party, with the corruption, perjury, murder, destruction and blood of this war on your backs, with heads full of evil and hearts black as Tartarus, what language can portray your scandalous damning record, or what good or noble emotion ever warned such a generation of vipers and copperheads to flee from the wrath to come?

What a humiliating spectacle! Here were the Ministry of our town, who with commendable curiosity had gone to hear and see the renowned orator of two hemispheres, compelled to see church members to whom they had been preaching religion, good manners, urbanity and common decency for years behave more like mad men and drunkards than followers of Jesus Christ. Is it any wonder that when men behave in this way, and then attempt to justify it, gentlemen can not respect them more than if they were dogs? Oh shame, shame is this Democracy! May God in mercy ever deliver us from its power. Let our people remember "free speech" in Patterson on the 7th of November.

Since the above was in type we find the following in the Telegraph:

"Colonel John J. Patterson introduced Mr. Train to the people. The speaker, as he had no time to indulge in extended remarks, merely asked all those who were in favor of having a President who would be controlled by American influence to say Yes; after the first vote had been taken, those in favor of a Chief Magistrate to be controlled by British influence could also say Yes. This excited some of the copperheads who were present, and they at once set up a fierce howl for McClellan. These men certainly gathered at the depot resolved to insult Mr. Train. An Ex-District Attorney and a soldier who was dishonorably
discharged from the army led in the disturbance; while a hoary-headed old copperhead, who had knelt at the Sacrament Table last Sunday, thus professing love for all mankind, exhibited in his violence towards Mr. Train the utter hypocrisy of his cant as well as the foulness of his treason. From all we have heard of the affair, it was the most brutal exhibition of blackguardism on record against the Democratic leaders and their ignorant followers."

It is worthy of remark that every word that Mr. Train did get to say made the fur fly. That was what's the matter!

The Lewistown Gazette says that it learns that some of the codfish, or more properly herring, aristocracy of Mifflin were in the lead as disturbers of Mr. Train in his speech at that station on Monday last, and that their blackguardism must have given the passengers on the train a high idea of their breeding.

Incidents of the Battle.

The battle is now over, and it is Scriptural to let the dead bury the dead, yet there are incidents of the conflict which ought not be passed over in silence.

1. Our Meetings. The meetings we held over the county were tolerably well attended; they were good, but not what they ought to have been. We were not furnished with lists of the officers and consequently can not publish them. They were addressed as follows: Richfield, Davis and Guss; Mexico, Patterson and Wilson; Oakland, Guss and Porter; Johnstown, Davis and Lyons; Thompsontown, Davis and Guss; Vanwert, Lyons and Weidman; Spruce Hill, Lyons and Davis; McCoysville, Patterson; McAlistersville, Lyons and Davis; Walker [?], Patterson; Perrysville, Patterson; and at Red Bank, Wilson; Mifflintown, a general conference. We close these remarks by asking the members of the Union Party why it is that the copperheads can get out to their meetings twice the crowd with half the trouble. Are the
children of the world, who love slavery and treason, wiser in their day and generation than the children of light, who love Liberty, Government and Law?

2. Their Meetings. The Copperheads held a few open meetings in the county and quite a number of secret ones. On Friday last they made their big effort. They came to town from the country in what they called delegations, with banners, flags, &c. They got up their procession and wound around our streets until they passed our office some half dozen times, yelling and hallooing for "Mac" like forty mad bears, with their tongues wide open and mouths hanging out. They had, after all was told, only 140 voters in the procession, and yet finally, after stretching out their teams and marching and counter-marching, the man who speculated on the county funds rode to the head of the procession and declared amid the most horrible oaths that they must disperse, as they could not find room to stretch out. The Union citizens stood along the streets and laughed heartily at their silly foolishness in parading a lot of women and country babies through the rain and mud. The man who wants the Post Office in this town kept pacing up and down the street instead of attending his Church. This is not strange, for it is now one of the first principles of demon-ocracy to forsake pastor, church, country, and God, rather than their party.

Father Abraham and Uncle Andy Elected.

No Compromise with Rebels! How are You, [illegible]!

The Chickahominy [illegible] again (the worst licked man ever was).

PENNSYLVANIA, The Keystone of the Arch!

We find it unnecessary to extend the publication of the returns as they are received from the different counties. It is sufficient to say that the majority for the Union is overwhelming and may reach [figure illegible].
The old cub's daughter was in a wagon and showed how much of a lady she was by proposing three groans for Guss every time they passed our house. Had she taken soap and washed her mouth it would have been more to her credit. Their great man Northrop did not come, so they had no stranger but Rob. Lamberton, a poor creature from Harrisburg.

3. The Female Procession. We understand some of the Patterson women are threatening cowhides, because we alluded to their marching our streets at an unseasonable hour and in bad company. We did not give them half their dues. A set of things in crinoline who behave as they did ought to get fits. Several gentlemen were standing in front of Mr. Will's Hotel as they passed, not saying a word, when these would-be ladies kept saying "look at the niggers."

Small favors thankfully received: the rejoicing of the Democrats over small gains in a few townships.

Yankee Doodle.

The copperheads of Mifflintown,
As well as other places,
Have showed to us their forked tongues
By many different faces.

Lincoln is the man we need;
Johnson, too, is handy;
Yankee Doodle! boys, hurrah
For Uncle Abe and Andy!

They howled at Train in Patterson
And showed themselves completely,
But the election day has passed around
And dried them up so neatly.

We've got a Grant from Abraham,
To beat the rebels hollow;
And when we have a man to lead,
Why, we're the boys to follow.

Old Butler thinks the way to fight
Is with the gun and sabre;
And doesn't see that Contrabands
Are "fugitives from labor."

The copperheads begin to squirm,
The rebs are looking surly,
Since Sheridan has made them run,
By fighting late and Early.

And of our gallant Sherman now
We feel a little prouder,
Because he's made a lively Hood,
By stirring rebs with powder.

Our country in the Navy, too,
Has many a brave defender;
There's Farragut knows how to shoot,
And make the foe surrender.
Poor little Mac has taught us this,
And for him then let none vote,
His "Napoleonic strategy"
In hiding on a Gun Boat.

We have a man for President
Whose courage never fails him;
That common sense which built the fence
Is just the thing that ails him!

Lincoln is the man we need;
Johnson, too, is handy;
Yankee Doodle! boys, hurrah
For Uncle Abe and Andy!

ILLINOIS FOREVER!

ILLINOIS SUSTAINS HER SON ABRAHAM.

Chicago, Nov. 9. Nine wards in Chicago give 2,50[?] Republican majority; six wards to hear from.

Chicago, Nov. 9. Cook county gives about 4,000 Union majority. The wires are working badly, and the returns come in slowly. Those received so far show gains over Lincoln's majority in 1860, leading the Republicans to claim a majority of 20,000.

Chicago, Nov. 8. Eleven wards of this city give Lincoln 2,577 majority. The other wards reduce this majority to [figure illegible].

Chicago, Nov. 9, midnight. Communication with Iowa is interrupted by a storm, but the leading Republicans and Democrats admit that it has gone for Lincoln by 5,000 majority.

Chicago, Nov. 8. Complete returns from this city show 1,7[??] majority for Lincoln. A Republican Senator and Union members to the Assembly have been elected.

MARYLAND Ratifies Freedom.

Baltimore, Nov. 9. The official vote of the city is as follows:

Lincoln ............ 14,826
McClellan ..........  2,8[9]6
Majority ........... 11,930

Baltimore county gives a Union gain of some 200.

WASHINGTON COUNTY proclaims her adherence by a respectable majority for the Union.

WISCONSIN WHEELS INTO LINE.

Madison, Nov. 8. Scattering returns show a Union loss on the vote of last fall, when the Union majority was 16,000. It is estimated by the Republicans that the State has given 10,000 Union majority on the home vote, which will be largely increased by the soldiers' vote.

Boston, Nov. [8]. Union majority nearly 73,000. This city gives Lincoln about 5,000 majority.
Rice's majority in the Third District is [figure illegible]; Hooper, in the Fourth District, has nearly [5],000 majority. Returns from the State indicate that Lincoln's majority will be nearly 75,000 in the State. In 1860 it was about 4[?],[?]00. Messrs. Rice and Hooper have been re-elected to Congress in the Third and Fourth Districts. They will meet their constituents to receive their congratulations in Faneuil Hall this evening. The Republicans have carried all the Congressional districts.

Boston, Nov. 8, 10 o'clock. Returns from 1[??] towns in Massachusetts foot up [figures illegible] for McClellan. Lincoln's majority will reach over [7]0,000. The Unionists have elected every member of Congress by a heavy majority; also the entire State ticket, and probably every State Senator and nearly the entire House.

Boston, Nov. 8, 11:30 p.m. One hundred and seventy-six towns in Massachusetts foot up:

For Lincoln ........ [9]1,00[0]
For McClellan ...... 2[?],[???]

A grand Union jubilee was held in Faneuil Hall to-night. Among the speakers were Edward Everett, Senator Wilson, Representative Hooper, Dr. Loring, Rev. Dr. Kirk and others.

THE EMPIRE STATE.

GOV. SEYMOUR DEFEATED. Fernando Wood overboard.

New York, Nov. 9. The press of this city agree that Abraham Lincoln has carried the State by a majority ranging from [4],000 to 15,000. Gov. Seymour is defeated. The Herald reports that the indications are that New York has gone for Lincoln by from 10,000 to 15,000.

VERMONT.

Montpelier, Vt., Nov. 9. A heavy vote has been polled in this State to-day. Lincoln is good for [3]0,000.

MICHIGAN.

Detroit, Nov. 8, 10 p.m. The Republicans claim to have carried the State by 1[?],000 majority. The returns are meagre.

WEST VIRGINIA Adds Her Five Electoral Votes.

Wheeling, Va., Nov. 8. Returns from nine counties show large Union gains.
It is believed Lincoln will carry the State by large majorities in every county.

Our Congressional District All Right.

The following is the official result of this district:

                 C. V. Miller    H. Miller
Dauphin             4,[???]        3,237
Juniata             1,[???]        1,2[?]6
Northumberland      2,[???]        [illegible]
Snyder              1,8[?]7        [illegible]
Union               [illegible]    [illegible]

C. V. Miller's majority: [figure illegible].

The Vote for Assembly.

              Swoope   Balsbach   Kearns   Africa
Mifflin        1570      1568      1617     1611
Juniata        1211      1230      1503     160[?]
Huntingdon     2580      2523      2099     214[?]
Total          5577      5625      5[???]   5353

Swoope's majority over Africa: 72[?]; Balsbach's majority over Africa: 287; over Kearns: [4]20.

The latest war news is that Gen. George B. McClellan has resigned his commission as Major General. This chap recently ran against a rail [remainder illegible].
Thread: Compiler ...

A.I. BOT (Guest)

Compiler ...

I have a stupid problem. I made a C++ file called test.cpp. It has the following contents:

Code:
#include <cstdlib>
#include <iostream>

using namespace std;

int main()
{
    cout << "Hello, World!" << endl;
    return 0;
}

wiley:~ whammy$ g++ /Users/Whammy/Desktop/test.cpp
wiley:~ whammy$

How can I get my compiler to work correctly, so that it produces a program rather than an a.out file? Thanks.

A.I. BOT (Guest)

OMG I'm stupid >.< ./a.out runs my program. I feel stupid for forgetting that if you don't give the compiler a file name for the executable, it will use the default of a.out.

Life isn't about waiting for the storm to pass, It's about learning to dance in the rain!
Product Information

FAQ – B2B Contact-to-Account Relationships in SAP Marketing Cloud

Here you find questions and answers on the topic of B2B Contact-to-Account Relationships and SOAP-based integration:

How do I switch on B2B Relationship functionality?

You will find the "contact projection" switch in the Set Up Your Marketing Solution app, in the scenario Contacts and Profiles under Contact Projections for B2B Relationships.

Can we turn off the B2B switch again, or is it irreversible?

Switching back to the state without contact relationships is only possible in the Q-system. In the P-system you must request this from SAP via a service ticket.

Does the migration from the old OData to the new SOAP-based integration impact existing data?

Yes, you must run a migration program after you have switched from the old to the new version of the integration. You need to submit a service ticket to request this migration. The details are described in the migration guide, which is part of the integration and setup guide.

What happens with contacts that have been merged before upgrading to the SOAP-based integration?

It's important to understand the design principles of the SOAP-based integration with regard to this point. For each business partner in Sales Cloud, a corresponding local business partner record is created in SAP Marketing Cloud. The subsequent extraction creates separate interaction contacts for the persons and accounts. No merge is performed, because the business partner in marketing would prevent this, since it is the primary data provider for the interaction contact via the ID origin SAP_MKT_BUPA. As a result of the migration program, which is executed for the SOAP-based integration, the ID origin SAP_C4C_BUPA becomes an additional ID, which provides the reference to interactions loaded from the Sales Cloud. If you have already merged business partners into one interaction contact with the OData-based integration, it's difficult to migrate to the SOAP integration.
One option could be to delete merged interaction contacts in SAP Marketing Cloud before you start to migrate to SOAP. Unfortunately, we currently do not provide a solution that splits merged interaction contacts, because it is simply not possible to provide an adequate solution in the standard. However, you can request a query on your interaction contact base from SAP Marketing Cloud development to figure out all cases where contacts contain two or more SAP_C4C_BUPA IDs after a merge. To get this list, you must submit a service ticket with the subject "Please provide a list with all the merged interaction contacts – those having 2 or more SAP_C4C_BUPA IDs."

How do we handle marketing permissions regarding marketing area assignments?

According to the setup guide, you have to activate marketing areas for campaign execution in order to be able to use contact-to-account relationships. However, it is sufficient to use one single marketing area, e.g. GLOBAL, if, during campaign execution, you use the BAdI CUAN_MKT_EXEC_MARKETING_AREA to set this single marketing area to perform a proper permission check.

How can you set the marketing area for permissions and subscriptions?

- If implicit permissions are used, you are fine, because they are created with a marketing area in any case, i.e. even if the use of marketing areas for campaign execution is switched off.
- Subscriptions have a marketing area assigned as well.

So it is critical and tricky for explicit permissions:

- If you are just starting with your implementation, you must fill the proper marketing area when loading permissions initially for the contacts.
- If you are already running Marketing Cloud productively, it's more difficult. One approach could be to extract the marketing permissions for the relevant contacts and reload them into SAP Marketing Cloud, using the API API_MKT_CONTACT.
Alternatively, as of release 2108 SAP provides the possibility to assign a single marketing area to the explicit permissions via a service incident. For more details refer to the corresponding chapter in the administration guide.

Do we have to assign a marketing area when loading interactions?

Yes. When loading interactions, for example from a first-party website, or business documents like opportunities from Sales Cloud or sales orders from S/4HANA, you must pass on a value for the marketing area, e.g. using the BAdI Revise Interaction Data before Data Import; refer to the following documentation. This is especially important if you want to further use interactions for trigger-based campaigns. As of release 2108 a new application job is available which can be used to update existing interactions with a marketing area using a live target group. For more details refer to the administration guide.

Is there integration for SAP CRM available that supports B2B relationships?

Currently there is no standard integration for SAP CRM that supports this feature set. However, there is a CX Works best-practice article available which guides you through changing the integration towards the SOAP-based business partner replication.

Can B2B contact-to-account relationships be enabled for multiple marketing areas?

Yes, the marketing area plays an important role in this feature set. For contacts with multiple assignments to marketing areas, every relationship to the related account will be available in the corresponding marketing area. For example, if a contact is assigned to marketing areas A and B and has relationships to two different accounts, the system provides the Cartesian product of the relationships, i.e. in total 4 relationship best records will be created. The same applies if the contact has relationships to one account with different relationship categories.

Why is the attribute COMPANY_NAME not available anymore in API_MKT_CONTACTS?

Thanks for this valuable feedback.
This is a requirement which is still being evaluated. We will look into this issue.

Are relationships between two accounts supported?

Currently only contact-to-account relationships are possible. This is a conscious decision, as marketing engagements are primarily executed for contacts. You can add these relationships by means of a CBO, which captures the relationship records between accounts, e.g. to use this in segmentation.

Is there a new API for interactions that supports accounts and contacts?

Yes, there is a new sub-node on the interaction which allows you to assign additional contacts or accounts. Note that the interaction does not differentiate between accounts and contacts.

How do we handle existing custom fields after the upgrade to SOAP-based integration?

Existing custom fields can continue to be used after you have migrated the integration to SOAP technology. To do so you have to:

- Create another custom attribute in the local business partner (business context "master data: business partner")
- Use the available BAdI to map the values of the new attribute from the business partner to the existing attribute of the interaction contact.
- Afterwards reload data from C4C as described in the migration guide

Alternatively, you can create a new attribute for the business context "master data: business partner" and apply the scenario-based integration. In this case the new attribute will be created in the interaction contact as well. This has the consequence that segmentation content for the new attribute needs to be adjusted and data needs to be loaded from additional data sources as well.

Thanks for this FAQ Josef! Something that I thought would be useful to add concerns the mapping of functions. When replicating relationships, you might get an error like "Function Z006 does not exist".
Previously, with OData-based integration, configuration for functions was required in Manage Your Solution -> Configure Your Solution -> Marketing | Marketing | Contacts and Profiles -> Functions for Contacts. But with the migration to SOAP-based integration, configuration is now needed in this location instead: Manage Your Solution -> Configure Your Solution -> Database and Data Management | Business Partner | Contact Person -> Define Functions for Contact Person. What's more, this new configuration only allows for 2-digit codes, while the previous configuration (and Sales Cloud) allowed for 4-digit codes. To top off the complexity, in the "code list mapping" in Sales, you must not add a mapping from the 4-digit codes of Sales Cloud to the 2-digit codes of Marketing Cloud. No, you must add two leading zeroes on the marketing side. So if you configured function code "Z6" in Marketing Cloud, then your mapping should be "sales = Z006 to marketing = 00Z6". Hope this info can help, Kr Joyca

Hi Joyca, the same happens for the departments. Best regards, Markus

Thanks for that info Joyca! I was already searching where I made a mistake... using 4-digit codes solved my problem! 🙂

Joyca Vervinckt did you already come across an error saying "Invalid namespace for Function"? I wanted to create new entries starting with Z1 (e.g. R&D Manager) - assuming we cannot easily go on with 11 - but it says "Invalid namespace"? Am I doing something wrong? Best Regards Tobias

Hmm, no, I made my function codes on Z1, Z2, etc. as well and didn't have that error... By the way, here is some more communication from a ticket that I had at SAP about this, as I was thinking about the limits of a 2-digit code. If you could only use Z1 to Z9 then there's only 10 available codes, while Sales has 4 digits so can have 999 different codes. This was their reply (so Z should certainly be within the allowed namespace)

Thanks also for that Joyca.
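As a small sanity check of the convention Joyca describes (pad the 2-character marketing code with leading zeroes to get the 4-character value entered on the marketing side of the code list mapping), here is a sketch; the helper name is made up for illustration:

```python
def marketing_mapping_value(marketing_code: str) -> str:
    """Pad a short Marketing Cloud function code (e.g. 'Z6') with
    leading zeroes to the 4-character value used in the Sales Cloud
    code list mapping."""
    return marketing_code.rjust(4, "0")


# Per the comment above: Sales Cloud 'Z006' should be mapped to
# marketing '00Z6', not to the raw 2-digit code 'Z6'
print(marketing_mapping_value("Z6"))  # 00Z6
```

The same padding applies to the department codes mentioned below.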
I already tried with departments, and I was able to also use values without "Z", as long as they are not in the SAP namespace. Regarding the error with functions, I created an incident to let SAP check that. Kind regards

Dear Tobias, Did you by any chance already get some feedback from SAP? I'm experiencing exactly the same error.. Many thanks in advance for your feedback. Kind regards, Amina

Hi Amina Vloeberghs, yes, we are in contact with SAP Support and they told us that the issue will be fixed this weekend. So it should work again next Monday (May 24th) 🙂 BR Tobias

Hi Joyca Vervinckt thanks for sharing this solution. I just ran into the problem that you mention about the functions. I followed the suggested solution step by step: However, it does not allow me to save Z1, etc. in Marketing Cloud, and when I migrated from C4C Sales to Marketing, I keep getting the error. Please could you tell me if you know what code I should save in marketing for it to work? Thanks in advance

Hi Carla, Tobias Schneider had the same error regarding the namespaces. Tobias, did you get a reply on your support ticket about that?

ok Joyca, thanks!

By the way, Tobias Schneider, your comment reminded me that after I wrote this comment here on this blog, I had a subsequent issue: the function descriptions didn't appear in Segmentation. For that, you DO need the configuration on "contact" level instead of the one on "business partner" level. And you need a BAdI to map the two... I made an Influence Request about that just now with more details.

Hi Joyca, thanks for your quick reply. Okay, I already thought that something like that would be necessary... Thanks for the confirmation and the BAdI. That will save some time 😉

Hi Joyca Vervinckt A query: when you made the previous comment, did you mean this phrase? It is precisely that I am at the part of configuring the contact functions, before proceeding with the implementation of the BAdI.
but the second path is not located in the system: So, I doubt if I follow the first path (If I have it identified) to do the configuration. Best regards Elio C Hi Elio, It should be just 1 line below your screenshot on the left side. Here it is in the overview list: Thanks for your answer Joyca Vervinckt I had already been looking for that option as well, and I would not think that it is a matter of roles and catalogs either. But with the image you gave me, what I did was apply a search filter and I would understand that it is in this section. BR. Elio C. Hi Josef, very informative summary, thank you! Hope there will be some more articles on CX Works soon! 🙂 BR Tobias Hello Josef, the integration with Sales Cloud now works so far that the Origin SAP_C4C_BUPA is saved as additional ID as you mentioned Now we implemented the integration scenario "Request Marketing Permissions and Subscriptions from SAP Marketing" to display the Marketing Permissions in the Sales Cloud. The problem is that the scenario calls the API with the Sales Cloud ID of the Contact using SAP_C4C_BUPA. As in Marketing Cloud the Origin ID is now SAP_MKT_BUPA nothing is passed back. Maybe you have an idea how we can solve this issue. Kind regards, Markus Hi Markus, thanks for this feedback. We are not aware of this issue and would like to ask you to raise a service ticket that we can have a deeper look into your situation. Best Regards, Josef Thanks for pointing that out Markus, I didn't test that yet. You are correct, I also get errors on the Marketing Permissions in Sales Cloud, but even more errors than just nothing found. I will also open a ticket, but Josef Ehbauer my findings are: In the meantime there is a new Version 1.1.4 of iFlow 'Request Marketing Permissions and Subscriptions from SAP Marketing' available. With this version it works fine. Hmm, I did update this CPI artifact to that version, but I'm still getting the error "SoapFaultCode:5 An internal error occurred. 
For error details check [CPI]" in Sales, and in CPI it sais "Invalid system query option: '$sap-language'." :s With the old version I never got an error. It just did not provide the data. This is the result of a successfull message mapping: Thanks, turned out to be a typo in CPI in the url of the marketing cloud system. So the error message about "$sap-language" was completely irrelevant/unhelpful. It works now for us too 🙂 (though we also had an error that the Marketing Area description was longer than 40 characters, so we had to shorten our descriptions in smc to make it work with sales) Hi Markus Greutter Joyca Vervinckt we're already on Version 1.1.4 but still have the same issue. And as I looked into the API Call in the iFlow it still uses SAP_C4C_BUPA as described by you above. Did you do anything else to get it working? BR Tobias No, not really. Just everything before that point needs to be working. So your bupa's from sales to marketing must be replicated using the new flows, and marketing needs to send back its confirmation & its own ID too, those are some of the other artifacts in the package. The 1.1.4 flow still needs to work for those who are not using the SOAP interface yet, so there's some filter in there that tries to determine if you're on the SOAP version or not. Here is a screenshot of the mapping in the artifact of version 1.1.2 vs 1.1.4. The field "ContactPersonReceiverID" from Sales seems to be important in the determination. Hi Joyca, thanks again for your quick reply. Okay, that makes sense. Then I assume it has something to do with our error we're facing in the Business Partner Confirmation (MC doesn't send its own System ID but another one...) which SAP Dev team is still checking. In our case then I assume that C4C doesn't get the MC-BUPA ID back and thinks we're still using the old integration and by that uses of course SAP_C4C_BUPA to retrieve the permissions. That helps a lot to understand. Thank you! 
BR Tobias Hello Josef Ehbauer, you anser the question "Does the migration of the old ODATA to the new SOAP-based integration impact existing data?". We have a different situation as we directly started with the SOAP-based integration between Sales and Marketing Cloud. But in the Marketing Cloud we already have Interaction Contacts which have been imported from a third-party system. Now this third-party system shall be replaced by Sales Cloud and the data must be migrated to Sales Cloud. From my understanding if we then send an Account / Contact from Sales Cloud to Marketing Cloud a new Business Partner is created and additionally a linked Interaction Contact. But there is already the existing Interaction Contact from the import. What would be your recommended approach? Kind regards, Markus Hi Markus, if you started already with the SOAP-based integration from the very beginning, you are in a lucky position. In this case the loaded business partner data will create interaction contacts which in turn will be merged with already existing interaction contacts, imported from the external source, if for example the email-address as matching criteria is met. Alternatively you can also delete the data for the interaction contacts, which have been loaded from the external source as they became actually obsolete. Best Regards, Josef Hi Josef Ehbauer in another question regarding Custom Fields () you pointed out that it is planned in the 2105 release to provide the possibility to enable Custom Fields - created for business context "Master Data: Business Partner" - also for Lead Replication to SAP Sales Cloud. Two questions: Would be great if you could answer and confirm that. Kind regards Tobias Hi Tobias, Ad1: You are right that we plan to deliver in 2105 that the scenario-based integration will also enhance the LeadReplicationRequest_Out service. 
However, from it is advisable that you wait until the upgrade of your Q-system to the 2105 release before you apply the scenario-based integration to have this enhancement available in the lead outbound service LeadReplicationRequest_Out. Ad2: Your assumptions are correct. Best regards, Josef Ehbauer Okay cool, thanks for your quick reply Josef! 🙂 Hi Josef Ehbauer one more question regarding that topic. Is it still possible to use Version 2 of API_MKT_CONTACT_SRV if we use the SOAP integration with Sales Cloud? If not, why do we need to use Version 4? Because we would like to use the standard integration package from SAP for External Landing Page Integration In the latest integration guide (2102) it is recommended to use the Communcation Arrangement SAP_COM_0342 in SAP Marketing Cloud. But this only includes Version 1 & 2 of the needed service. What is the recommended approach if we are using the new SOA-based integration from C4C and would like to use the External Landing Page Integration package? Kind regards Tobias Hi Tobias Schneider Version 2 (and indeed the External LP integration package) completely STOP working when activating the Relationships function in SMC. See also and I did get into contact with SAP about this, and there should be an update of the CPI flow and the linked communication arrangement in SMC for this external lp integration ready in version 2105. Hi Joyca Vervinckt thank you so much! I remembered that I read something about that from you, but wasn't able to find your question anymore. That helps a lot! Okay so let's hope that SAP will provide an update on the iFlow in 2105!! Did you already try to implement the Extensibility of the current iFlow and make use of version 4 of the Contact API? Kind regards Tobias No, we didn't, we decided to not spend the development effort and to just wait for the upgrade (thus postponing our go-live with the relationships feature). 
We did have other integrations not running on CPI too though, that did have to change from using older versions to V4. Tobias Schneider : the update of the CPI package for the external landing page integration is available now! There's a new box in Configuration where you can set the Version to 4. Hi Joyca, that's great, thank you so much for the update! 🙂 Joyca Vervinckt did you already try to use it with version 4? In our case we receive an error message regarding the CorporateAccountName field. Checking the API documentation I saw that it is not included in the ContactOriginData entity set anymore. In addition to that it looks like there are fields missing like email address. Do you know the reason for that? For me it makes no sense and I think I will create an incident for that. We need the CorporateAccountName to create B2B leads in C4C. BR Tobias Hi Tobias Schneider Yes I did. I'm not using the CorporateAccountName field though. The forms using that interface flow are from a B2C oriented area. Regarding email: that does come through in my test. One of the main differences from V2 to V4 is that Email is no longer part of the ContactOriginData, but really sent separately as a PUT AdditionalIDs part. Which is more logical imo, as email and mobile nrs were always "the special fields" that got transformed into an additional ID anyways. On there's a chapter dedicated to "switching to Version 4" with some details. Hi Joyca, thanks for providing the link. I agree that the part regarding email as additional ID makes sense. But I still can't find any logical explanation why CorporateAccountName has gone in V4. Hi Josef Ehbauer could you maybe briefly explain what technically happens if we switch on the Contact Projections? Actually we integrated C4C and MC using SOA-based integration for the first time and already get Business Partners together with relationships. Some data (e.g. 
departments and functions) are related to the Relationship between the two Business Partners in MC (Contact and Account), which I could see in the Inspect Contact app (Relationship Category BUR001). So, just to make sure: To activate the Contact Projections is only necessary if the same Contact is related to more than one Account in C4C. Is that correct? Or is it mandatory to activate that functionality for any other reasons? Sorry to bother you with such basic questions. BR Tobias Hi Tobias, SOAP-based integration between Sales Cloud and Marketing can be used independently of the Contact-to-Account relationship functionality, but it is a prerequisite for this capability. The new way of integration brings some advantages in terms of robustness and extensibility and is also a prerequisite for the connection of multiple Sales Cloud systems to Marketing Cloud. If your system provides data such that one contact is related to a unique account only, it is not necessary to activate the switch for contact-to-account relationships. In this case the basic B2B capabilities of SAP Marketing Cloud should be sufficient for the customer’s processes. Best Regards, Josef Hi Josef, thank you so much again for your reply. That's great and makes it a little bit easier, because in this case for example the API_MKT_CONTACT_SRV v2 is still working (e.g. for the Landing Page Data Integration) 🙂 So, if the Customer at a later point in time decides to use Contact-To-Account-Relationship (because maybe in the future he will have that scenario, that a contact will be related to multiple accounts), then we could "easily" switch on that functionality because using the SOA-based integration, MC is already "prepared" for that scenario, right? BR Tobias Absolutely! BR, Josef Hello Josef Ehbauer thanks for this blog. And I see this functionality "B2B Contact-to-Account Relationships" is currently being delivered based on integration from SAP Sales Cloud (C4C) to SAP Marketing Cloud.
Are we considering integration from SAP S4 (on-premise/Cloud) to the SAP Marketing Cloud to support this feature? I could not see any planned innovation for this. And I have a few customers who have S4 as the leading system and are considering this feature. Since S4-based integration is not being planned until 2021, we are considering a custom-built integration to bring this feature into SAP Marketing Cloud from S4. I believe we can still use the same SOAP APIs on the SAP Marketing Cloud to generate this feature. Any thoughts (suggestions) from you? Cheers Kamesh Hi Kamesh, in order to integrate S/4HANA we recommend using SAP Master Data Service for Business Partners. For details refer to the corresponding chapter in our integration guide. Best Regards, Josef Hi Josef, I posted a question regarding ECC integration that needs the Contact-to-Account relationship. Hope you can advise. Thanks!
This quick code tip shows how to check whether a file exists in Java. The Java code example used in the tutorial uses the NIO API’s java.nio.file.Path and java.nio.file.Files classes to determine whether a file at the given path in the file system exists or not.

Java NIO-based code example to check for existence of a file

package com.javabrahman.corejava;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
public class FileExists {
    public static void main(String args[]) {
        //Example of file which doesn't exist
        Path filePath_1 = Paths.get("C:\\xyz.txt");
        boolean fileExists_1 = Files.exists(filePath_1);
        System.out.println("File 'xyz.txt' exists: " + fileExists_1);
        //Example of file which exists
        Path filePath_2 = Paths.get("C:\\JavaBrahman\\LEVEL1\\file1.txt");
        boolean fileExists_2 = Files.exists(filePath_2);
        System.out.println("File 'file1.txt' exists: " + fileExists_2);
    }
}

OUTPUT of the above code

File 'xyz.txt' exists: false
File 'file1.txt' exists: true

Explanation of the code
- The above class FileExists checks for the existence of 2 files – xyz.txt (which doesn’t exist) and file1.txt (which exists) at their respective paths mentioned in the code.
- The check for file existence is done in 2 steps (which need to be repeated for each file check). These steps are –
  - Create an instance of Path using the Paths.get() method with the complete file path passed as a String to it.
  - Use the Files.exists() method with the Path instance created in step 1 passed to it as a parameter. Files.exists() returns a boolean value indicating the existence of the file – true if the file exists and false if it doesn’t.
- In the above example, the output is as expected – the check for xyz.txt returns false as the file does not exist, while the returned value is true for file1.txt as it exists at the given file path.
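The hard-coded C:\ paths above only work on one particular machine. The same Files.exists() call can be exercised against a temporary file so the result is deterministic anywhere; the class and method names below are our own illustration, not part of the original tutorial:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FileExistsDemo {
    // Thin wrapper over Files.exists() so the check is reusable
    public static boolean exists(String path) {
        return Files.exists(Paths.get(path));
    }

    public static void main(String[] args) throws IOException {
        // Create a real temporary file so the check is deterministic
        Path tmp = Files.createTempFile("demo", ".txt");
        System.out.println("temp file exists: " + exists(tmp.toString())); // true
        Files.delete(tmp);
        System.out.println("after delete: " + exists(tmp.toString()));     // false
    }
}
```

Note that Files.exists() returns false both for a missing file and for a path the JVM cannot access, so a false result does not always mean the file is absent.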
I am able to run a Python3 script at system startup on my SBC4. I added the following line in the /etc/rc.local file, right before the exit(0) line at the end:

(sleep 60 ; echo $(date) >> lastboot; /usr/bin/python3 /root/testBoot.py)&

Without going into a nauseating amount of detail, this line (consisting of 3 commands) does run correctly. It sleeps for 60 seconds to make sure the system completes the boot process, adds the current timedate stamp to a file called lastboot, then runs the Python3 script "testBoot.py". "testBoot.py" does two small tasks: it prints an output to the screen and prints an output to a file; this way, I can make sure the python script is running by the presence of the file it created. My problem is not being able to get any of the output from my Python script to go to the console. I have tried several different methods of sending output from the script, but none have worked. I've tried:

print("test")
import sys; sys.stdout.write("Hello")
import os; os.system("echo test")
import os; os.system("echo 123456 > /dev/tty")

I'm not a total noob, but I am only a novice with regards to standard outputs on linux. However, the above method (Python3 script in rc.local) prints to the console correctly on a Raspberry Pi. I know the Phidget Web GUI will let me run a script on startup, but that stdout is redirected to the web page view, and I need to see output on the terminal. Any help is greatly appreciated. -Terry
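One common explanation (not confirmed for the SBC4 specifically) is that commands in /etc/rc.local run with stdout attached to wherever init pointed it — often nowhere visible — rather than to the login console, so print() output never reaches the screen. Redirecting the script's output to an explicit console device usually helps. A sketch of the same pipeline, with the device parameterized because the right target (/dev/tty1, /dev/console, or a serial port such as /dev/ttyS0) depends on the board; for this demo it defaults to a plain file so it runs anywhere:

```shell
# Same shape as the rc.local line, but stdout/stderr go to an explicit
# console device. CONSOLE=/dev/tty1 is an assumption -- use whichever
# device your screen or serial session is actually attached to.
CONSOLE="${CONSOLE:-/tmp/console_demo.log}"
(sleep 1; date >> /tmp/lastboot_demo; \
  python3 -c 'print("hello from testBoot")' > "$CONSOLE" 2>&1) &
wait
cat "$CONSOLE"
```

On the real board you would replace the one-liner with `/usr/bin/python3 /root/testBoot.py > /dev/tty1 2>&1` inside the rc.local subshell.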
Monkeying with Clojure’s deftest

Say you have a namespace a that needs to be tested:

(ns a)
(defn ^{:private true} foo [] 42)

Using Clojure’s clojure.test libs you might think it would be as simple as the following:

(ns b
  (:use [clojure.test :only [deftest is]]))

(deftest test-foo
  (is (= 42 (a/foo))))
; java.lang.IllegalStateException: var: #'a/foo is not public

Whoops! That function a/foo is private and not readily available to call. Now there are a number of ways to get at that function to test it, but the shortest path is the following:

(deftest test-foo
  (is (= 42 (#'a/foo))))

(test-ns 'b)
; Testing b
;=> {:test 1, :pass 1, :fail 0, :error 0}

So using the #' reader macro we can resolve the a/foo var directly, but what if there were a way to declare a test specifically for testing private vars:

(defmacro defprivatetest [name [local-name private-name] & body]
  (when *load-tests*
    `(def ~(vary-meta name assoc
             :test `(let [~local-name (resolve '~private-name)]
                      (fn [] ~@body)))
       (fn [] (test-var (var ~name))))))

This is almost exactly like the implementation of clojure.test/deftest except that it allows for a single named binding that is resolved to a function (private or not) in another namespace, like so:

(defprivatetest test-a [foo a/foo]
  (is (= 42 (foo))))

(test-ns 'b)
;=> {:test 1, :pass 1, :fail 0, :error 0}

Granted this is a lot of work just to avoid #', but this is much more flexible in the face of change. I leave it to the reader to change the implementation to allow multiple bindings. :f This post is much ado about nothing, but I thought I’d push it into the ether while everyone else is distracted by Clojure Conj. ↩

3 Comments, Comment or Ping

icemaze Why don’t you just test in the same namespace? If you worry about namespace pollution you can just write your tests in a different file that doesn’t get loaded in production. Is there some hidden benefit I’m missing? Sep 3rd, 2010

fogus I do not like to keep the tests with the code under test.
This is my psychosis. The part that I think is not clarified is that when testing, the namespace-private Vars (in this case functions) are not immediately accessible from another namespace (the test ns for example). I like to change my code a lot, but hate to go and change my tests, so I’m looking for a way to (effectively) be lazy here. Thanks for posting. :f Sep 3rd, 2010 cgrand why not simply implement a refer-privates or refer-all fn? (.refer ns ’x #’foo/x) works like a charm even when x is private. Sep 30th, 2010 Reply to “Monkeying with Clojure’s deftest”
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here

Propagate AStar.cs

private void Propagate(Track TrackToPropagate)
{
    int random = new Random().Next(2); // Get a random int, either 0 or 1
    int iStart = (random == 0) ? 0 : TrackToPropagate.EndNode.OutgoingArcs.Count - 1;
    int iEnd   = (random == 0) ? TrackToPropagate.EndNode.OutgoingArcs.Count : -1;
    int iDelta = (random == 0) ? 1 : -1;
    //foreach (Arc A in TrackToPropagate.EndNode.OutgoingArcs)
    for (int i = iStart; i != iEnd; i += iDelta)
    {
        Arc A = (Arc)TrackToPropagate.EndNode.OutgoingArcs[i];
        // Leave the rest of the code as-is...
    }
}

//
// This source file(s) may be redistributed by any means PROVIDING they
// are not sold for profit without the authors expressed written consent,
// and providing that this notice and the authors name and all copyright
// notices remain intact.
// THIS SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED. USE IT AT YOUR OWN RISK. THE AUTHOR ACCEPTS NO
// LIABILITY FOR ANY DATA DAMAGE/LOSS THAT THIS PRODUCT MAY CAUSE.
//-----------------------------------------------------------------------
#include<iostream.h>
#include<math.h>
#include<stdio.h>
#include<conio.h>
#include<stdlib.h>

// these constants save calculation time
const unsigned long power10[10] = {1,10,100,1000,10000,100000,1000000,
                                   10000000,100000000,1000000000};

// define a Boolean type
typedef enum BOOL {TRUE, FALSE};

typedef struct _NETWORK
{
    unsigned char nodes;    // the 2 connected nodes
    unsigned long arcFlow;
    unsigned long arcFloworiginal;
} NETWORK;

// function prototypes
double permutation(int,int);
double numPossiblePaths(char);
unsigned long padPath(unsigned long,char);
unsigned long convertBase(unsigned long, unsigned char);

/*
 * This function returns the number of possible paths using
 * permutation, not including restrictions posed on arcs and nodes
 *----------------------------------------------------------*/
double numPossiblePaths(char nodes)
{
    double sum=0;
    char i;

    for (i=2; i<=(nodes-2); i++)
        sum += permutation(nodes-2,i);

    // nP0 and nP1 are always 1 and n, so save time by doing this
    return (sum+nodes-1); // or 1+(nodes-2), whatever
}

/*
 * This function returns nPk (permutation)
 *----------------------------------------------------------*/
double permutation(int n, int k)
{
    double sum=0;
    int i;

    if (k>n || n<2)
        return -1; // return with error if k>n
    else
    {
        // do n!
        for (i=n; i>1; i--)
            sum += log(i);
        // do (n-k)!
        for (i=(n-k); i>1; i--)
            sum -= log(i);
    }
    // return nPk
    return exp(sum);
}

/*
 * This function pads a path with the start and end nodes
 *----------------------------------------------------------*/
unsigned long padPath(unsigned long num, char nodes)
{
    unsigned char ctr=10;

    while ((num / power10[--ctr]) == 0);

    num = num + nodes * power10[ctr+1];
    num = (num*10)+1;

    return num;
}

/*
 * This function takes in a decimal number and a base to convert it to
 *----------------------------------------------------------*/
unsigned long convertBase(unsigned long num, unsigned char numBase)
{
    int ctr=0;
    unsigned long convertedNum=0;

    while (num!=0) // dont bother converting when quotient is 0
    {
        convertedNum = convertedNum+((num % numBase) * power10[ctr]);
        num /= numBase; // go down 1 digit in old base
        ctr++;          // go up 1 digit in new base
    }

    return convertedNum;
}

/*==========================================================*/
int main(void)
{
    char nodes=6;                        // number of nodes in network
    unsigned long countTo=0,             // number to count in decimal
                  ctr,ctr2,ctr3,n,n2,n3, // general counters and holders
                  path,                  // a path
                  networkFlow=0,         // initially network flow is 0
                  smallestArcCapacity;   // Duh.
    NETWORK network[34] = {1};           // holds network information

    // this value changes along the process
    BOOL nodePath,validPath=TRUE;        // of path selection, and determines whether
                                         // a path makes it into the path array.
    network[0].nodes  = 21;  network[0].arcFlow  = 2;
    network[1].nodes  = 31;  network[1].arcFlow  = 6;
    network[2].nodes  = 41;  network[2].arcFlow  = 3;
    network[3].nodes  = 12;  network[3].arcFlow  = 0;
    network[4].nodes  = 42;  network[4].arcFlow  = 1;
    network[5].nodes  = 52;  network[5].arcFlow  = 4;
    network[6].nodes  = 13;  network[6].arcFlow  = 0;
    network[7].nodes  = 43;  network[7].arcFlow  = 3;
    network[8].nodes  = 63;  network[8].arcFlow  = 2;
    network[9].nodes  = 14;  network[9].arcFlow  = 0;
    network[10].nodes = 24;  network[10].arcFlow = 1;
    network[11].nodes = 34;  network[11].arcFlow = 3;
    network[12].nodes = 54;  network[12].arcFlow = 1;
    network[13].nodes = 64;  network[13].arcFlow = 3;
    network[14].nodes = 25;  network[14].arcFlow = 4;
    network[15].nodes = 45;  network[15].arcFlow = 1;
    network[16].nodes = 65;  network[16].arcFlow = 6;
    network[17].nodes = 36;  network[17].arcFlow = 0;
    network[18].nodes = 46;  network[18].arcFlow = 0;
    network[19].nodes = 56;  network[19].arcFlow = 0;

    // get the number to count to
    countTo = (unsigned long) pow(nodes,nodes-2);

    // start the path building
    for (ctr=1; ctr<countTo; ctr++)
    {
        validPath = TRUE; // path is valid so far
        // convert number to path by converting to base NODES number
        n = path = convertBase(ctr, nodes); // n is temp var

        do
        {
            n3 = n % 10;  // number that is checked for doubles
            n2 = path;    // whole path is checked each time
            ctr2 = 0;     // used to count doubles, none yet

            do
            {
                if ((n2%10)==n3) // if digit = number we're looking for,
                    ctr2++;      // found double, so increment
                n2 /= 10;        // go down 1 digit
            } while (n2!=0);

            // check if there were doubles and if number is less than 2
            if (ctr2>1 || (n%10)<=1)
                validPath = FALSE;
            n /= 10; // go down 1 digit
        } while (n!=0 && validPath!=FALSE); // stop converting when quotient is 0

        // add the start and end nodes to path. e.g. if path is 423 and
        // there are 6 nodes, the path becomes 64231
        path = padPath(path,nodes);

        // check if path is proper by checking the restrictions given
        // by the user
        if (validPath==TRUE)
        {
            ctr2 = 0;
            smallestArcCapacity = 255;
            nodePath = TRUE; // variable used to check path at each node

            while ((n=(path % power10[ctr2+2] / power10[ctr2]))>9 && (nodePath!=FALSE))
            {
                nodePath = FALSE; // check each path with the information
                for (ctr3=0; ctr3<20; ctr3++) // given by user
                    // check if nodes correct and arc flow capacity>0
                    if ((n == network[ctr3].nodes) && (network[ctr3].arcFlow != 0))
                    {
                        nodePath = TRUE; // in the network array, path is bad

                        if (network[ctr3].arcFlow < smallestArcCapacity)
                            smallestArcCapacity = network[ctr3].arcFlow;
                    }
                ctr2++; // as soon as path is known to be bad, loop quits
            }
            validPath = nodePath; // result of path, bad or good
        }

        // if path is valid, decrease all flows on path by smallestArcFlow
        // and increase in opposite direction
        if (validPath==TRUE)
        {
            ctr2 = 0;

            while ((n=(path % power10[ctr2+2]) / power10[ctr2])>9)
            {
                ctr3 = 0;
                while (n != network[ctr3++].nodes);
                network[ctr3-1].arcFlow -= smallestArcCapacity;

                ctr3 = 0;
                while ((((n%10)*10)+(n/10)) != network[ctr3++].nodes);
                network[ctr3-1].arcFlow += smallestArcCapacity;
                ctr2++; // as soon as path is known to be bad, loop quits
            }
            networkFlow += smallestArcCapacity; // Duh.
        }

        if (validPath==TRUE)
            cout << path << " " << smallestArcCapacity << endl;
    }
    cout << networkFlow;
    return 0;
}

Graph G = new Graph();
Node N1 = G.AddNode(0,0,0);
Node N2 = G.AddNode(5,0,0);
Node N3 = G.AddNode(5,5,0);
Node N4 = G.AddNode(5,5,5);
G.AddArc(N1,N2,1);
G.AddArc(N2,N3,1);
G.AddArc(N3,N4,1);
G.AddArc(N1,N3,1);

Graph G = new Graph();
Node Start = G.AddNode(0,0,0);
G.AddNode(5,0,0);
G.AddNode(5,5,0);
Node Ziel = G.AddNode(5,5,5);

G.AddArc(new Node(0,0,0),new Node(5,0,0),1);
G.AddArc(new Node(5,0,0),new Node(5,5,0),1);
.
.
.
G.AddArc(new Node(5,5,0),new Node(5,5,5),1);
G.AddArc(new Node(0,0,0),new Node(5,5,0),1);
The C++ preprocessor is probably the most ubiquitous text transformation tool out there. However, it has its drawbacks. For one, it does not let you iterate. I often wish for a #for(...) directive to complement the #if(..). This is specially useful for writing repetitive code that follows a pattern. Fortunately, there is a way around this. I discovered this when I accidentally #included a file from within the same file. The compiler complained of infinite recursion. Voila! The preprocessor supports recursive includes! The only other piece to this puzzle is an index variable which can be modified. Here is where it gets ugly. Preprocessor variables can not be incremented/decremented using the usual arithmetic operators. In other words, the following is illegal:

#define ITER 0
ITER = ITER+1

However, a similar result can be obtained using brute force:

#if(ITER == 0)
#undef ITER
#define ITER 1
#elif(ITER == 1)
#undef ITER
#define ITER 2
#endif

This can be continued for as many iterations as will be needed. The source code for this is hosted on github: Basically, the code defines an iterator that is incremented as discussed above. The file loop.h keeps including itself until the iterator reaches a specified value. Make sure to check out the code sample for details.

uniform sampler2D Tex_01 ;
uniform sampler2D Tex_02 ;
uniform sampler2D Tex_03 ;
uniform sampler2D Tex_04 ;

It is much easier to automate this with preprocessor iterations. To demonstrate how preprocessor iterations work, we create code to print numbers less than 10. The looping is done at compile time, and there is no runtime overhead.
int main() {
#include "loopstart.h"
#define LOOP_END 10
#define MACRO(x) printf("%d\n", x);
#include "loop.h"
    return 0;
}

Basically, we have to do the following: set LOOP_END to the number of iterations, define MACRO(x) as the code to emit on each iteration, and include loop.h. We could, of course, add more bells and whistles to this, like a starting point for the iterator other than the default 0. This is pretty easy to do once you get a hang of the general concept, and is left as an exercise for the reader.
Odoo Help This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.

[SOLVED] How to return a tree in a function with several fields?

I have made a button which calls a function. This function shows a popup with all the records of a model (in tree view mode). When I click on the button, I see the tree view with all the records of the model I want, but only one of their fields (the name). I would like to show more properties. Does anyone know what I have to do? Here my code (the return of the function):

return {
    'view_type': 'tree',
    'view_mode': 'tree',
    'res_model': 'my.model',
    'type': 'ir.actions.act_window',
    'target': 'new',
    'flags': {'tree': {'action_buttons': False}}
}

Thank you in advance. Hello, Hint: By default, if no view is specified at action return time (which you did in your example), then OpenERP itself creates a tree view and returns only the "Name" column. In order to achieve your purpose, you need to find the tree view and form view and add those into the return dictionary.
For example see below code, ( See Bold Lines & you will understand what I am trying to say )

def button_click_method(self, cr, uid, ids, context=None):
    self_obj = self.browse(cr, uid, ids, context=context)[0]
    product_ids = []
    if self_obj.product_ids:
        product_ids = [int(x) for x in self_obj.product_ids.split(', ')]
    data_obj = self.pool.get('ir.model.data')
    data_id = data_obj._get_id(cr, uid, 'product', 'stone_tree_view_ept')
    view_id = False
    if data_id:
        view_id = data_obj.browse(cr, uid, data_id, context=context).res_id
    form_data_id = data_obj._get_id(cr, uid, 'product', 'stone_form_view_ept')
    if form_data_id:
        form_view_id = data_obj.browse(cr, uid, form_data_id, context=context).res_id
    context.update({'active_ids': [], 'no_complete_name': 1})
    return {
        'name': _('Stones'),
        'view_type': 'form',
        'res_model': 'product.product',
        'view_id': False,
        'domain': "[('id', 'in', %s)]" % (product_ids),
        'context': context,
        'views': [(view_id, 'tree'), (form_view_id, 'form')],
        'type': 'ir.actions.act_window',
        'target': 'current',
        'nodestroy': True
    }

If you have not created Tree and Form views then please create them first. Hope this answer will help you, Thank you Hiren Vora. I created the view (id="my_model_view", name="my.model.tree"). I copied your code and changed the necessary lines, but I got an error. What do I have to put in data_id = data_obj._get_id(cr, uid, 'product', 'stone_tree_view_ept')? I mean instead of 'product' (my model?) and 'stone_tree_view_ept' (the id of the view?) Oooohhh!!! I got it!! It was the name of the module! Thank you so much! You helped me a lot! I am happy that your problem is solved. After all you learned something new that matters!
Google Ports Capsicum To Linux, and Other End-of-Year Capsicum News 71 Posted by Unknown Lamer from the eros-lives-on dept. Who came up with the name Capsicum? (Score:3, Funny) Its a real mouthful! Re: (Score:1) Re: (Score:1) The word reminds me of an oddly-named, (in my mind), quiet little village on the coast of Holland. [wikipedia.org] Re: (Score:1) OMG! (Score:5, Insightful) Re: (Score:3, Insightful) Its got a really hip name so probably not. With a name like that its probably more of a "concept", "architectural attitude" or something similar. Re: (Score:2) API for sandboxing processes. Re:OMG! (Score:5, Informative) Re: (Score:1)
If I understood it correctly, using capabilities means that you cannot perform an operation unless you have some kind of token (such as a file descriptor) that represents the resource and the type of access you're allowed on that resource. So instead of d Re: (Score:1) Well, it's still dropping features in Capsicum, at least AFAIK. (I have a friend who's doing his PhD dissertation on something related to Capsicum-like systems.) Capsicum just allows you to drop the ability to drop operations at a finer granularity than would otherwise be possible. Re: (Score:2) What I understood from the presentation is that in Capsicum your process starts to run with traditional access control. You acquire the capabilities you'll need and then you switch into a different mode in which resources can only be accessed using capabilities. In other words, you drop all possible operations at once, not a selected subset. Except when using capabilities you acquired in advance or were given by a different (sub-)process, you can do nothing at all with resources. The word "capability" is a b Re: (Score:1) I still say that it has to do with what level you want to think of it. If you look at the mechanism by which a policy is achieved, yes, a capability-based system is quite different from the typical Unix model. But in both cases, you have a program that's saying "if I get told to do something I shouldn't (e.g. by being compromised), I want to make sure I can't do something -- I know, I'll stop my future self from doing it by removing the ability." In the traditional Unix model you do that by changing users, i Re: (Score:2) One difference is that if you drop privileges, you cannot regain them, while capabilities can be passed from one process to another. For example, if a web site asks for access to your web cam, it would be good if the user can allow or deny that on a case-by-case basis. 
With capabilities, the capability to access the web cam can be transferred to the tab sandbox that asked for it. With privilege separation, you'd either have to give all sandboxes the ability to access the web cam, thus risking exploited tabs Re: (Score:2) So instead of dropping the privilege to write files at all, which would not be feasible in many applications, you would have a file descriptor to a directory that allows writing files under that directory. That's an interesting choice of example, since I've often seen the *opposite* used to demonstrate capability systems; specifically that we can keep a filesystem capability in our shell and do everything else with stdio and pipes. For example, rather than giving a filesystem capability to "cp", we can get rid of "cp" altogether and use "cat file2". Since "cat" is only operating on stdio, we don't need to give it any filesystem capability. Re: (Score:1) Root access is more like a "privilege" than a "capability". It lets you do whatever you want in a global namespace. There's potential for confusion here, because there are things that Linux has called "capabilities" for a long time, but they are not "capabilities" in the (even older) sense that Capsicum uses. Linux process "capabilities" like CAP_CHOWN or CAP_NET_RAW are not "capabilities" in the sense Capsicum sense. They're just basically limited subsets of root privilege. Capsicum's use of the word origina: (Score:2) Much of this sounds familiar. Fine grained rights on file handles sounds awfully like SELinux, which is itself merely an implementation of access control. Though it sounds like Capiscum has left it up to the app to decide what rights it needs, whereas SELinux maintains a big file of extended rights, basically a big extension to the old UNIX security model of "rwx" for owner, group, and world. 
Last time I tried SELinux, many years ago, I found I was always having to expand privileges so that utilities and Re: (Score:1) An example of this is a web server. Say you need to load config files once when the web server is starting up. You give the web server file system access at the OS level, but then the web server loads the files then promptly drops permissions. This means the app willfully give policy updates have fixed SELinux (Score:2) > Last time I tried SELinux, many years ago, I found I was always having to expand privileges so that utilities and apps could do their jobs. > We finally said the heck with it, and gave pretty much every permission to every program. Other people had the same problem and reported exactly what the log said, so the distro default policies have been updated. That pretty much solved the problem, so SELinux is ready for you to give it another try. Also, there exist "targeted" policies which restrict only th ps Capsicum prevents the app from being exploited (Score:2) > but it doesn't sound too secure, leaving it up to the app to police itself. > We've seen how well that didn't work in places like Wall Street. Is Capiscum's real security the sandboxing? As a simple example, many web server exploits write files to /var/tmp and then to /sbin. It's not that Apache or Nginx is malicious, Apache is being tricked into writing those files. With Capsicum, Apache would, on start-up, declare to the OS "don't let me write any files outside of cgi-data/." Re: (Score:2) cgi-data? Is this a time-traveling web server from the 90s? Snark aside, it's also advantageous to prevent Apache/nginx/whatever from being able to run/interpret/process PHP/Perl/Python code outside of a carefully restricted set. Re: (Score:1) That's actually a really neat idea. Re: (Score:3) There are a few big differences. 
The first is that SELinux is really an extension of the ACL approach: you have a big matrix with things that can do stuff, and things that can have stuff done to them, and a bit in each position indicating whether that combination is allowed. SELinux (and most ACL implementations) compress this, because a matrix with one row for every process-user pair and one column for every filesystem or kernel object would be huge. The goal of things like SELinux is for system adminis

Re:OMG! (Score:4, Funny)

Can someone explain what exactly Capsicum is?

Capsicum [wikipedia.org]: ... The piquant (spicy) variety are commonly called chili peppers, or simply "chilies". Use milk to calm the burn, not water or beer.

Re: (Score:2)

No, you'll need to visit an actual tech oriented site in order to learn something.

This sounds so... (Score:1)

SPICY!!!@!

Capsi . . . what? (Score:3)

Apparently, it has nothing to do with the issue some geeks have with their caps lock key after surfing the web one handed . . .

Re: (Score:3)

In some countries (Australia, NZ and maybe more) bell peppers are called capsicum. You do know some plant genera: Cannabis, Nicotiana, Thymus, Narcissus, Olea, Rosa, Tulipa, Prunus, Pinus, Crocus, Iris, Brassica, Coffea, Eucalyptus. And there's always Amorphophallus titanum.

Great approach for mitigating web-borne threats (Score:2)

Google is funding this (both the direct research [cam.ac.uk] on FreeBSD and the port to the Linux kernel) because it addresses an aspect one level above the browser. Google Chrome would then be quite tightly sandboxed. This sure beats my method of running browsers as another user (I symlink ~/Downloads to my web user's version of that area and move things out of it quickly), especially since my method wouldn't do anything against actual privilege escalation (to root). This should also help fight web server exploits

Re: (Score:3)

> This sure beats my method of running browsers as another user

No, not really.
It's just that modern OSs weren't designed for the damn security that hardware gives them, and they're too general purpose to utilize these hardware features properly. For instance: instead of memory barriers and capability-based security I've experimented with hypervisor-mode sandboxing in some of my toy OSs. Every application thinks it's in its own OS, so instead of constantly verifying capabilities I can pre-allocate permitted resources and be fucking done. I could also mention that x86 h

Re: (Score:2)

> Should be great ammunition for advocating *Nix over Windows 8.

Yes, because windows doesn't have the concept of User Privileges. They aren't as fine grained or powerful, I'll grant you, but they could easily be made to be so if Microsoft (or customers) cared enough. The problem is _they don't_, because the real-world security improvements you'll see from something like capsicum are minimal.

Re: (Score:2)

Looks interesting - I have a few questions (Score:2):

Re: Answered Capsicum questions (Score:1):

Re: (Score:2)

Does Capsicum only work at the process level? I can't have a more privileged thread that is still uncontained (i.e. still able to perform a blocked syscall) while other threads are contained?

Yes. There's no point sandboxing a thread, because if you compromise one thread you can write over every other thread's memory, and trivially do ROP tricks to make another thread make the system calls on your behalf.

How do you envision codebases supporting Capsicum in a way that leaves them still portable to platforms where Capsicum is not available? Is it going to be a case of #ifdefs all the way down?

The Capsicum APIs can mostly be #ifdef'd away. The things that restrict rights on a file descriptor and the cap_enter() syscall can just be turned into no-ops.
Would it be possible to make a sandbox program that uses Capsicum to in turn sandbox another (Capsicum-unaware) program that it goes on to run, or is it likely going to be too restrictive for the second program?

I don't think I understand this question.

Re: (Score:1)

The last question was basically "Can I use Capsicum to create a program that in turn isolates other arbitrary programs in a meaningful way (e.g. in the style of sandboxie)"?

Re: (Score:2)

It's probably more interesting to in

What problem does capsicum fix?

Re: (Score:3)

Re:

Isn't the whole point of dropping privileges (whether using something fancy like Capsicum, or simply switching to a non-root user after initialisation) to mitigate what happens after a process gets successfully attacked? Whether a zero-day vuln or an old vuln that hasn't been patched, if the attacker can't do anything important once they get in, doesn't that help?

Re: (Score:2)
http://tech.slashdot.org/story/14/01/08/1635210/google-ports-capsicum-to-linux-and-other-end-of-year-capsicum-news?sdsrc=prev
I've always liked this word, programatically, and from time to time I find myself using it when searching Google for a really specific problem. Often, doing something "programatically" leads to an elegant solution to some real-world problem or saves you some time automating a task.

Last week I was set up to develop a product that had dealing with GitHub repos as one of its challenges. I had already worked with GitHub's API before, but mostly for GitHub Actions and some CI stuff, and this was a bit different since, at its core, it was Git, and Git is not simple. Basically, I had to create a repository if it didn't exist and push n files into it.

At first, I started testing my guesses of how to do it using the GitHub API directly and a personal access token as an auth method. After that, I started coding the solution using GitHub's Octokit/REST, a cool wrapper for its API that supports TypeScript and will help you a lot. One thing I have to mention is that this was one of the first times that using TypeScript really sped up my work. After Step 2 I didn't stop for testing and, after finishing, I was pretty sure I would've missed something, but the solution worked seamlessly on the first try.

So, one of my first concerns was: what do I have to do to mimic a git add . && git commit -m "My commit" && git push using the API? After a bit of Googling, I wrote on my board:

Note: Most of the referencing here uses SHAs, hashes of the objects within Git. That "code" that every commit has, and that you see on git log, is the commit's SHA.

1. Get the ref of the branch you want to push files to. You'll want the SHA of the last commit (doc)
2. With the commit's SHA, fetch that commit's tree, which is the data structure that actually holds the files. You'll want its SHA too (doc)
3. After this, you will need the list of files you want to upload, including the files' paths and their contents. With those you will create a blob for each file.
In my case, I was dealing with .mds and .yamls, so I've used utf8 encoding and sent each file's content as plain text that I got from fs.readFile(path, 'utf8'), but there are other options for working with binaries, submodules, empty directories and symlinks. Oh, also: don't worry about files inside directories; just send the path for each file on the next step and GitHub will organize them accordingly (that was one of the things I was afraid I would have to deal with manually) (doc)

4. With all the SHAs from the blobs created in Step 3, you'll create the new tree for the repo with those files. It's here that you link the blobs to the paths, and all of them together to the tree. There are some modes you can read about in the docs, but I've only used 100644 for regular files. You'll also have to set the SHA retrieved in Step 2 as the parent tree for the one you're creating (doc)

5. With the tree (and its SHA) containing all the files, you'll need to create a new commit pointing to that tree. This will be the commit holding all of your changes to the repo. It will also have the commit SHA from Step 1 as its parent commit. Notice this is a common pattern in Git. It's here that you set the commit message too (doc)

6. And, finally, you set the branch ref to point to the commit created in the last step, using the commit's returned SHA (doc)

Done!

One of the problems I also had was that in some cases I had to create the repository, and there was no "initial" commit for me to work with. Then I found that you can send an auto_init param when creating a repository, and GitHub will automatically create a simple README.md initializing the repo. Dodged a bullet there.

So, let's go to the code.
```typescript
import Octokit from '@octokit/rest'
import glob from 'globby'
import path from 'path'
import { readFile } from 'fs-extra'

const main = async () => {
  // There are other ways to authenticate, check
  const octo = new Octokit({
    auth: process.env.PERSONAL_ACCESS_TOKEN,
  })
  // For this, I was working on an organization's repos, but it works for common repos also (replace org with owner)
  const ORGANIZATION = `my-organization`
  const REPO = `my-repo`
  const repos = await octo.repos.listForOrg({
    org: ORGANIZATION,
  })
  if (!repos.data.map((repo: Octokit.ReposListForOrgResponseItem) => repo.name).includes(REPO)) {
    await createRepo(octo, ORGANIZATION, REPO)
  }
  /**
   * my-local-folder has files on its root, and subdirectories with files
   */
  await uploadToRepo(octo, `./my-local-folder`, ORGANIZATION, REPO)
}
main()

const createRepo = async (octo: Octokit, org: string, name: string) => {
  await octo.repos.createInOrg({ org, name, auto_init: true })
}

const uploadToRepo = async (
  octo: Octokit,
  coursePath: string,
  org: string,
  repo: string,
  branch: string = `master`
) => {
  // gets the commit's AND its tree's SHA
  const currentCommit = await getCurrentCommit(octo, org, repo, branch)
  const filesPaths = await glob(coursePath)
  const filesBlobs = await Promise.all(filesPaths.map(createBlobForFile(octo, org, repo)))
  const pathsForBlobs = filesPaths.map(fullPath => path.relative(coursePath, fullPath))
  const newTree = await createNewTree(
    octo,
    org,
    repo,
    filesBlobs,
    pathsForBlobs,
    currentCommit.treeSha
  )
  const commitMessage = `My commit message`
  const newCommit = await createNewCommit(
    octo,
    org,
    repo,
    commitMessage,
    newTree.sha,
    currentCommit.commitSha
  )
  await setBranchToCommit(octo, org, repo, branch, newCommit.sha)
}

const getCurrentCommit = async (
  octo: Octokit,
  org: string,
  repo: string,
  branch: string = 'master'
) => {
  const { data: refData } = await octo.git.getRef({
    owner: org,
    repo,
    ref: `heads/${branch}`,
  })
  const commitSha = refData.object.sha
  const { data: commitData } = await octo.git.getCommit({
    owner: org,
    repo,
    commit_sha: commitSha,
  })
  return {
    commitSha,
    treeSha: commitData.tree.sha,
  }
}

// Notice that readFile's utf8 is typed differently from Github's utf-8
const getFileAsUTF8 = (filePath: string) => readFile(filePath, 'utf8')

const createBlobForFile = (octo: Octokit, org: string, repo: string) => async (
  filePath: string
) => {
  const content = await getFileAsUTF8(filePath)
  const blobData = await octo.git.createBlob({
    owner: org,
    repo,
    content,
    encoding: 'utf-8',
  })
  return blobData.data
}

const createNewTree = async (
  octo: Octokit,
  owner: string,
  repo: string,
  blobs: Octokit.GitCreateBlobResponse[],
  paths: string[],
  parentTreeSha: string
) => {
  // My custom config. Could be taken as parameters
  const tree = blobs.map(({ sha }, index) => ({
    path: paths[index],
    mode: `100644`,
    type: `blob`,
    sha,
  })) as Octokit.GitCreateTreeParamsTree[]
  const { data } = await octo.git.createTree({
    owner,
    repo,
    tree,
    base_tree: parentTreeSha,
  })
  return data
}

const createNewCommit = async (
  octo: Octokit,
  org: string,
  repo: string,
  message: string,
  currentTreeSha: string,
  currentCommitSha: string
) =>
  (await octo.git.createCommit({
    owner: org,
    repo,
    message,
    tree: currentTreeSha,
    parents: [currentCommitSha],
  })).data

const setBranchToCommit = (
  octo: Octokit,
  org: string,
  repo: string,
  branch: string = `master`,
  commitSha: string
) =>
  octo.git.updateRef({
    owner: org,
    repo,
    ref: `heads/${branch}`,
    sha: commitSha,
  })
```

Feel free to reach me for any questions :)

Discussion (4)

Congratulations for the article, very good material, helped me finish a feature that was stuck <3

Yo, thanks for creating this! Gonna help so much in a project I'm working on. One question I had was how to upload an image that is in the project?

Hey, check the Github API around blobs. I'd guess you'd have to pass another mode (maybe binary or something) and pass the image as well.
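On the image question in the discussion above: GitHub's blob endpoint also accepts base64-encoded content, so a hedged sketch (encodeBinaryFile is a hypothetical helper, not part of the article's code) would read the file without the 'utf8' flag and encode the raw bytes:

```typescript
// Hypothetical helper: turn a binary file's Buffer into the
// { content, encoding } pair accepted by octo.git.createBlob.
// GitHub's blob API supports 'base64' as an alternative to 'utf-8'.
const encodeBinaryFile = (buffer: Buffer) => ({
  content: buffer.toString('base64'),
  encoding: 'base64' as const,
})

// Roughly, inside createBlobForFile you would then do:
//   const buffer = await readFile(filePath)   // no 'utf8' flag: raw bytes
//   await octo.git.createBlob({ owner: org, repo, ...encodeBinaryFile(buffer) })

console.log(encodeBinaryFile(Buffer.from('hi')).content) // prints "aGk="
```

The tree step is unchanged: an image blob still gets mode 100644 and its path, exactly like the text files above.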
https://dev.to/lucis/how-to-push-files-programatically-to-a-repository-using-octokit-with-typescript-1nj0
This blog post explains our reasoning and motivation behind choosing RxJava as one of the integral components in our new Java SDK.

Motivation

There are many ways to design an API, and every one has its own set of benefits (and drawbacks). In the process of designing our brand-new APIs, one of the main questions was how we expose them to the user. One question we didn't have to ask ourselves was: should it be synchronous or asynchronous? We strongly believe that asynchronous APIs are the only sane way to get the performance and scalability you very often need, and it is also much easier to go from async to sync than the other way round.

The current stable SDK (1.4.3 at the time of writing) already makes heavy use of Futures in various ways to provide async responses, and this dates back to 2006/7, when spymemcached originally introduced the concept into its API. It is well known that the Java Future interface is very limited compared to other solutions (like Scala's futures). In addition, it is also a bit trickier to code if you need to build async dataflows where one computation depends on another and you want to have the whole thing async. In recent versions we added support for listeners, which improves the situation quite a bit but is still not an ideal solution.

Over the last few years, other libraries and patterns emerged, which we followed closely. One of the mature concepts is known as Reactive Extensions, originating out of Microsoft and .NET. It is based around the idea that applications should be event-oriented and react to those events in an asynchronous way. It defines a very rich set of operators for what you can do with the data (modify, combine, filter it and so on). Recently, Netflix ported it over to Java and nicknamed it RxJava (note that while the project currently lives under the Netflix namespace, it will be moved to "io.reactivex" sooner rather than later).
It is very stable and also provides adapters for other JVM languages like Scala, Groovy and JRuby, which plays well with our plans to broaden support as well.

The Concept

The main idea of Rx revolves around Observables and their observers. If you haven't come across this concept, you can think of the Observable as the asynchronous, push-based cousin (or, more formally, the dual) of an Iterable. More specifically: where you pull values out of an Iterable, an Observable pushes values at you.

Every time data gets pushed into an Observable, every observer that is subscribed to it receives the data in its onNext() method. If the observable eventually completes (which doesn't always have to be the case), the onCompleted method is called. If an error occurs anywhere in the process, the onError method is called and the Observable is also considered to be complete. If you like grammar, the contract looks like this: onNext* (onError | onCompleted)?

Specifically note that there is no distinction between only 1 or N pieces of data being returned; this can normally be inferred from the methods that you call and how they are documented. It does not change your programming flow anyway.

Since that is a little abstract, let's look at a concrete example. On the CouchbaseCluster class, there is a method called openBucket which initializes all needed resources and then returns a Bucket instance for you to work with. Now, you can imagine that opening sockets, grabbing a config and so forth takes some time, so this is a perfect candidate. The blocking API would look like:

```java
Bucket openBucket(String name, String password);
```

How can we make it asynchronous? We need to wrap it into an Observable:

```java
Observable<Bucket> openBucket(String name, String password);
```

So we now return an observable which will eventually return with a bucket instance that we can use.
Let's add an observer:

```java
observable.subscribe(new Observer<Bucket>() {
    @Override
    public void onCompleted() {
        System.out.println("Observable done!");
    }

    @Override
    public void onError(Throwable e) {
        System.err.println("Something happened");
        e.printStackTrace();
    }

    @Override
    public void onNext(Bucket bucket) {
        System.out.println("Received bucket: " + bucket);
    }
});
```

Note that these methods are called on a different thread, so if you leave the code like this and quit your main thread afterwards, you probably won't see anything. While you could now write all the rest of your code in the onNext method, that's probably not the best way to do it. Since the bucket is something you want to open upfront, you could block on it and then proceed with the rest of your code. Every Observable can be converted into a blocking observable, which feels like an Iterable. You will find many methods to iterate over the received data in a blocking fashion, but there are also shorthand methods if you only expect one single value (which we know is the case for us). What happens here internally is that the value passed to onNext is stored for us and returned once onComplete is called. If onError is called, the throwable is thrown directly and you can catch it.

Unifying APIs

Now, what you've seen barely touches the surface. The bucket opening could very well be handled with a Future as well. Again, let's look at a concrete example. The SDK exposes a get method which returns one document. It looks like this:

```java
Observable<JsonDocument> get(String id);
```

But we also support querying (Views, N1QL), which potentially returns more than one result (or even none). Thanks to the Observable contract, we can build an API like this:

```java
Observable<ViewResult> query(ViewQuery query);
```

See? The contract implicitly says "if you pass in a query, you get N ViewResults back", since you know how an Observable needs to behave. And for a bigger picture, here are even more methods that intuitively behave the way you expect them to.
```java
<D extends Document> Observable<D> insert(D document);
<D extends Document> Observable<D> upsert(D document);
<D extends Document> Observable<D> replace(D document);
Observable<ViewResult> query(ViewQuery query);
Observable<QueryResult> query(Query query);
Observable<QueryResult> query(String query);
Observable<Boolean> flush();
```

Async my dataflow!

So far we have seen what Observables can do for us and how they help us provide cohesive, simple and yet asynchronous APIs. But Observables really shine in their composability. You can do lots of things with Observables, and we can't cover them all in this post. RxJava has very good reference documentation which can be found here, so check it out. It uses marble diagrams to show how async dataflows work, something we also want to provide as part of our documentation in the future.

Let's consider a practical example: you want to load a document from Couchbase (a full-blown JSON object with user details), but you just want to do something with the firstname further down in your code. We can use the map function to map from the JsonDocument to the firstname String. There are two important aspects here: every method chained here is also executed asynchronously, so it is not blocking the originating thread. Once the get call against Couchbase returns, we map the firstname from the JSON document and then finally we print it out. You do not need to provide a full-blown Observer; if you are only interested in the onNext value you can just implement that one. See the overloaded methods for more examples. Also note that I'm deliberately showing Java 6/7-style anonymous classes here. We also support Java 8, but more on that later.

Now, how could we extend this chain if we only want to print out the name if it starts with an "a"?
```java
bucket
    .get("user::1")
    .map(new Func1<JsonDocument, String>() {
        @Override
        public String call(JsonDocument jsonDocument) {
            return jsonDocument.content().getString("firstname");
        }
    })
    .filter(new Func1<String, Boolean>() {
        @Override
        public Boolean call(String s) {
            return s.startsWith("a");
        }
    })
    .subscribe(new Action1<String>() {
        @Override
        public void call(String firstname) {
            System.out.println(firstname);
        }
    });
```

Of course a simple if statement would suffice, but you can imagine that your filtering code could be much more complex (and probably calling something else as well).

As a final example of transforming observables, we are going to do something that comes up very often: you load a document, modify its content and then save it back into Couchbase:

```java
bucket
    .get("user::1")
    .map(new Func1<JsonDocument, JsonDocument>() {
        @Override
        public JsonDocument call(JsonDocument original) {
            original.content().put("firstname", "SomethingElse");
            return original;
        }
    })
    .flatMap(new Func1<JsonDocument, Observable<JsonDocument>>() {
        @Override
        public Observable<JsonDocument> call(JsonDocument modified) {
            return bucket.replace(modified);
        }
    })
    .subscribe();
```

flatMap behaves very much like map; the difference is that it returns an observable itself, so it is perfectly suited to mapping over asynchronous operations.

One other aspect is that with Observables, sophisticated error handling is right at your fingertips. Let's implement an example which applies a timeout of 2 seconds and, if the call does not return in time, hands back something else instead. Here a dummy document is returned (providing some reasonable defaults for our example) if the get call does not return within 2 seconds. This is just a simple example, but you can do a lot with exceptions, like retrying, branching out to other observables and so forth. Please refer to the official documentation (and Rx's documentation) for how to use them properly.
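The onNext/onCompleted/onError contract that all of the examples above rely on can be demonstrated without RxJava at all. The following is a dependency-free sketch (MiniObservable and its nested Observer interface are made-up names, not the Couchbase or RxJava API) of the push-based contract described at the start of the post:

```java
import java.util.List;

// Dependency-free illustration of the Observer contract described above:
// onNext is called zero or more times, then exactly one of onCompleted or
// onError terminates the stream. NOT the RxJava API, just the idea.
public class MiniObservable {

    interface Observer<T> {
        void onNext(T value);
        void onCompleted();
        void onError(Throwable t);
    }

    // Push every item of a list to the observer, honoring the contract.
    static <T> void from(List<T> items, Observer<T> observer) {
        try {
            for (T item : items) {
                observer.onNext(item);
            }
            observer.onCompleted();
        } catch (Throwable t) {
            observer.onError(t);
        }
    }

    public static void main(String[] args) {
        from(List.of("a", "b"), new Observer<String>() {
            public void onNext(String s) { System.out.println("next: " + s); }
            public void onCompleted() { System.out.println("completed"); }
            public void onError(Throwable t) { System.out.println("error: " + t); }
        });
    }
}
```

A real Observable adds operators (map, filter, flatMap, timeout) and asynchronous scheduling on top, but the terminal-event contract is exactly this one.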
Wait, there is more

There are many more features available, like combining (merging, zipping, concatenating) different observables, batching the results up in time intervals, performing side effects and others. Once you get over the initial (small) hurdle of understanding the concept, it feels very natural, and we promise you won't want to go back (if we are wrong, though, you can always block on an Observable or convert it into a future).

RxJava also has decent Java 8 support, so if you are one of the lucky ones able to use it in your projects already, you can simplify an example from above to this:

```java
bucket
    .get("user::1")
    .map(jsonDocument -> jsonDocument.content().getString("firstname"))
    .filter(s -> s.startsWith("a"))
    .subscribe(System.out::println);
```

Neat, right? RxJava also provides different language adaptors on top of it, at the time of writing Scala, Clojure, Groovy, JRuby and Kotlin. They can be used to provide even more language-specific integration, and we are also planning to use some of them to enhance Couchbase support for each of those languages as we see demand. Our topmost priority aside from the Java SDK is definitely Scala, so be on the lookout for some announcements sooner rather than later!

We hope that you are now as excited as we are, and we look forward to your feedback and questions through the usual channels!

2 Comments

Really looking forward to an announcement related to Scala. I've just had a look at but it currently depends on the 1.4 Java SDK. Is it worth waiting for your announcement before I start porting an application which is currently using mongodb and ReactiveMongo?

[…] asynchronous code. Some database vendors have understood this well: the CouchBase driver already uses Observables in its asynchronous driver. MongoDB, for its part, has published […]
https://blog.couchbase.com/why-couchbase-chose-rxjava-new-java-sdk/
This is the personal blog of Chris Wilson, Platform Architect of the Internet Explorer Platform team at Microsoft (and ex-Group Program Manager).

Daniel. Really. ? -Chris.

Tiny MCE text editor produces those mce_href attributes...

It's pretty obvious you aren't wanted by the W3C as chair and we developers don't trust you as chair, so why don't you just do us all a favor and give it up? Just like Microsoft and its browser, you are holding back web development in the interest of getting you and Microsoft's own way. In the interest of the internet community at large, please, go away!

Rob - utter balls. The W3C holds up the web development community. You need good people with the attitude Chris shows to make the change there. From what I've read here of Chris' intentions, I think he'll make a great Chair.

Stephen, intentions should be backed up by actions, and Microsoft and the IE group have not shown good intentions or actions for many, many years, despite the changes in IE7. If their intentions were good, there would be promises and guidelines on future improvements in this browser, but the only thing given is vague at best.

Rob, it is simply not possible to make those sorts of specific promises. Sorry.

Then you can't promise to be good. Without timelines or guidelines or "What we're working on now..." statements, we only have the past to base your future work on, and the past is no indication of good performance. Pull your name out of the hat now, Chris. Someone said you don't want to do it anyway, so we don't need a reluctant leader either.

Chris, I believe that all of the brouhaha over you being considered as a possible chair for the WG is due to Microsoft's handling of Web Standards in the past. For a long while Internet Explorer was THE browser to use: on Windows AND on Macs.
But with the advent of Mozilla's Firefox browser, Apple creating Safari, the Opera browser becoming free and more individuals shifting to Linux and thus using the Konqueror web browser, people once again began to care about Web Standards and what that entailed. And once people began to care, the web dev community noticed that the engineers and the higher-ups at Microsoft did not care and hadn't cared in a good long while. Now having to resort to Internet Explorer-specific hacks to get things to render the way they do in the other browsers led most in the web dev community to a feeling of betrayal. And it is this feeling of betrayal that has most not trusting any of the future supposed plans of Microsoft in regards to Web Standards.

Now of course, you don't own or control Microsoft. Not meaning to be disrespectful, but I think most people believe that your level of influence at Microsoft is at best mid-level, meaning you can make all of the recommendations that you want as to the direction the Trident/Internet Explorer division should go, but ultimately you have no decision-making power. So a lot of these individuals believe that as far as Microsoft is concerned with Web Standards, it will be business as usual. All I can say is to keep this open dialogue that you've got going and take things one day at a time. In time we will see the direction that this road needs to travel..... Frank

The fact that the discussion with Daniel is out here in the open gives me more confidence about the "new" HTML WG and you as chair than anything else. I also think that the compliance (or not) of IE is irrelevant to how good a chair Chris is/will be. I personally tend to agree with a lot of the Apple guys' comments and think that how those suggestions are handled will say a lot about the new WG. Regards, Rob... (another one!)

Chris, your chairmanship is one thing, and again I know who _you_ are, what _you_ think, how _you_ act. I know _you_.
I also know how companies work; that's not a negative comment, that's just how it goes. My other comments about the Charter still stand, and I still think this Charter is a bad one as it is. Well, not that bad, only not enough or unrealistic. I have raised a few points that are important enough, from my perspective, to trigger a negative vote. And it seems that Apple, looking at the Surfin' Safari weblog, agrees with me on these points.

Maybe the question for you, Chris, is how you think you will be most useful to the industry: as the highly regarded Microsoft representative to the HTML WG, or as an always-suspected chair? During your many years as representative, everybody saw in you the best possible advocate to bring good practices inside Microsoft. Now as a chair, and there is absolutely nothing personal in it, you - in spite of all your goodwill - cannot prevent the lack of credibility of your employer from tainting the appreciation that a large number of the actors and observers will have of your work. In addition, your replacement as Microsoft representative will be under pressure to demonstrate the same qualities of a good player that you have shown, knowing that every interaction between the two of you will be under intense scrutiny, if not outright distrust, from the community. It's unfair, I agree, but it cannot be ignored.

I think that most people will let the M-word fog their vision at first. I think you should do the job because you want to, not just because you were asked. That being said, after reading 4 different blogs on the appointment I think you will probably do the best job possible. Now, you are aware of the huge desire people have for what the new WHAT-WG will produce and whether it will be something truly innovative AND adaptable. Don't let people get you down. Keep the courage up and give us something great to look at and work with in the end. That will be the only way to keep them off your back.
It is better to have tried and failed than to have never tried at all. Most of the comments you see are coming from people who lack the experience and understanding you have in this industry. I say, GIVE 'EM HELL and do a great job... for everybody's sake.

I say let the man do the job!

It's great that so many organisations are working towards compatibility within HTML, but the whole argument is irrelevant unless the Flash format is opened up, as more and more content is dependent upon this proprietary format and HTML essentially is only a wrapper.

Although I do not know Chris Wilson in person, I feel I must post my experiences. I do not like prejudice very much, so I'll try to drop some facts into the picture here. I have attended (as an observer/expert) the C++ committee meetings for several years now. Our convener is Herb Sutter from Microsoft. There are other regular attendees from Microsoft and companies strongly related to them. I list my experiences: they are helpful, professional, hard-working and personally very nice people. I have seen so far absolutely no obstruction from them, or any ways of pushing any agenda - other than making sure that 0x in C++0x won't be standing for hexadecimal digit... IOWs: to keep our schedule - which is in fact supported by all the members. I can only say that Microsoft is more than generous with this ISO WG. They have organized several meetings and do financially contribute to many of those that others organize. Also, in these last years Microsoft has shown exceptional efforts (also meaning: exceptionally successful ones) in getting their own product closer to the standard.

So my opinion is that treating Chris Wilson with prejudice just because he works for Microsoft leads nowhere y'all would like to go. Looking at the facts (like the charter), making it realistic, making sure that balance is preserved (like the Apple comments do) can help. Listening to Chris Wilson, getting his facts, intentions etc. is also a good idea.
Judging based on reality is always a good option. Judgment based on fear leads to no progress. IMHO the important task is to get the Charter right. Get it realistic and get it to conform to the reality of the needs of the users. Both are important. Requirements are based on what users need, and those aren't the developers of the engines. Of course, realistic goals can only be set by taking into account what can be done, so when it comes to prioritizing, those designers are an important factor.

I am writing this post from Linux using Firefox. I have been programming on Solaris/Linux for many years now, so I am in no way a Microsoft addict or fan. However, I have several years of experience working/mingling with Microsoft people at conferences and at the ISO C++ WG meetings. I can only tell you that NONE of the listed concerns have happened in ISO WG21! None! And when Herb became the convener, MS was known for its Visual C++ 6.0 compiler being incredibly bad at conforming to the standard. Today MS is a lot closer to conforming, and Herb has made a great contribution to the C++ user community by making sure of this, and to the ISO C++ WG by being a professional and caring convener. Sorry for the long-windedness, I lack the ability to be short. :-(

This post is about the rechartering of the HTML Working Group, and AOL's stance on the subject. Cross posted from dev.aol.com

Well, I think Chris will be a good chair. I said as much to W3C when they were first talking about an HTML working group. (Sorry Chris). A working group without a credible chair is in trouble. And the HTML group is a really, really important one. I am a chair of a W3C group, and I know what fun it isn't. I am glad to have Dan Connolly as a co-chair. He is a no-nonsense guy who wants things that the world will use, and doesn't mind who he is telling that they are wrong, from Microsoft to my favourite blogger.
Two people is probably barely enough for a group like this, and Chris and Dan are two of the best I could have hoped for. I am not a great fan of Microsoft's years without implementing basic standards, nor terribly upset that they have spent time on making money for themselves and others at the expense of cleaning up some things I thought were problems. They are a company, and what they do is make money providing goods and services to people. It seems a lot of people still believe they are getting something worth all that money, so good luck to them. I have seen Chris at work in standards (he was there when I first got involved in W3C a decade ago, and one of the people that spurred me to stay involved). I have seen other Microsoft representatives, and like any other large company some of their people were better than others. I hope that another representative from Microsoft does as good a job as I would expect Chris to do. Anyway, enough whining already. It's been a long time - let's get on with the work. I wish you all the best as chair, and hope the group manages to find a reasonable way forward with what is a pretty important part of our world. (Hey Chris, have you formally joined the group yet?!)

As long as he doesn't design any comment forms or make any outrageous, nonsense claims about IE8 passing Acid2, everything will be fine :D Not that Firefox 3 passes said test either; I tested on 2 computers and found the nose was offset by 1 pixel for some reason. Total, miserable failure from IE8 beta AND Firefox 3. However, back on topic... It's hard to accept that someone from Microsoft could be involved in the standards process.
There is simply no goodwill between the web development community and Microsoft employees. Really. We hate you guys. How do we know that Microsoft won't find some way to sabotage it? Maybe by saying there should be two parallel and exclusive implementations of some critical point, or by polluting the namespace with a lot of proprietary and/or presentational/behavioural junk where it's inappropriate?

But I often follow links to messages on the WHATWG and W3C mailing lists etc. On the ones where you show up, Chris Wilson, I don't _always_ agree 100%, but then I'm just me. You are doing a great job, and it is great to have some input from Microsoft. I hope that Microsoft's involvement in the standards process aids Microsoft in finding the motivation to actually implement modern technology instead of shunning it. Based on projects where Microsoft employees are involved, I think there is good reason to believe that this is the case, and based on some of the pleasant surprises in IE8, I think there's good evidence that Mr Wilson is making a big dent in that horrible, archaic joke of a browser which is IE. OK, so it's a great intranet/filesystem browser, but we're talking about the internet here, right? And that's where Internet Explorer falls on its face. Another good sign is all the noise about proper test cases. They're absolutely essential given the variations in implementation across various browsers. Come on Chris, don't just bring us the standards, bring us compliance with them. If anybody can do it, it's you ;).
http://blogs.msdn.com/cwilso/archive/2007/01/11/sigh.aspx
I have been diving into Python this year and have gotten stuck at modules. I have four scripts that perform some awesome fixes for MXDs when there is a database change. However, I have to put the same functions and variables at the top of each one of the scripts. I was thinking of turning this into a module or package? I cannot seem to figure these modules/packages out. My question is for Python 2.6.5/ArcGIS 10... What would need to be in __init__.py? How would I set up the global variables/functions for all the scripts to use? How do I register all the scripts to be able to recognize what went on in the others? I have tried the help files out there but am further lost. Maybe someone can help me and then I can fix and post my product?

Create a main script and import all the other scripts (modules) into it. You may want to adjust the Python path to make sure your modules are found. This can be done temporarily (for the session) by using sys.path.append(customPath), where customPath is the path to your modules. Set this before the custom module imports. Set up the modules with function defs. I like to pass all the objects, modules, and variables used in a function into it as arguments. For example, in a hypothetical customModule_2: Then invoke it like this: The return can be whatever you want back, even if it is just a boolean for successful completion. Below all the defs / functions in the supporting modules, one can add something like a guard so the module script won't try to start the overall process in the middle. As well, once the whole process is running correctly, a .pyc file will have been generated for each imported module. At that point you can remove those .py files from the accessible directory. It is not perfect security, but it means no one can casually damage the script by "just lookin' at it" in a text editor. Leave the main script as a .py, and maybe a configuration module too, for easy modification.
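The advice above can be sketched in one file. This is a minimal illustration, not the poster's actual MXD-fixing code: the path, module name, and function are all made up, and in real use the function body and the `__main__` guard would live in a separate shared module that each of the four scripts imports.

```python
import sys

# Hypothetical folder holding the shared module; adjust to your setup.
customPath = r"C:\gis\shared"
if customPath not in sys.path:
    sys.path.append(customPath)  # set this before importing custom modules

# --- contents of a shared module, e.g. mxdfixes.py (names are invented) ---
def repair_mxd(mxd_name, old_db, new_db, verbose=True):
    """Pretend fix: swap a database name inside an MXD file name.

    Everything the function needs is passed in as arguments, so the
    module carries no hidden state between the four calling scripts.
    """
    fixed = mxd_name.replace(old_db, new_db)
    if verbose:
        print("fixed:", fixed)
    return fixed  # return whatever you want back, even just a boolean

# Guard so importing this module does not kick off the whole process:
if __name__ == "__main__":
    repair_mxd("parcels_oldSDE.mxd", "oldSDE", "newSDE")
```

Each of the four scripts would then just do `import mxdfixes` and call `mxdfixes.repair_mxd(...)` with its own arguments; an `__init__.py` is only needed (and can be empty) if the shared modules live in a package folder rather than a plain directory on the path.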
https://community.esri.com/thread/75424-modules-for-dummy
Neutron with two external networks attaches floating IP to wrong router namespace

I have configured Neutron and Open vSwitch for two external networks (Juno on Ubuntu 14.04, GRE). When I create and associate a floating IP to an instance, the L3 agent attaches the floating IP to the other router's namespace (rather than the namespace of the router on the same network as the floating IP). What did I do wrong? Help me please! I don't know what information I need to provide for my question, so I will provide whatever you want to know about my configuration.

Please provide the Neutron configuration files from both the network node and the compute node, and paste them to paste.openstack.org.
https://ask.openstack.org/en/question/59537/neutron-with-two-external-network-attach-floating-ip-to-wrong-router-namespace/
The event publish and subscribe feature of the Particle Cloud is easy to use and very effective in most cases. But sometimes you need to transmit more data than is allowed for publishing events. Or maybe you need to do some specialized processing. A server on your home or office network implemented in node.js is a useful tool in some cases. This sample implements:

- Server discovery. The Photon uses publish to locate the server IP address and port to connect to, so you don’t need to hardcode it in your Photon firmware.
- An HTTP POST TCP connection from the Photon to the local server, kept open for sending data in real time.
- Authentication of the HTTP connection using a nonce (number used once).
- A small web server so you can easily host HTML, Javascript, CSS, etc. for a web-based application.
- An SSE (server-sent events) server to allow data to be streamed in real time into a web browser.

By using the combination of the HTTP/TCP connection and SSE, you can stream large amounts of data from a Photon in real time, right into a web browser, where it can be handled by Javascript code. Even better, any number of web browsers on your home network can connect at the same time and view the same live data, limited by the capacity of your node.js server, not the Photon. Also, unlike using webhooks, your local server does not need to be open for incoming connections from the Internet if your Photon is also on your local network. It can stay safely behind your home firewall and you don’t need to worry about things like firewall/router port forwarding or dynamic DNS.

These are the three basic steps to connecting:

- The Photon publishes a private devicesRequest event to the Particle cloud.
- The server subscribes to these requests and responds by calling the devices function on the Photon with the server IP address, server port, and a nonce (number used once).
- The Photon issues an HTTP POST to the server IP address and port, with an Authorization header containing the nonce.
It then keeps the connection open for sending data.

In the examples below we also use a web browser on another computer to connect to the server. It loads HTML, CSS, and Javascript from that server, and also opens an SSE channel. This channel is hooked to the HTTP POST channel from the Photon, basically allowing the Photon to send data in real time directly to the browser, via the server. The server is just a computer running node.js. It could be running Windows, Mac OS X or Linux. It could even be something like a Raspberry Pi.

There are two examples here:

- livegraph, which uses a simple potentiometer to graph values
- liveimu, which uses an accelerometer (IMU, Inertial Measurement Unit) and prints the location data to a web browser window in a scrolling table

A video of it in action:

This is what the Particle code looks like for server discovery and connection management:

#include "Particle.h"

SYSTEM_THREAD(ENABLED);

int devicesHandler(String data); // forward declaration
void sendData(void);

const unsigned long REQUEST_WAIT_MS = 10000;
const unsigned long RETRY_WAIT_MS = 30000;
const unsigned long SEND_WAIT_MS = 20;

enum State {
    STATE_REQUEST,
    STATE_REQUEST_WAIT,
    STATE_CONNECT,
    STATE_SEND_DATA,
    STATE_RETRY_WAIT
};
State state = STATE_REQUEST;
unsigned long stateTime = 0;
IPAddress serverAddr;
int serverPort;
char nonce[34];
TCPClient client;

void setup() {
    Serial.begin(9600);
    Particle.function("devices", devicesHandler);
}

void loop() {
    switch(state) {
    case STATE_REQUEST:
        if (Particle.connected()) {
            Serial.println("sending devicesRequest");
            Particle.publish("devicesRequest", WiFi.localIP().toString().c_str(), 10, PRIVATE);
            state = STATE_REQUEST_WAIT;
            stateTime = millis();
        }
        break;

    case STATE_REQUEST_WAIT:
        if (millis() - stateTime >= REQUEST_WAIT_MS) {
            state = STATE_RETRY_WAIT;
            stateTime = millis();
        }
        break;

    case STATE_CONNECT:
        if (client.connect(serverAddr, serverPort)) {
            client.println("POST /devices HTTP/1.0");
            client.printlnf("Authorization: %s", nonce);
            client.printlnf("Content-Length: 99999999");
            client.println();
            state = STATE_SEND_DATA;
        }
        else {
            state = STATE_RETRY_WAIT;
            stateTime = millis();
        }
        break;

    case STATE_SEND_DATA:
        // In this state, we send data until we lose the connection to the server
        // for whatever reason. We'll then try to connect to the server again.
        if (!client.connected()) {
            Serial.println("server disconnected");
            client.stop();
            state = STATE_RETRY_WAIT;
            stateTime = millis();
            break;
        }

        if (millis() - stateTime >= SEND_WAIT_MS) {
            stateTime = millis();
            sendData();
        }
        break;

    case STATE_RETRY_WAIT:
        if (millis() - stateTime >= RETRY_WAIT_MS) {
            state = STATE_REQUEST;
        }
        break;
    }
}

void sendData(void) {
    // Called periodically when connected via TCP to the server to update data.
    // Unlike Particle.publish you can push a very large amount of data through this
    // connection, theoretically up to about 800 Kbytes/sec, but really you should
    // probably shoot for something lower than that, especially with the way the
    // connection is being served in the node.js server.

    // In this simple example, we just send the value of A0. It's connected to the
    // center terminal of a potentiometer whose outer terminals are connected to
    // GND and 3V3.
    int value = analogRead(A0);

    // Use printf and manually add a \n here. The server code splits on LF only,
    // and using println/printlnf adds both a CR and LF. It's easier to parse with
    // LF only, and it saves a byte when transmitting.
    client.printf("%d\n", value);
}

// This is the handler for the Particle.function "devices".
// The server makes this function call after this device publishes a devicesRequest
// event. The server responds with the IP address and port of the server, and a
// nonce (number used once) for authentication.
int devicesHandler(String data) {
    Serial.printlnf("devicesHandler data=%s", data.c_str());

    int addr[4];
    if (sscanf(data, "%u.%u.%u.%u,%u,%32s", &addr[0], &addr[1], &addr[2], &addr[3], &serverPort, nonce) == 6) {
        serverAddr = IPAddress(addr[0], addr[1], addr[2], addr[3]);
        Serial.printlnf("serverAddr=%s serverPort=%u nonce=%s", serverAddr.toString().c_str(), serverPort, nonce);
        state = STATE_CONNECT;
    }
    return 0;
}

The project is here on github:
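The server side of this handshake is node.js in the actual project, but the two pieces the firmware relies on — building the "ip,port,nonce" string that the devices function receives, and checking the Authorization header on the incoming POST — are easy to sketch in Python for illustration. The function names, the example IP, and port 8070 below are all invented; only the string format (which the Photon parses with sscanf("%u.%u.%u.%u,%u,%32s", ...)) comes from the article.

```python
import secrets

def make_device_response(server_ip, server_port):
    """Build the 'ip,port,nonce' string the server would pass to the
    Photon's "devices" Particle.function, plus the nonce to remember."""
    nonce = secrets.token_hex(16)  # 32 hex chars, fits the firmware's char nonce[34]
    return "%s,%d,%s" % (server_ip, server_port, nonce), nonce

def authorized(headers, expected_nonce):
    """Check the Authorization header the Photon sends with its POST."""
    return headers.get("Authorization") == expected_nonce

data, nonce = make_device_response("192.168.1.10", 8070)
print(data.count(",") == 2)  # ip,port,nonce -> exactly two commas
```

Because the nonce is generated fresh for each discovery round and only ever handed to the Photon through the Particle cloud, a stray client on the LAN that connects to the port without it fails the `authorized` check.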
https://community.particle.io/t/local-server-in-node-js-example/24233
Tim Moores wrote: I'm fairly certain that visual consistency with MS Word wasn't part of the Swing specification, so I'd tend to disregard any such discrepancies. If some part of Swing doesn't work according to your expectations, tell us in detail what you did, what you expected, and what the actual outcome was.

Tim Moores wrote: MS Word is not the definition of text processing or layout. That its way would be the "proper" way is quite a stretch. Can you be more precise about what "some way" is? And also, what exactly is it that you wanted? Or does the question mark at the end of that sentence indicate that you're not sure what you wanted and/or expected?

Tim Moores wrote: I'm familiar with the concept of justification of text. What I asked was what you got instead of what you thought you should be getting.

Michael Dunn wrote: > this is how MS Word works and I'm expecting this output
How about you get Apple to mimic everything MS does? Then Java/Oracle might follow.

Tim Moores wrote: I can't tell from 4 lines of code what they might do in the context of an actual program. Maybe you can post a screenshot of the output, or, even better, post an SSCCE.

Darryl Burke wrote: ALIGN_JUSTIFIED works fine for me. Show your code, in the form of an SSCCE.

Darryl Burke wrote: naved, please keep technical discussions on the forum. Private messaging has its purposes, but repeating forum posts in a private message is not one of them. You've been asked more than once to post your code in the form of an SSCCE, but it appears that you are unwilling to take the effort to make it easier for other members here to help you. Good luck with your problem.

Rob Spoor wrote: That's not an SSCCE - you forgot the imports:

import java.awt.*;
import java.awt.event.*;
import java.util.*;
import java.util.logging.*;
import javax.swing.*;
import javax.swing.text.*;

You should check if selexted inside your action listener is null. If you don't select anything you will get a NullPointerException.
http://www.coderanch.com/t/566526/GUI/java/Java-alignment-Justify-same-as
What is Deque?

Deque is a standard acronym for double-ended queue, which is basically a dynamic-size sequence container. Dynamic size refers here to contraction and expansion of the queue at both ends. It's an alternative to vector because it allows us to insert or delete elements at both the front and the back; vector doesn't provide insertion and deletion at both ends. Deque is basically an implementation of a data structure. A double-ended queue is more efficient and faster than other queues when it comes to insertion and deletion of elements at both ends.

Syntax: deque < object_type > deque_name ;

The object type can be int, etc., and the name is your choice!

How Does a Deque Work in C++?

Now we will see how a deque actually works in the C++ programming language. Basically, there are two classifications of deque:

- Output-Restricted Deque: In this classification, you can insert elements from both ends, but deletion is only possible at the front end of the queue.
- Input-Restricted Deque: In this classification, you can delete elements from both ends, but insertion is only possible at the rear end of the queue.

For deque implementation in your code, we need to understand the basic member functions of the deque. Below are the functions we need to use:

1. push_back (element p): This member function of the deque allows a user to insert an element p at the end of the deque.

2. push_front (element p): This member function of the deque allows a user to insert an element p at the front of the deque.

3. insert(): This member function of the deque allows a user to insert an element in the deque. Where and how you want to insert depends on the arguments you pass, because this insert member function has three variations. Let's have a look at them:

- insert( iterator x, element p): This method allows a user to insert element p at the position pointed to by iterator x in the deque.
- insert( iterator x, int count, element p): This method inserts count copies of element p at the position pointed to by iterator x in the deque.
- insert( iterator x, iterator first, iterator last): This method allows a user to insert elements in the range [first, last] at the position pointed to by iterator x in the deque.

Example to Implement Deque in C++

As an example, we will see a C++ programming language code to implement the deque feature in our code.

Code:

#include <iostream>
#include <cstdlib>   // needed for exit(); missing from the original listing
using namespace std;

#define SIZE 10

class dequeue {
    int a[20], fr, re;
public:
    dequeue();
    void insert_starting(int);
    void insert_ending(int);
    void delete_front();
    void ddelete_rear();
    void display();
};

dequeue::dequeue() {
    fr = -1;
    re = -1;
}

void dequeue::insert_ending(int i) {
    if (re >= SIZE - 1) {
        cout << " \n insertion is not possible, overflow!!!! ";
    } else {
        if (fr == -1) {
            re++;
        } else {
            re = re + 1;
        }
        a[re] = i;
        cout << " \nInserted item is " << a[re];
    }
}

void dequeue::insert_starting(int i) {
    if (fr == -1) {
        fr = 0;
        a[++re] = i;
        cout << " \n inserted element is: " << i;
    } else if (fr != 0) {
        a[--fr] = i;
        cout << " \n inserted element is: " << i;
    } else {
        cout << " \n insertion is not possible, overflow !!! ";
    }
}

void dequeue::delete_front() {
    if (fr == -1) {
        cout << " deletion is not possible :: dequeue is empty ";
        return;
    } else {
        cout << " the deleted element is: " << a[fr];
        if (fr == re) {
            fr = re = -1;
            return;
        } else
            fr = fr + 1;
    }
}

void dequeue::ddelete_rear() {
    if (fr == -1) {
        cout << " deletion is not possible :: dequeue is empty ";
        return;
    } else {
        cout << " the deleted element is: " << a[re];
        if (fr == re) {
            fr = re = -1;
        } else
            re = re - 1;
    }
}

void dequeue::display() {
    if (fr == -1) {
        cout << " Dequeue is empty ";
    } else {
        for (int i = fr; i <= re; i++) {
            cout << a[i] << " ";
        }
    }
}

int main() {
    int c, i;
    dequeue d;
    do {
        cout << " \n 1.insert element at the beginning ";
        cout << " \n 2.insert element at the end ";
        cout << " \n 3.displaying the elements ";
        cout << " \n 4.deletion of elements from front ";
        cout << " \n 5.deletion of elements from rear ";
        cout << " \n 6.exiting the queue ";
        cout << " \n Please enter your choice: ";
        cin >> c;
        switch (c) {
        case 1:
            cout << " Please enter the element to be inserted ";
            cin >> i;
            d.insert_starting(i);
            break;
        case 2:
            cout << " Please enter the element to be inserted ";
            cin >> i;
            d.insert_ending(i);
            break;
        case 3:
            d.display();
            break;
        case 4:
            d.delete_front();
            break;
        case 5:
            d.ddelete_rear();
            break;
        case 6:
            exit(1);
            break;
        default:
            cout << " invalid choice, Please enter valid choice ";
            break;
        }
    } while (c != 7);
}

Output:

First, it shows the number of choices to select. Here we entered 1 to add an element at the beginning. In the below snapshot, you can see that we added 3 as an element. Then we selected the second choice to enter an element at the end and added 6 at the end. Then we chose the third choice to display the elements in the queue. It shows 3 and 6. Then we entered the fourth choice to delete the element from the front. Again we chose option 3 to check whether the element was deleted from the front or not. It shows only one element, i.e. 6. This means that the front element was deleted.
Then we chose 5 to delete the element from the rear. Again we chose 3 to check whether the element was deleted from the queue or not. It shows that the dequeue is empty. Then we entered 6 to exit the queue.

Conclusion

In conclusion, for operations involving frequent insertion and deletion of elements at the beginning and end of a queue, deque is the best container you can use, as it is faster and will help your code perform better. For long sequences, the deque performs better.

Recommended Articles

This is a guide to Deque in C++. Here we discuss how a deque works in the C++ programming language, with sample code to implement the deque feature. You can also go through our other related articles to learn more –
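As a hedged cross-language aside (not part of the original C++ article): the walkthrough above — insert 3 at the front, 6 at the back, delete front, delete rear — maps one-to-one onto Python's built-in collections.deque, which may help readers check their understanding of the operations.

```python
from collections import deque

d = deque()
d.appendleft(3)             # insert at the beginning, like insert_starting(3)
d.append(6)                 # insert at the end, like insert_ending(6)
after_inserts = list(d)
print(after_inserts)        # [3, 6]

d.popleft()                 # delete from the front
after_front_delete = list(d)
print(after_front_delete)   # [6]

d.pop()                     # delete from the rear
print(len(d))               # 0 -> the deque is empty
```

Unlike the fixed-size array in the C++ example, collections.deque grows as needed, so there is no overflow case to handle.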
https://www.educba.com/deque-in-c-plus-plus/
Home Automation with Windows Workflow - Posted: Oct 05, 2007 at 7:58 AM

For me, home automation ranks near the top of the list of ultimate computer geek hobbies. You get to channel your inner “Scotty/Geordi” by playing with cool hardware, and you can channel your inner “Spock/Data” by playing with cool software. It doesn’t take much hardware to get started. A simple kit such as the ControlThink Z-Wave PC SDK or one of the myriad others will get you going. And you can add more bits and pieces such as motion detectors, alarm controllers, etc. - as you go. Many people think of Windows Workflow Foundation as being useful only for rule-driven, line of business applications, systems management, or one of several other “enterprise business application” uses. These are all good things, but what about using workflow for something a little more fun? After all, Windows Workflow Foundation has features like a cool graphical designer, an awesome rules engine, a robust persistence model, and an extensible service model just to name a few. Somehow, my thoughts about workflow and my thoughts about home automation converged. I wondered: “Can I use workflow to turn on a light?” Yeah, you know like that switch in the refrigerator that costs a few pennies. I wanted to use workflow for turning the light in my pantry on or off. The pantry door has a hardwired contact switch that connects to my Automation/Alarm system – an Elk M1G. My lights are controlled via several Z-Wave controllers including the USB stick that ships with the ControlThink SDK mentioned above.
The problem was to get these two pieces of hardware talking so that I could have a workflow that looks like this: The idea is that the PantryDoorChange activity and the PantryLightOn/Off shapes would be instances of custom workflow activities. PantryDoorChange would monitor the Elk M1G waiting for the door’s contact switch to open or close. PantryLightOn/Off would send the appropriate Z-Wave command for turning the light on/off. Before we can get to the custom activities, we have to think about the services behind them, and how we’re going to communicate with the various devices. Communicating with the Z-Wave devices is made extremely easy with the rockin’ ControlThink Z-Wave SDK, so let’s leave that one aside for a moment. Elk Products, Inc. publishes a protocol that allows application developers to write code to interact with the Elk-M1G. This protocol is accessible via serial port or TCP/IP. I opted for TCP/IP, which requires the optional Elk Ethernet module. In either case, the code is fairly straightforward. You can send the Elk-M1G an ASCII request message and it will either carry out an action, reply asynchronously with a response message, or both. You can also set some “global settings” on the Elk-M1G to instruct it to send unsolicited messages whenever events occur. I started by creating the Elk class. This is a component that implements a .NET interface to the Elk-M1G protocol. The class is derived from the Component class, which makes it easy to use in a Windows Forms application, but it can also be used in any other .NET application. The current implementation only supports a fraction of the request and response messages available within the Elk-M1G. The message that we care about for the above workflow is ZC – zone status change. This message will tell us when a zone changes status and what that status is.
The various status values are given by:

[Serializable]
public enum ZoneStatus : byte
{
    NormalUnconfigured,
    NormalOpen,
    NormalEOL,
    NormalShort,
    notused1,
    TroubleOpen,
    TroubleEOL,
    TroubleShort,
    notused2,
    ViolatedOpen,
    ViolatedEOL,
    ViolatedShort,
    notused3,
    BypassedOpen,
    BypassedEOL,
    BypassedShort
}

The documentation for the Elk-M1G serial port protocol describes these values in detail. For us, the key is to find out whether a zone – e.g., the contact switch on my pantry door – is open or closed. The Elk component will listen on the socket for an incoming zone change message – ASCII string ZC followed by zone information – and raise a .NET event when the zone change is received. The .NET event contains the following event arguments:

[Serializable]
public class ZoneStatusChangeEventArgs : EventArgs
{
    public ZoneStatusChangeEventArgs(byte zone, ZoneStatus status)
    {
        this.zone = zone;
        this.status = status;
    }

    private byte zone;

    public byte Zone
    {
        get { return zone; }
    }

    private ZoneStatus status;

    public ZoneStatus Status
    {
        get { return status; }
    }
}

I’ve implemented only a few of the many functions available on the Elk-M1G. Following the patterns in the code, it should be fairly easy to implement the remaining features, thus opening up the possibility for much more advanced applications than what I have here. The next step in this process is to create the actual services and the Workflow Activities which they serve. I designed two services for my application. The ElkService will funnel Elk-M1G events into a workflow activity and allow a workflow activity to send command messages to the Elk-M1G. The ZWaveService allows activities to send ZWave commands and monitor devices for status changes.
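As an illustrative aside (Python rather than the article's C#, and the helper below is my own invention, not part of the Elk SDK): the values in the ZoneStatus enum fall into a regular pattern — every "...Open" status sits at a position that is 1 mod 4 — which makes the "is the pantry door open?" question a one-liner, assuming that grouping in the enum is intentional.

```python
from enum import IntEnum

class ZoneStatus(IntEnum):
    """Mirror of the C# ZoneStatus enum above (unused slots omitted)."""
    NORMAL_UNCONFIGURED = 0
    NORMAL_OPEN = 1
    NORMAL_EOL = 2
    NORMAL_SHORT = 3
    TROUBLE_OPEN = 5
    TROUBLE_EOL = 6
    TROUBLE_SHORT = 7
    VIOLATED_OPEN = 9
    VIOLATED_EOL = 10
    VIOLATED_SHORT = 11
    BYPASSED_OPEN = 13
    BYPASSED_EOL = 14
    BYPASSED_SHORT = 15

def is_open(status):
    """Every Open status (Normal/Trouble/Violated/Bypassed) is 1 mod 4."""
    return status % 4 == 1
```

With that, a zone-change handler only needs the status byte from the ZC message to decide whether to switch the light on or off; the Elk protocol documentation remains the authority on the actual wire format.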
As with the Elk component class, these have the basic features implemented, but can easily be extended to support the full range of features offered by their respective SDKs. Both the ElkService and the ZWaveService serve events to EventActivity-derived classes. In order to make life a tad simpler, I wrote the EventActivity base class. The class definition looks like this:

public abstract class EventActivity :
    Activity,
    IEventActivity,
    IActivityEventListener<QueueEventArgs> { … }

This class is similar to the InputActivity class found in the custom activity framework sample code at. I needed to make a few tweaks because my services support multiple different event activity types, but the concepts are the same. There are three custom event-driven activities in the current ElkWorkflow library. They are ZoneChange, TaskChange, and OutputChange. These activities will listen for zone, task, and output changes and will wake up when their respective events occur. If you look at the properties of these activities in the Workflow designer, you’ll see that you can set a Filter property to limit which zone, task or output to monitor. If the filter value is non-zero, the event will only be triggered when the corresponding zone, task, or output is triggered; otherwise all zone, task, or output changes will trigger the event. There is also an activity called ActivateTask that will activate one of the “automation” tasks on the Elk-M1G. These are essentially flags that can trigger events that are programmed into the Elk-M1G directly using its scripting capabilities. ActivateTask demonstrates how to write a shape that will allow you to send commands to the Elk-M1G. The ZWaveWorkflow library implements a basic ZWave service that currently supports basic dimmer switch capabilities. The Dimmer activity can set the dim/on/off level of a Z-Wave light.
There are lots of additional capabilities that can be added to the ZWaveService including capturing device level changes, scene activations, and so on. The ControlThink Z-Wave SDK makes adding new capabilities extremely simple. Here’s a laundry list of some of the things that are left to do. I’m sure there are many things that are not on the list!

1. Better design-time experience. This includes the mundane things like better graphics for the custom shapes, and more important things like having the designer show a list of device names rather than having to use device I.D. numbers.
2. More features in the services. Both the ElkService and the ZWaveService can be expanded quite a bit. Features like retrieving the state of devices on startup would make state machine workflows easier to program.
3. A real workflow host. Hosting the workflow in a console application is fine for experimentation, but a Windows service might be a better choice for a real home automation system.
4. Dynamic update of in-progress workflows. You don’t want to have to shut down your home automation system every time you need to make a change to a workflow. Windows Workflow Foundation makes it easy to do this in several different ways. Probably the most effective way in the home automation scenario is to create a mechanism for the host to suspend the workflow of interest, and allow the user to edit the workflow design and restart it.

I hope that I’ve shown the fun side of Windows Workflow Foundation. I also hope this is a good start on a home automation toolkit for people who like Coding4Fun.

OK, while I appreciate the use of Win Workflow, here's my "green" question. Doesn't it take more power to run a computer that monitors your lights than you would save if you left them on and didn't have the computer running?

My computer is always running for other reasons.
It's a media center, it runs my sprinkler system, my security system, and a few other things. It also shuts off the monitor and drives when they're not in use, so it doesn't draw much power.

>> It's a media center, it runs my sprinkler system, my security system, and a few other things.
And what if it decides to BSOD you?

This looks like a very interesting project. Quite a bit of time has passed since the article was written, and I was wondering if any additional development has been done, particularly with the Elk class. Has anyone expanded the Elk class to allow use of the RS232 port? If not, I may attempt to modify the code to allow for RS232 serial communication, since I have an Elk M1G but do not have the ethernet option. I did see the robotics article, but thought I might keep things simple and use straight vb.net (or C# if I am not able to convert the code from C# to vb.net). I had been thinking about trying to write some vb.net code to talk to the Elk when I ran across this code, which might be a good jump start for my home automation project.

@Brian I'd imagine getting it to work on a serial port is pretty straightforward. What is your exact problem?

Coding4Fun, thanks for your response. I have not started yet, so I am not sure if I will have a problem or not. I was just curious if others had tried this project and, if so, how well it worked out for them. Brian
http://channel9.msdn.com/coding4fun/articles/Home-Automation-with-Windows-Workflow
Introduction

Xamarin Insights is used for analytics and crash reporting in Xamarin apps. For more details, you can click here.

Let's start

Before starting the Android application, we need to register the new app on the Xamarin Insights dashboard.

Step 1

Go to the Xamarin Insights dashboard. Now, click the New App button. A new window opens.

Step 2

In the new window, enter the new app's name and click the "Create new app" button. Afterwards, the system generates a new API key. This key is used by the Android app, so copy the API key value.

Step 3

Open Visual Studio -> New Project -> Templates -> Visual C# -> Android -> Blank App. Select Blank App, give your project a name, and assign the project location.

Step 4

Go to Solution Explorer -> Project Name -> References. Right-click and choose "Manage NuGet Packages" to open a new dialog box. In this dialog box, search for Xamarin Insights and install the Xamarin.Insights package.

Step 5

Now, we need to initialize Xamarin.Insights in MainActivity.cs and add the namespace given below.

using Xamarin;

Step 6

Open Solution Explorer -> Project Name -> MainActivity.cs. Open the CS code page view, then add the namespaces given below.

C# Code

Step 7

Now, implement exception reporting in Xamarin Insights.

Step 8

Now, implement exception reporting with user data in Insights.

Step 9

Press F5, or build and run the application. Check the Xamarin.Insights dashboard.

Step 10
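The C# for steps 5 through 9 did not survive in this copy of the article. Below is a hedged sketch of what those steps typically look like with the standard Xamarin.Insights API (Insights.Initialize, Insights.Report, Insights.Identify); the key constant, activity name, and example exception are illustrative, not recovered from the original.

```csharp
using System;
using Android.App;
using Android.OS;
using Xamarin;

namespace InsightsDemo
{
    [Activity(Label = "InsightsDemo", MainLauncher = true)]
    public class MainActivity : Activity
    {
        // Illustrative placeholder: use the API key copied from the
        // Xamarin Insights dashboard in step 2.
        const string InsightsApiKey = "YOUR-API-KEY";

        protected override void OnCreate(Bundle bundle)
        {
            base.OnCreate(bundle);

            // Step 6: initialize Insights as early as possible.
            Insights.Initialize(InsightsApiKey, ApplicationContext);

            // Step 7: report a handled exception to the dashboard.
            try
            {
                throw new InvalidOperationException("Test exception");
            }
            catch (Exception ex)
            {
                Insights.Report(ex);
            }

            // Step 8: attach user data so reports are easier to trace.
            Insights.Identify("user-42", "Name", "Test User");
        }
    }
}
```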
http://www.c-sharpcorner.com/article/xamarin-android-working-with-xamarin-insights/
Screen scraping in C#.NET

Posted by vivekcek on August 20, 2009

Hi, after some weeks I got some time to post an article. This time I am coming with a good article that gave me a lot of headache. I can't put my real application code here, so let me describe the problem. It was on Monday, and I was back in the office after enjoying the weekend at home. As soon as I reached my office my manager called me and assigned this work. As I said earlier, I work on an online airline reservation portal; normally online bookings are performed via web services like Galileo, Amadeus etc. But some low-cost carriers don't have such web services, and they may have their own sites to sell their flights directly. In such cases we have to extract information from their sites and show it in our site, and also submit some information to their site via our site; the users of our site don't know from where we fetch the data. They can book flights on the provider airline's site via our website's UI. This technique is termed screen scraping, which is not ethical in all means.

OK, now I will try to explain the concept with a Windows application that automatically logs in to Twitter. For efficient screen scraping in .NET applications we can use the WebBrowser control available in System.Windows.Forms. This control can also be used in ASP.NET with some threading; I will explain that concept in another post, it is very tricky.

STEPS

1. Put two text boxes, labels and a button on your form.
2. Find the WebBrowser control in your toolbox, drag it to your form, name it "WBrowser", and align it as shown below.

The WebBrowser control's Navigate(string URL) method is used to navigate to a particular URL; in our example it is Twitter's login page.
We can see the login page rendered in our WebBrowser control. After the login page has rendered fully, the WebBrowser control fires the WBrowser_DocumentCompleted() event. So we can be sure a page is fully loaded only after the above event has fired, and we can extract the HTML of the page only after full rendering. So after calling Navigate() we have to wait in a loop until the DocumentCompleted event is fired.

In the case of Twitter, to log in we have to provide a username and password in their respective text boxes and click the submit button on Twitter's login page. For that, first of all we study the HTML of the Twitter page and find the IDs of the above specified controls. They are:

Username textbox – ID -> "username_or_email"
Password textbox – ID -> "session[password]"
Submit button -> "signin_submit"

Now we have to set the value attribute of the above textboxes from the values in our Windows form and then invoke the click of the submit button.

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;

namespace TwitterTweet
{
    public partial class Form1 : Form
    {
        private bool DocCompleted = false;
        public string LoginUrl = "";

        public Form1()
        {
            InitializeComponent();
        }

        private void WBrowser_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
        {
            this.DocCompleted = true;
        }

        private void Synchronize()
        {
            while (!this.DocCompleted)
            {
                Application.DoEvents();
            }
            this.DocCompleted = false;
        }

        private void ccBtnLogIn_Click(object sender, EventArgs e)
        {
            this.Login(this.ccTxtUid.Text.Trim(), this.ccTxtPwd.Text.Trim());
            this.groupBox1.Visible = false;
        }

        private void Login(string Uid, string Pwd)
        {
            this.WBrowser.Navigate(LoginUrl);
            this.Synchronize();
            HtmlElementCollection htmlCol = this.WBrowser.Document.GetElementsByTagName("input");
            foreach (HtmlElement el in htmlCol)
            {
                if (el.Id == "username_or_email")
                    el.SetAttribute("value", Uid);
                else if (el.Id == "session[password]")
                    el.SetAttribute("value", Pwd);
                else if (el.Id == "signin_submit")
                {
                    el.InvokeMember("click");
                    this.Synchronize();
                }
            }
        }
    }
}

Aravind.S said: Hai Vivek, I am a GNIIT (SWE) student. I find your posts very useful and they help me gain more knowledge. Please continue your venture.

Asharaf said: Good! Keep writing.
https://vivekcek.wordpress.com/2009/08/20/screen-scrapping-in-c-net/
import "image/internal/imageutil"

Package imageutil contains code shared by image-related packages.

DrawYCbCr draws the YCbCr source image on the RGBA destination image with r.Min in dst aligned with sp in src. It reports whether the draw was successful. If it returns false, no dst pixels were changed.

This function assumes that r is entirely within dst's bounds and the translation of r from dst coordinate space to src coordinate space is entirely within src's bounds.

Package imageutil imports 1 package (graph) and is imported by 30 packages. Updated 2020-06-01.
https://godoc.org/image/internal/imageutil
I'm having an issue: I can't complete the program so that Book A and Book B's author and title print at the end, which would verify that the overloaded = is functioning.

#include <iostream>
using namespace std;

class Book
{
public:
    Book(char*, char*);
private:
    char* title;
    char* author;
public:
    Book& Book::operator=(const Book& b)
    {
        if(this != &b)
        {
            delete [] author;
            delete [] title;
            author = new char[strlen(b.author)+1];
            title = new char[strlen(b.title)+1];
            strcpy(author, b.author);
            strcpy(title, b.title);
        }
        return *this;
    }
};

int main(int argc, char* argv[])
{
    Book* pA = new Book("Aardvark", "Be an Aardvark on pennies a day");
    Book* pB = new Book("Speelburgh", "ET vs. Howard the Duck");

    pA = pB;

    //complete the program so that Book A and Book B's Author and Title print
    cout << "The value of Book A's Author and Title are: " << pA.author << " " << pA.title << endl;
    cout << "The value of Book B's Author and Title are: " << pB.author << " " << pB.title << endl;

    system("pause");
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/157763/can-t-find-the-solution-and-looking-for-help
How to split PDF from uploaded file for PDF splitting API in Python with PDF.co Web API

How to split a PDF from an uploaded file in Python, with ByteScout code samples. Step-by-step tutorial.

Writing code to split a PDF from an uploaded file in Python can be done by developers of any level using PDF.co Web API, which was designed to assist with PDF splitting. The sample below shows how to quickly make your application split PDFs in Python with the help of PDF.co Web API. Open your Python project, copy & paste the code, and run your app!

A trial version of ByteScout is available for free download from our website. This and other source code samples for Python and other programming languages are also available.

On-demand (REST Web API) version: Web API (on-demand version)
On-premise offline SDK for Windows: 60 Day Free Trial (on-premise)

SplitPDFFromUploadedFile.py

import os
import requests # pip install requests

# The authentication key (API Key).
# Get your own by registering at
API_KEY = "*********************************"

# Base URL for PDF.co Web API requests
BASE_URL = ""

# Source PDF file
SourceFile = ".\\sample.pdf"

# Comma-separated list of page numbers (or ranges) to process. Example: '1,3-5,7-'.
Pages = "1-2,3-"

def main(args = None):
    uploadedFileUrl = uploadFile(SourceFile)
    if (uploadedFileUrl != None):
        splitPDF(uploadedFileUrl)

def splitPDF(uploadedFileUrl):
    """Split PDF using PDF.co Web API"""
    # Prepare request params as JSON
    # See documentation:
    parameters = {}
    parameters["pages"] = Pages
    parameters["url"] = uploadedFileUrl

    # Prepare URL for 'Split PDF' API request
    url = "{}/pdf/split".format(BASE_URL)
https://pdf.co/samples/pdf-co-web-api-pdf-splitting-api-python-split-pdf-from-uploaded-file
This one was just a side-product of something else that I was toying with, but I'd made this little Silverlight screen that let me search FlickR, which you can run by clicking on the image above if you like (it's not too polished), but I used the Reactive Extensions (Rx) in a couple of places in the code and so I thought I'd just share those bits.

Firstly, there's no "Go" button on that search term up above in the UI; I had the application wait until you'd typed at least a few characters and then give you a little time to make sure you'd finished typing, and then it goes off and does a search based on what you typed. I did that in my view model and I did it with Rx;

ObservableEx
    .FromPropertyChange(this, "SearchText")
    .Select(p => this.SearchText)
    .Where(text => text.Length > Constants.MinSearchStringLength)
    .Throttle(TimeSpan.FromMilliseconds(Constants.SearchTextTimeoutMs))
    .ObserveOnDispatcher()
    .Subscribe(OnNewSearch);

this is a slightly weird bit of code. Firstly, ObservableEx.FromPropertyChange is just a little wrapper around Observable.FromEventPattern that I wrote;

public static IObservable<object> FromPropertyChange(
    this INotifyPropertyChanged propertyChange,
    string propertyName)
{
    return (
        Observable
            .FromEventPattern<PropertyChangedEventArgs>(propertyChange, "PropertyChanged")
            .Where(p => p.EventArgs.PropertyName == propertyName)
            .Select(p => p.Sender));
}

Secondly, because INotifyPropertyChanged doesn't actually pass you the changed value, you can see I'm making a call back in the previous snippet to Select(this.SearchText). That is, I'm subscribing to a property changed event on a property on my own class, but then I have to hit the property accessor again to get the value. That feels a bit weird and perhaps I should have passed a lambda into my FromPropertyChange wrapper function to grab the value from this.SearchText rather than do this?
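For what it's worth, the lambda-taking variant of FromPropertyChange mused about above might look something like this (my sketch, not code from the original post):

```csharp
public static IObservable<T> FromPropertyChange<T>(
    this INotifyPropertyChanged propertyChange,
    string propertyName,
    Func<T> getValue)
{
    return (
        Observable
            .FromEventPattern<PropertyChangedEventArgs>(propertyChange, "PropertyChanged")
            .Where(p => p.EventArgs.PropertyName == propertyName)
            // Read the current value via the caller-supplied lambda
            .Select(_ => getValue()));
}
```

which would let the view model write ObservableEx.FromPropertyChange(this, "SearchText", () => this.SearchText) and drop the separate Select.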
Largely, that use of Rx just gives me the Throttle capability so that I can ignore the user's typing up to a point but then take notice of it if they seem to pause for a little bit of time.

Another place where I used Rx was in calling out to the FlickR search REST API, essentially using the bridge to the Asynchronous Programming Model in order to make it easier for me to call WebRequest.BeginGetResponse and EndGetResponse. I ended up writing a tiny helper method;

internal static class ObservableWebRequest
{
    public static IObservable<WebResponse> CreateDeferred(string uri)
    {
        HttpWebRequest request = WebRequest.CreateHttp(uri);
        Func<IObservable<WebResponse>> factory = Observable.FromAsyncPattern<WebResponse>(
            request.BeginGetResponse, request.EndGetResponse);
        return (Observable.Defer(factory));
    }
}

and then chaining a whole bunch of my own bits onto it to gather a page worth of response data from FlickR;

// Create a timer that will produce a value every so often (250ms)
IObservable<long> slowDownTimer = Observable
    .Interval(TimeSpan.FromMilliseconds(Constants.PopulateListBoxDelayMs));

this.currentSearch = ObservableWebRequest
    // Performs the async HTTP get when subscribed to.
    .CreateDeferred(uri.Uri)
    // Takes the XML produced and parses it into a results page object
    .Select(wr => FlickrPhotoResultsPage.FromWebResponse(wr))
    // Gets back to the UI thread
    .ObserveOnDispatcher()
    // Updates some state based on the parsed FlickR results
    .Do(UpdateResultsPageDetails)
    // Takes the single pages of results and extracts all the parsed
    // photo details
    .SelectMany(resultsPage => resultsPage.Photos)
    // Slows the process down so it only produces a result every 250ms
    .Zip(slowDownTimer, (result, interval) => result)
    // Gets back to the UI thread
    .ObserveOnDispatcher()
    // Listens for success, failure, cancellation
    .Subscribe(
        AddImage,
        EndSearch,
        () => EndSearch(null));

That's quite a chunk of code so I sprinkled some oddly placed comments into it to try and illustrate the chain that's built up – there's a bunch of my own functions in there so it's not perhaps so obvious what's going on without delving into the code on that particular snippet.

That IObservable ultimately produces the details (i.e. titles, URLs) of the page of images that I need to download, and I feed them into a piece of code that downloads them asynchronously. I could have done this by chaining more work onto the previous observable but I wanted to separate that work out. Here are some images mid-load; and, again, I just used Rx to handle that for me;

ObservableWebRequest
    .CreateDeferred(this.Uri.ToString())
    .Select(response => response.ReadResponseToMemoryStream())
    .ObserveOnDispatcher()
    .Subscribe(
        SetImage,
        ex => this.InternalVisualState = ImageViewModelState.Errored,
        () => this.InternalVisualState = ImageViewModelState.Cancelled);

using that CreateDeferred helper method again to make me an IObservable<WebResponse> and then another little extension to read the response stream back and copy it into a separate memory stream.
As I say, this was just a bit of fun but one thing I noticed when I came to drop the app on the website was that it was a little on the large side at around 1.5MB. That was a bit of a shock so I took out one of the fonts I'd embedded and it dropped down to 600KB, still with one font embedded into my assembly. I had a quick look at what was inside the XAP; my assembly with the embedded font (ChunkFive Roman from here) and FlickR logo is coming in at 84KB, so that's not too bad, but I picked up a bunch of dependencies;

- 45KB for System.Xml.Linq because I used LINQ to XML to parse my XML.
- 50KB for Microsoft.Expression.Drawing.dll because I made use of Triangle rather than a simple Path.
- 76KB for Microsoft.Expression.Effects because I made use of one slide effect on a state transition change
- 90KB for System.Windows.Controls because I made use of ChildWindow.
- 104KB for System.Windows.Controls.Toolkit because I made use of WrapPanel.
- 143KB for Rx itself.

So I could easily shave off another 250KB here and perhaps another 100KB if I could live without the WrapPanel. Here's the source code for download if you want to play around with it a little.
https://mtaulty.com/2011/08/11/m_13979/
/** Proc
 *
 * @author Mladen Turk
 * @version $Revision: 467222 $, $Date: 2006-10-24 05:17:11 +0200 (mar., 24 oct. 2006) $
 */

public class Proc {

    /*
     * apr_cmdtype_e enum
     */
    public static final int APR_SHELLCM = 0;       /** use the shell to invoke the program */
    public static final int APR_PROGRAM = 1;       /** invoke the program directly, no copied env */
    public static final int APR_PROGRAM_ENV = 2;   /** invoke the program, replicating our environment */
    public static final int APR_PROGRAM_PATH = 3;  /** find program on PATH, use our environment */
    public static final int APR_SHELLCMD_ENV = 4;  /** use the shell to invoke the program,
                                                    *  replicating our environment
                                                    */

    /*
     * apr_wait_how_e enum
     */
    public static final int APR_WAIT = 0;    /** wait for the specified process to finish */
    public static final int APR_NOWAIT = 1;  /** do not wait -- just see if it has finished */

    /*
     * apr_exit_why_e enum
     */
    public static final int APR_PROC_EXIT = 1;         /** process exited normally */
    public static final int APR_PROC_SIGNAL = 2;       /** process exited due to a signal */
    public static final int APR_PROC_SIGNAL_CORE = 4;  /** process exited and dumped a core file */

    public static final int APR_NO_PIPE = 0;
    public static final int APR_FULL_BLOCK = 1;
    public static final int APR_FULL_NONBLOCK = 2;
    public static final int APR_PARENT_BLOCK = 3;
    public static final int APR_CHILD_BLOCK = 4;

    public static final int APR_LIMIT_CPU = 0;
    public static final int APR_LIMIT_MEM = 1;
    public static final int APR_LIMIT_NPROC = 2;
    public static final int APR_LIMIT_NOFILE = 3;

    /** child has died, caller must call unregister still */
    public static final int APR_OC_REASON_DEATH = 0;
    /** write_fd is unwritable */
    public static final int APR_OC_REASON_UNWRITABLE = 1;
    /** a restart is occurring, perform any necessary cleanup (including
     *  sending a special signal to child)
     */
    public static final int APR_OC_REASON_RESTART = 2;
    /** unregister has been called, do whatever is necessary (including
     *  kill the child)
     */
    public static final int APR_OC_REASON_UNREGISTER = 3;
    /** somehow the child exited without us knowing ... buggy os? */
    public static final int APR_OC_REASON_LOST = 4;
    /** a health check is occurring, for most maintenance functions
     *  this is a no-op.
     */
    public static final int APR_OC_REASON_RUNNING = 5;

    /* apr_kill_conditions_e enumeration */
    /** process is never sent any signals */
    public static final int APR_KILL_NEVER = 0;
    /** process is sent SIGKILL on apr_pool_t cleanup */
    public static final int APR_KILL_ALWAYS = 1;
    /** SIGTERM, wait 3 seconds, SIGKILL */
    public static final int APR_KILL_AFTER_TIMEOUT = 2;
    /** wait forever for the process to complete */
    public static final int APR_JUST_WAIT = 3;
    /** send SIGTERM and then wait */
    public static final int APR_KILL_ONLY_ONCE = 4;

    public static final int APR_PROC_DETACH_FOREGROUND = 0;  /** Do not detach */
    public static final int APR_PROC_DETACH_DAEMONIZE = 1;   /** Detach */

    /* Maximum number of arguments for create process call */
    public static final int MAX_ARGS_SIZE = 1024;
    /* Maximum number of environment variables for create process call */
    public static final int MAX_ENV_SIZE = 1024;

    /**
     * Allocate apr_proc_t structure from pool.
     * This is not an apr function.
     * @param cont The pool to use.
     */
    public static native long alloc(long cont);

    /**
     * This is currently the only non-portable call in APR. This executes
     * a standard unix fork.
     * @param proc The resulting process handle.
     * @param cont The pool to use.
     * @return APR_INCHILD for the child, and APR_INPARENT for the parent
     *         or an error.
     */
    public static native int fork(long [] proc, long cont);

    /**
     * Create a new process and execute a new program within that process.
     * This function returns without waiting for the new process to terminate;
     * use apr_proc_wait for that.
     * @param progname The program to run
     * @param args The arguments to pass to the new program. The first
     *             one should be the program name.
     * @param env The new environment table for the new process. This
     *            should be a list of NULL-terminated strings. This argument
     *            is ignored for APR_PROGRAM_ENV, APR_PROGRAM_PATH, and
     *            APR_SHELLCMD_ENV types of commands.
     * @param attr The procattr we should use to determine how to create the new
     *             process
     * @param pool The pool to use.
     * @return The resulting process handle.
     */
    public static native int create(long proc, String progname,
                                    String [] args, String [] env,
                                    long attr, long pool);

    /**
     * Wait for a child process to die
     * @param proc The process handle that corresponds to the desired child process
     * @param exit exit[0] The returned exit status of the child, if a child process
     *             dies, or the signal that caused the child to die.
     *             On platforms that don't support obtaining this information,
     *             the status parameter will be returned as APR_ENOTIMPL.
     *             exit[1] Why the child died, the bitwise or of:
     * <PRE>
     *             APR_PROC_EXIT         -- process terminated normally
     *             APR_PROC_SIGNAL       -- process was killed by a signal
     *             APR_PROC_SIGNAL_CORE  -- process was killed by a signal, and
     *                                      generated a core dump.
     * </PRE>
     * @param waithow How should we wait. One of:
     * <PRE>
     *             APR_WAIT   -- block until the child process dies.
     *             APR_NOWAIT -- return immediately regardless of if the
     *                           child is dead or not.
     * </PRE>
     * @return The childs status is in the return code to this process. It is one of:
     * <PRE>
     *             APR_CHILD_DONE    -- child is no longer running.
     *             APR_CHILD_NOTDONE -- child is still running.
     * </PRE>
     */
    public static native int wait(long proc, int [] exit, int waithow);

    /**
     * Wait for any current child process to die and return information
     * about that child.
     * @param proc Pointer to NULL on entry, will be filled out with child's
     *             information
     * @param exit exit[0] The returned exit status of the child, if a child process
     *             dies, or the signal that caused the child to die.
     *             On platforms that don't support obtaining this information,
     *             the status parameter will be returned as APR_ENOTIMPL.
     *             exit[1] Why the child died, the bitwise or of:
     * <PRE>
     *             APR_PROC_EXIT         -- process terminated normally
     *             APR_PROC_SIGNAL       -- process was killed by a signal
     *             APR_PROC_SIGNAL_CORE  -- process was killed by a signal, and
     *                                      generated a core dump.
     * </PRE>
     * @param waithow How should we wait. One of:
     * <PRE>
     *             APR_WAIT   -- block until the child process dies.
     *             APR_NOWAIT -- return immediately regardless of if the
     *                           child is dead or not.
     * </PRE>
     * @param pool Pool to allocate child information out of.
     */
    public static native int waitAllProcs(long proc, int [] exit,
                                          int waithow, long pool);

    /**
     * Detach the process from the controlling terminal.
     * @param daemonize set to non-zero if the process should daemonize
     *                  and become a background process, else it will
     *                  stay in the foreground.
     */
    public static native int detach(int daemonize);

    /**
     * Terminate a process.
     * @param proc The process to terminate.
     * @param sig How to kill the process.
     */
    public static native int kill(long proc, int sig);

}
http://kickjava.com/src/org/apache/tomcat/jni/Proc.java.htm
I've been having issues trying to get this program to work. The program's purpose is to compute the sine function without using Math.sin(). At first I had trouble with the for loop, but I finally understood how it worked, so that's not much of an issue anymore. My next problem is trying to get the factorial to work. I've looked at tutorials and I think I'm doing the right thing, but when I compile I get several errors (stated below).

Original Code

/* This program is suppose to compute the Sine function without using Math.sin */
import java.io.*;

public class x {
    public static void main(String args[]) throws IOException {
        BufferedReader keybd = new BufferedReader(new InputStreamReader(System.in));
        double Rad, Sine;
        int MaxE;
        Sine = 0.0;
        System.out.println("Enter Radians: ");
        Rad = Double.parseDouble(keybd.readLine());
        for (int a = 1; a <= 11; a++) {
            Sine += Math.pow(-1,a)*Math.pow(Rad,2*a+1)/factorial(2*a+1);
        }
        System.out.println("The value of Sine is " + Sine);
    }
}

Errors

I think I'm on the right track but I'm not sure how to troubleshoot these errors yet since I'm still pretty new. Any help is appreciated!
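A possible fix, as a sketch (class and method names below are illustrative, not from the original post): the compile error comes from calling factorial() without ever defining it, and the Taylor series should also include the a = 0 term, since sin x = x - x^3/3! + x^5/5! - ... Using double for the factorial keeps the larger terms, such as 23!, from overflowing an int or even a long:

```java
// Sketch: compute sin(x) from its Taylor series without Math.sin().
public class SineSeries {

    // factorial(n) = n * (n-1) * ... * 1; double avoids integer overflow
    // for the larger terms (23! is about 2.6e22, far beyond a long).
    static double factorial(int n) {
        double result = 1.0;
        for (int i = 2; i <= n; i++) {
            result *= i;
        }
        return result;
    }

    // Taylor series: sin x = sum over a of (-1)^a * x^(2a+1) / (2a+1)!
    // Note the loop starts at a = 0 so the leading x term is included.
    static double sine(double x) {
        double sum = 0.0;
        for (int a = 0; a <= 11; a++) {
            sum += Math.pow(-1, a) * Math.pow(x, 2 * a + 1) / factorial(2 * a + 1);
        }
        return sum;
    }

    public static void main(String[] args) {
        double rad = 1.0;
        System.out.println("The value of Sine is " + sine(rad));
    }
}
```

With rad = 1.0 the series agrees with Math.sin(1.0) to well over nine decimal places after a dozen terms.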
http://www.javaprogrammingforums.com/whats-wrong-my-code/7981-sine-function-cant-get-factorial-work.html
Eli Zaretskii <address@hidden> writes:

>> From: Alex Bennée <address@hidden>
>> Date: Thu, 30 Jan 2014 14:10:45 +0000
>>
>> In an unrelated issue I found that I can't start src/temacs with the
>> --daemon option which works with the dumped version src/emacs.
>
> Please use "M-x report-emacs-bug RET" to report such bugs, then they
> are automatically emailed to the bug tracker address.

I raised bug #16599 and I have tracked it down to daemon_pipe being reset
by syms_of_emacs(), which is called in temacs after it has been set up by
--daemon. I assume the dumped src/emacs behaves differently.

The following patch works for me:

From 3dee0d9da394e17b4e6cb97cb22399f027cab440 Mon Sep 17 00:00:00 2001
From: Alex Bennée <address@hidden>
Date: Sun, 16 Feb 2014 20:59:06 +0000
Subject: [PATCH] src/emacs.c: ensure daemon_pipe initialised before use

Otherwise this breaks src/temacs --daemon invocations by resetting the
daemon_pipe FDs which are used to determine if Emacs is in daemon mode.
---
 src/emacs.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/src/emacs.c b/src/emacs.c
index 18f6a08..66f47ef 100644
--- a/src/emacs.c
+++ b/src/emacs.c
@@ -201,7 +201,7 @@ static char *daemon_name;

 /* Pipe used to send exit notification to the daemon parent at
    startup.  */
-int daemon_pipe[2];
+int daemon_pipe[2] = {0, 0};

 /* Save argv and argc.  */
 char **initial_argv;
@@ -2548,7 +2548,4 @@ libraries; only those already known by Emacs will be loaded.  */);
   Vlibrary_cache = Qnil;
   staticpro (&Vlibrary_cache);
 #endif
-
-  /* Make sure IS_DAEMON starts up as false. */
-  daemon_pipe[1] = 0;
 }
--
1.8.5.3

> Thanks.

--
Alex Bennée
https://lists.gnu.org/archive/html/emacs-devel/2014-02/msg00357.html
Possible to prevent send and receive from sending messages all over the Live set?

Hi, I sometimes get really loud surprises when I accidentally give a receive the same name as a receive on another channel.

Thank you :)

Protecting people from themselves is something usually done in more extreme situations than in those involving Max patching, I guess, but what does — mean? I can't see anything about it in the send help or reference.

Thanks

Well, I think with Max you come often enough to this point where something should protect you from yourself – if you do public installations you can get in quite extreme situations – imagine you boost a sound with full level out of Max/MSP – it could do some damage.

Well, I think you know that you can name stuff. It might not be too bad to add the project name to that, and maybe even your initials (like we have jit.- cv.- etc.), so 'send meChorusIntensity' – this way you could be halfway sure you will not run into receives that are not meant for your send :)

I mean, now with M4L we do not only have to think about our own Max patch but also about all the M4L devices from all the others … was that considered?

best

Yes, this — comes from nowhere… I've read it in an interview, for example.

It's used all over the place in the devices which come with Max for Live. We should probably put that somewhere else in the documentation, though. Thanks for mentioning it.

The — will be replaced automatically by some unique random number, so if you put those before the name of a send/receive/coll/buffer~/table and others, this makes the object only communicate and share data within the device. It's not really new, it's the same trick we used in pluggo.

The difference between #0 and — is that #0 only works in an abstraction/subpatcher. It's been in Max for at least 8 years. Yeah, we need to document it more.

It also works for buffer~ names.
So

buffer~ —foobarg

will only be accessible within the enclosing top level patcher or device.

-A

>I mean, now with M4L we do not only have to think about our own Max patch but also about all the M4L devices from all the others … was that considered?

Sure was. The global namespace in Max [of which Max for Live is a subset] is a feature that allows you to share data across a large and complex patch or pair of patches. The — came along in a world in which people ran multiple pluggo devices. We'll put lots of nice mentions in the help files and refpages, though.

maybe the forward command could help

Thanks. Is — giving a unique name for each send-receive step, so if I have 2 similar ones in a s r "chain" it will give a new unique number?

happy christmas!

if they're in different devices, yes.

hi, on a slightly related note, is there an easy way of making sure parameters generated in the first m4l device in the chain get passed to the second m4l device, but not across channels?

I want to get in OSC messages from a Java app to control sounds; depending on the type of message (all similar), it needs to be forwarded to a synth (made with poly~). I have e.g. 4 sphere objects that can be dragged around; the xyz coordinates for the 1st instance need to access the first voice of the SphereSynth, the 2nd the 2nd voice, and so on. I just want one track for each type of object. Or can this only be achieved using MIDI connections in between the chained m4l devices? I hope this makes sense enough to be answered.

I wish this popped up as a big flashing message the first time I opened M4L, would have saved me a lot of time…… (or I could have read the documentation more carefully, but we all know no one does ;)

"Well, I think with Max you come often enough to this point where something should protect you from yourself – if you do public installations you can get in quite extreme situations – imagine you boost a sound with full level out of Max/MSP – it could do some damage."
a DC filter followed by *~ 0.89 and clip -0.9 0.9 in all of your installation patches should fix that.

-110

@Marion: Most of it has already been answered, but I'd still like to draw your attention to the 'gate' object as well. In a case where you have no other option than to use the same send/receive name in several places (for example, to check a certain setting) and you need to be sure that it doesn't trigger actions all over the place, this could come in very handy.

Of course, in the above scenario the value object is a much better solution. But I learned about that one after I had already finished the patch where I used the 'gate' construction.
https://cycling74.com/forums/topic/possible-to-prevent-send-and-recieve-to-send-messages-all-over-the-live-set/
Does anyone know of any resources which would provide information on which letter *pairs* frequently occur in different languages? It would be especially useful if this included information on diacritics. I'm currently dealing with some consonant-vowel ligatures, and want to figure out if there are diacritical combinations which can be safely omitted. I'd tried googling for various diacritical combinations, but the useful data ends up buried amid results drawn from a miscellany of legacy CJK encodings.

André 17 Aug 2009 — 7:27pm

Chthonic, Django Reinhart, Jzanus, Ljubjana, llama... you get the idea: too many possibilities...

17 Aug 2009 — 8:30pm

One source of information I have used for similar purposes is Open Office Dictionaries and other dictionaries for spelling checkers. For instance, if you click on the link for Canadian English (zip file) you get a folder containing a file with extension .dic with 62341 entries (including "derived" entries). Other dictionaries can be much larger. The .dic file is plain text. If you remove what follows the slash after each word, you get a file on which you can run programs to extract pairs, count them, etc. Of course, that gives no information on the frequency with which those pairs occur in actual texts, but it gives information on possible pairs for the language you chose. Some dictionaries are utf-8 encoded, others are latin1 and so on. The encoding is given at the first line of a second file with extension .aff. Some programming ability is thus required.

17 Aug 2009 — 8:31pm

Well, yes, there will be lots of possibilities, but some pairs are still going to be cross-linguistically more common than others, and diacritics which are not commonly used may not occur adjacent to others -- for example, I *think* that if one had an sa ligature in a font, it would be more important to also implement sá than șä.
But I'm basing that on the fact that ä doesn't occur in Romanian and AFAIK that's the only language which uses ș. Even within a language which contains a variety of diacritics, it's not necessarily the case that all of those diacritics will occur adjacent to one another, and while it's relatively easy to find information on which diacritics are used in which languages, I haven't found information on diacritic pairs.
André

17 Aug 2009 — 8:36pm
Thanks Michael -- I'd tried using the Mac OS built-in dictionary for those languages I've installed, but it doesn't support wildcards (or if it does, the asterisk isn't used for this). Never thought, though, to try opening the actual file (a senior moment).
André

17 Aug 2009 — 9:10pm
I use terminal windows and unix utilities to find those files and process them. Maybe you can do better with Mac utilities, I don't know. For dictionaries installed by Firefox, I type the command "cd $HOME/Library/Ap*ort/Firefox" in a terminal window and then

    find . -name "*.dic"

gives me the list of those dictionaries. They can be copied into some temporary folder and batch processed.
Michel

17 Aug 2009 — 9:15pm
I always wish there were some linguistics textbook that covers this stuff. Maybe Steve Peters will chime in here with some help. If you have time to figure out the syntax to sift through text file wordlists, it's pretty easy to put this stuff together using Python or just Bash scripting (grep "*öö*" file.txt | wc -l). The OpenWall wordlists disk is worth its low price if you don't need to analyze actual text. Ask around in the netsec world and I'm sure even more dictionaries exist. Project Gutenberg and similar resources probably have real texts covering many of the languages you need to analyze.

17 Aug 2009 — 9:26pm
I have somewhere a python script that counts bigrams in a utf-8 encoded source. To get the list of words, I just use

    awk 'BEGIN{FS="/"}{print $1}' *.dic

If that can be useful, I'll try to find the script.
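The word-list extraction that the awk one-liner performs (keeping everything before the slash) can be sketched in Python as well. This is only a sketch for a Hunspell-style .dic file; note that a robust version would also skip the entry count that real .dic files carry on their first line.

```python
def words_from_dic(lines):
    # Hunspell .dic entries look like "word/FLAGS"; keep only the word part.
    return [line.split('/', 1)[0].strip() for line in lines if line.strip()]

print(words_from_dic(["colour/DG", "kern", "ligature/S"]))  # → ['colour', 'kern', 'ligature']
```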
That's just a few lines of code, never more.

17 Aug 2009 — 10:31pm
James wrote: I always wish there would be some linguistics textbook that covers this stuff.
Linguistics texts generally aren't that concerned with orthography, so this isn't a likely source. You'll find lots of information on the pairings of various sounds, but any statistics presented will likely involve IPA rather than orthographic representations.
André

18 Aug 2009 — 1:08am
Frequency analysis is what you really need - a dictionary would not be enough. This would require some long texts in all the languages of interest. I don't know of a good general source for these, but someone must have compiled such. Some years ago Luc(as) de Groot did some good work on compiling resources for kerning and building some tools for it. I think he called it Kernologica. He should be able to point you in some useful directions.

18 Aug 2009 — 5:20am
Frequency analysis is what you really need. Most obviously. To get frequencies (absolute or relative) of bigrams, all you need is a very basic script that can be run on some utf-8 encoded input. To get such a script (for alphabetic bigrams), you can just copy what is between the cut lines and paste it in a terminal window, and you will get an executable file named bigrams in your current folder.

----
cat >bigrams <<'EOF'
#!/usr/bin/python
# M. Boyer 2009
import codecs, sys
infile=codecs.open(sys.argv[1],"r","utf-8")
text=infile.read(); infile.close()
tallies={}; nbdata=0; prev=' '
def tallyq(c):
    return c.isalpha()
for char in text:
    if (tallyq(prev) and tallyq(char)):
        datum=prev+char # ; datum=datum.lower()
        nbdata=nbdata+1
        if datum in tallies:
            tallies[datum]=tallies[datum]+1
        else:
            tallies[datum]=1
    prev=char
for d in tallies:
    print('%s;%d;%.3f%%' % (d.encode('utf-8'), tallies[d], 100.0*tallies[d]/nbdata))
EOF
chmod 755 bigrams
----

Then you decide what you want to run it on.
For instance, if you want to run it on Chekhov's text Дама с собачкой (The lady with the little dog), you can type (or copy and paste) the line

    lynx -dump > dama.txt

and then run (maybe after removing some html references at the bottom)

    ./bigrams dama.txt | sort

Here is a copy paste of part of the output

    то;372;1.927%
    тп;1;0.005%
    тр;85;0.440%
    тс;31;0.161%
    ту;26;0.135%
    тф;2;0.010%
    тх;1;0.005%
    тч;7;0.036%

There were 372 occurrences of то, which represents 1.927% of all bigrams (after cleaning the text). With the internet, there are now many sources of texts in all languages. There is also nothing to prevent you from running the script on a dictionary to know possible combinations; it seems you then don't need the frequencies, but it may still be interesting to see what were the words containing bigrams with very low frequencies. A simple grep answers the question.
Michel

[added] I guess the mac does not come with lynx installed. I must have installed it myself. That example may be more for Linux than mac users. Sorry.

18 Aug 2009 — 6:29am
Nice!

18 Aug 2009 — 11:41am
Ohai. The LetterMeter from Peter Bilak and Just van Rossum can run a text for single letter and letter pair occurrence. Then it is just a matter of feeding it with the texts you deem appropriate. Says the website: LetterMeter is a text analysis tool. LetterMeter is created using Python and works only on Mac OS X. Although it is available for free, it is copyrighted, and you may not redistribute it. All rights reserved, © 2003, Peter Bilak, Just van Rossum. For TEH DOWNLOADS at Typotheque

19 Aug 2009 — 12:59pm
Here is another tool I made using the above code (I replaced semicolons by tabs, and added basic choices). It can be used from absolutely any computer (well... you tell me if it works on an iPhone). Link. On a PC, if you save the resulting statistics as a text file, you can then import it in Excel for further processing. On the mac, I have found no way to import utf-8 text into Excel. Hard to believe!
Michel

20 Aug 2009 — 5:33pm
Here are some results I got for English... I used to use a C program to count the most common digrams, then augment it against punctuation, to generate kerning pair lists for URW Kernus.

21 Aug 2009 — 7:22am
I guess there are indeed good references for English. Before continuing, let me say that Lynx for Mac OS X can be downloaded from... To use it at the command line, you add /Applications to your path. I assume this is done, and that "Terminal > Window Settings > Display" is set to Unicode (UTF-8). What follows is then good for Linux and Mac users that are used to unix commands.

Now, some digrams may cause more than kerning problems. For instance, in the Typophile thread f + umlauts, Florian Hardwig mentions that the digrams fä, fö, fü may cause a clash between the umlauts and the f. Those combinations occur in German. How often? Let's check.

On the Project Gutenberg Catalog, I find Kant's Kritik der reinen Vernunft. On that page I see no html version, and no utf-8 version. I see a plain text iso-8859-1 file, and if I right click the "main site" link and paste it I get the url of the iso-8859-1 text. I will thus need to tell lynx to expect iso8859-1 text; I will save the result in kritik.txt as follows (on the command line):

    lynx --dump -assume_charset=ISO8859-1 > kritik.txt

The resulting file kritik.txt now contains the utf8 text (lynx did the reencoding). Now I look at the digrams in kritik.txt; I do not try to be efficient; the bigrams code above is not, and as long as I get my answer in reasonable time, that's fine with me. I'll just find all bigrams in the text and then egrep those containing fä, fö, fü (I replaced semicolons by tabs in the bigrams code)

    ./bigrams kritik.txt | egrep "f[äöü]"

and I get the output

    fö 27 0.003%
    fü 697 0.079%
    fä 255 0.029%

which means that there is a total of 27+697+255 = 979 possible clashes in Kant's text. In my library, the book is 847 pages.
On the average, that is more than one possible clash per page. A few simple and inefficient scripts, unix commands and pipes often give answers faster than sophisticated programs.
Michel
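For readers following along today, the whole pipeline in the thread (tally bigrams, then pick out the fä/fö/fü digrams) can be sketched in a few lines of Python 3, where strings are Unicode by default. The count_bigrams function below is equivalent to the bigrams script posted earlier; the tallies fed into the filter step are hypothetical stand-ins, chosen to match the totals reported for the Kant text.

```python
import re
from collections import Counter

def count_bigrams(text):
    # Tally adjacent pairs of alphabetic characters, like the bigrams script above.
    tallies = Counter()
    prev = ' '
    for char in text:
        if prev.isalpha() and char.isalpha():
            tallies[prev + char] += 1
        prev = char
    return tallies

# Quick sanity check on a small string:
assert count_bigrams("banana") == Counter({'an': 2, 'na': 2, 'ba': 1})

# Filtering, equivalent to `./bigrams kritik.txt | egrep "f[äöü]"`.
# Hypothetical tallies standing in for a full run on the Kant text:
tallies = Counter({'fö': 27, 'fü': 697, 'fä': 255, 'fa': 1000})
clashes = {pair: n for pair, n in tallies.items() if re.fullmatch('f[äöü]', pair)}
print(sum(clashes.values()))  # → 979 possible clashes
```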
http://www.typophile.com/node/61027#comment-362977
Difference between revisions of "Topological data scripting/de"
Revision as of 15:11, 23 February 2019

This page describes several methods for creating and modifying Part shapes from Python. If you have no prior knowledge of Python yet, it is a good idea to first read the Introduction to Python and How Python scripting works in FreeCAD.

Introduction

Here we explain how to use the Part Module directly from the FreeCAD Python interpreter, or from any external script. The basics of topological data programming are described in the Part Module explanation of concepts. For further questions about how Python scripting works in FreeCAD, also consult the Scripting section and the FreeCAD Scripting Basics pages.

Class diagram

This is a Unified Modeling Language (UML) overview of the most important classes of the Part module:

Geometry

The geometric objects are the building blocks of all topological objects:
- Geom Base class of the geometric objects
- Line A straight line in space, defined by its start and end point
- Circle A circle or circle segment, defined by a center point and a start and end point
- ...... And more of them coming soon

Topology

The following topological data types are available:
- Compound A group of arbitrary topological objects.
- Compsolid A composite solid is a set of solids connected by their faces. This extends the concepts of WIRE and SHELL to solids.
- Solid A part of space bounded by a closed three-dimensional shell.
- Shell A set of faces connected by their edges. A shell can be open or closed.
- Face In two dimensions it is part of a plane; in three dimensions it is part of a surface. Its shape is bounded (trimmed) by contours.
Even faces that are curved in 3D have an interior that is parametrized two-dimensionally.
- Wire A set of edges connected by their end points. A wire can be an open or a closed shape, depending on whether there are unconnected end points or not.
- Edge A topological element (edge) corresponding to a bounded curve. An edge is generally limited by vertices. An edge is one-dimensional.
- Vertex A topological element corresponding to a point. It is zero-dimensional.
- Shape A generic term covering all of the elements listed above.

Quick example [...] Part.LineSegment [...] it is time to display your creation on screen.

Creating a Wire

A wire is a multi-edge line [...] and the 4 edges that compose our wire. Other useful information can be easily retrieved:

wire3.Length
> 40.0
wire3.CenterOfMass
> Vector (5, 5, 0)
wire3.isClosed()
> True
wire2.isClosed()
> False

Creating a Circle

A circle can be created as simply as this:

circle = Part.makeCircle(10)
circle.Curve
> Circle (Radius : 10, Position : (0, 0, 0), Direction : (0, 0, 1))

If you want to create an arc of circle [...]:

arc = Part.ArcOfCircle(circle, 0, pi)

Arcs are valid edges like lines, so they can be used in wires also.

Creating a polygon

A polygon is simply a wire with multiple straight edges. The makePolygon function takes a list of points and creates a wire through those points:

lshape_wire = Part.makePolygon([Base.Vector(0,5,0), Base.Vector(0,0,0), Base.Vector(5,0,0)])

Creating a Bézier curve

Bézier curves are used to model smooth curves using a series of poles (points) and optional weights. The function below makes a Part.BezierCurve from a series of FreeCAD.Vector points. (Note: [...] the radius of the big circle, radius2 is the radius of the small circle, pnt is the center of the circle [...]) Others are more complex, such as unioning and subtracting one shape from another.

[...]

disc = Part.Face(wire)
cylinder = disc.extrude(Base.Vector(0,0,2))

[...] lines show it in the viewer.
import Part
Part.show(Part.makeBox(100,100,100))
Gui.SendMsgToActiveView("ViewFit")

Now select some faces or edges. With this script you can iterate over [...]

Complete example: The OCC bottle

A typical example found [...]

aSegment1=Part.LineSegment(aPnt1,aPnt2)
aSegment2=Part.LineSegment(aPnt4,aPnt5)

Here we actually define the geometry: an arc, made of three points, and two line segments, made of two points each.

[...]

import Draft, Part, FreeCAD, math, PartGui, FreeCADGui, PyQt4
from math import sqrt, pi, sin, cos, asin
from FreeCAD import Base

size = 10
poly = Part.makePolygon([(0, 0, 0), (size, 0, 0), (size, 0, size), (0, 0, size), (0, 0, 0)])
face1 = Part.Face(poly)
face2 = Part.Face(poly)
face3 = Part.Face(poly)
face4 = Part.Face(poly)
face5 = Part.Face(poly)
face6 = Part.Face(poly)

myMat = FreeCAD.Matrix()
myMat.rotateZ(math.pi/2)
face2.transformShape(myMat)
face2.translate(FreeCAD.Vector(size, 0, 0))
myMat.rotateZ(math.pi/2)
face3.transformShape(myMat)
face3.translate(FreeCAD.Vector(size, size, 0))
myMat.rotateZ(math.pi/2)
face4.transformShape(myMat)
face4.translate(FreeCAD.Vector(0, size, 0))
myMat = FreeCAD.Matrix()
myMat.rotateX(-math.pi/2)
face5.transformShape(myMat)
face6.transformShape(myMat)
face6.translate(FreeCAD.Vector(0,0,size))

myShell = Part.makeShell([face1,face2,face3,face4,face5,face6])
mySolid = Part.makeSolid(myShell)
mySolidRev = mySolid.copy()
mySolidRev.reverse()

myCyl = Part.makeCylinder(2,20)
myCyl.translate(FreeCAD.Vector(size/2, size/2, 0))

cut_part = mySolidRev.cut(myCyl)
Part.show(cut_part)

Loading and saving a file:

import Part
s = Part.Shape()
s.read("file.stp")       # incoming file igs, stp, stl, brep
s.exportIges("file.igs") # outbound file igs

Note that importing or opening BREP, IGES or STEP files can also be done directly from the File → Open or File → Import menu, while exporting can be done with File → Export.
https://www.freecadweb.org/wiki/index.php?title=Topological_data_scripting/de&diff=prev&oldid=429131
In late 2002, Slashdot posted a story about a “next-generation shell” rumored to be in development at Microsoft. As a longtime fan of the power unlocked by shells and their scripting languages, the post immediately captured my interest. Could this shell provide the command-line power and productivity I’d long loved on Unix systems? Since I had just joined Microsoft six months earlier, I jumped at the chance to finally get to the bottom of a Slashdot-sourced Microsoft Mystery. The post talked about strong integration with the .NET Framework, so I posted a query to an internal C# mailing list. I got a response that the project was called “Monad,” which I then used to track down an internal prototype build. Prototype was a generous term. In its early stages, the build was primarily a proof of concept. Want to clear the screen? No problem! Just lean on the Enter key until your previous commands and output scroll out of view! But even at these early stages, it was immediately clear that Monad marked a revolution in command-line shells. As with many things of this magnitude, its beauty was self-evident. Monad passed full-fidelity .NET objects between its commands. For even the most complex commands, Monad abolished the (until now, standard) need for fragile text-based parsing. Simple and powerful data manipulation tools supported this new model, creating a shell both powerful and easy to use. I joined the Monad development team shortly after that to help do my part to bring this masterpiece of technology to the rest of the world. Since then, Monad has grown to become a real, tangible product—now called Windows PowerShell. So why write a book about it? And why this book? Many users have picked up PowerShell for the sake of learning PowerShell. Any tangible benefits come by way of side effect. Others, though, might prefer to opportunistically learn a new technology as it solves their needs. How do you use PowerShell to navigate the filesystem? 
How can you manage files and folders? Retrieve a web page? This book focuses squarely on helping you learn PowerShell through task-based solutions to your most pressing problems. Read a recipe, read a chapter, or read the entire book—regardless, you're bound to learn something. This book helps you use PowerShell to get things done. It contains hundreds of solutions to specific, real-world problems. For systems management, you'll find plenty of examples that show how to manage the filesystem, the Windows Registry, event logs, processes, and more. For enterprise administration, you'll find two entire chapters devoted to WMI, Active Directory, and other enterprise-focused tasks. Along the way, you'll also learn an enormous amount about PowerShell: its features, its commands, and its scripting language—but most importantly you'll solve problems.

This book consists of five main sections: a guided tour of PowerShell, PowerShell fundamentals, common tasks, administrator tasks, and a detailed reference.

A Guided Tour of Windows PowerShell breezes through PowerShell at a high level. It introduces PowerShell's core features:

- A razor-sharp focus on administrators
- A consistent model for learning and discovery
- Integration with critical management technologies
- A consistent model for interacting with data stores

The tour helps you become familiar with PowerShell as a whole. This familiarity will create a mental framework for you to understand the solutions from the rest of the book.

Chapters 1 through 8 cover the fundamentals that underpin the solutions in this book. This section introduces you to the PowerShell interactive shell, fundamental pipeline and object concepts, and many features of the PowerShell scripting language.

Chapters 9 through 19 cover the tasks you will run into most commonly when starting to tackle more complex problems in PowerShell. This includes working with simple and structured files, Internet-connected scripts, code reuse, user interaction, and more.
Chapters 20 through 32 focus on the most common tasks in systems and enterprise management. Chapters 20 through 25 focus on individual systems: the filesystem, the registry, event logs, processes, services, and more. Chapters 26 and 27 focus on Active Directory, as well as the typical tasks most common in managing networked or domain-joined systems. Chapters 28 through 30 focus on the three crucial facets of robust multi-machine management: WMI, PowerShell Remoting, and PowerShell Workflows.

Many books belch useless information into their appendixes simply to increase page count. In this book, however, the detailed references underpin an integral and essential resource for learning and using PowerShell. The appendixes cover:

- The PowerShell language and environment
- Regular expression syntax and PowerShell-focused examples
- XPath quick reference
- .NET string formatting syntax and PowerShell-focused examples
- .NET DateTime formatting syntax and PowerShell-focused examples
- Administrator-friendly .NET classes and their uses
- Administrator-friendly WMI classes and their uses
- Administrator-friendly COM objects and their uses
- Selected events and their uses
- PowerShell's standard verbs

The majority of this book requires only a working installation of Windows PowerShell. Windows 7, Windows 8, Windows Server 2008 R2, and Windows Server 2012 include Windows PowerShell by default. If you do not yet have PowerShell installed, you may obtain it by following the download link here. This link provides download instructions for PowerShell on Windows XP, Windows Server 2003, and Windows Vista. For Windows Server 2008, PowerShell comes installed as an optional component that you can enable through the Control Panel like other optional components.
The Active Directory scripts given in Chapter 26 are most useful when applied to an enterprise environment, but Test Active Directory Scripts on a Local Installation shows how to install additional software (Active Directory Lightweight Directory Services, or Active Directory Application Mode) that lets you run these scripts against a local installation.

The following typographical conventions are used in this book:

- Indicates menu titles, menu options, menu buttons, and keyboard accelerators
- Indicates new terms, URLs, email addresses, filenames, file extensions, pathnames, directories, and Unix utilities
- Constant width: Indicates commands, options, switches, variables, attributes, keys, functions, types, classes, namespaces, methods, modules, properties, parameters, values, objects, events, event handlers, tags, macros, or the output from commands
- Constant width bold: Shows commands or other text that should be typed literally by the user
- Constant width italic: Shows text that should be replaced with user-supplied values

To obtain electronic versions of the programs and examples given in this book, visit the Examples link. An attribution usually includes the title, author, publisher, and ISBN, for example: "Windows PowerShell Cookbook, Third Edition, by Lee Holmes (O'Reilly). Copyright 2013 Lee Holmes, 978-1-449-32068-3." If you feel your use of code examples falls outside fair use or the permission given here, feel free to contact the publisher.

Writing is the task of crafting icebergs. The heft of the book you hold in your hands is just a hint of the multiyear, multirelease effort it took to get it there. And by a cast much larger than me. The groundwork started decades ago. My parents nurtured my interest in computers and software, supported an evening-only bulletin board service, put up with "viruses" that told them to buy a new computer for Christmas, and even listened to me blather about batch files or how PowerShell compares to Excel. Without their support, who knows where I'd be. My family and friends have helped keep me sane for two editions of the book now.
Ariel: you are the light of my life. Robin: thinking of you reminds me each day that serendipity is still alive and well in this busy world. Thank you to all of my friends and family for being there for me. You can have me back now. :) I would not have written either edition of this book without the tremendous influence of Guy Allen, visionary of the University of Toronto's Professional Writing program. Guy: your mentoring forever changed me, just as it molds thousands of others from English hackers into writers. Of course, members of the PowerShell team (both new and old) are the ones that made this a book about PowerShell. Building this product with you has been a unique challenge and experience—but most of all, a distinct pleasure. In addition to the PowerShell team, the entire PowerShell community defined this book's focus. From MVPs to early adopters to newsgroup lurkers: your support, questions, and feedback have been the inspiration behind each page. Converting thoughts into print always involves a cast of unsung heroes, even though each author tries his best to convince the world how important these heroes are. Thank you to the many technical reviewers who participated in O'Reilly's Open Feedback Publishing System, especially Aleksandar Nikolic and Shay Levy. I truly appreciate you donating your nights and weekends to help craft something of which we can all be proud. To the awesome staff at O'Reilly—Rachel Roumeliotis, Kara Ebrahim, Mike Hendrickson, Genevieve d'Entremont, Teresa Elsey, Laurel Ruma, the O'Reilly Tools Monks, and the production team—your patience and persistence helped craft a book that holds true to its original vision. You also ensured that the book didn't just knock around in my head but actually got out the door. This book would not have been possible without the support from each and every one of you.
https://www.safaribooksonline.com/library/view/windows-powershell-cookbook/9781449359195/pr04.html
Paint Me Gorgeous - Beauty and makeup blog for the gorgeous you.

I'm throwing out.
I am throwing all the Loreal True Match Liquid Foundation out. It is so camera unsafe. If the camera's good you look good but if the camera is shit (maybe the flash is sucky, etc.) then you look like you fell into a pile of flour and then you apply the foundation. Luckily still have RMK and Cinema Secrets. Cinema Secrets for the winner!!!

's Bright Eye Palette - It's here in Msia!!
I just saw it yesterday and it's SO MIGHTY gorgeous! It's a small palette, but really, I really don't think we can use up the colours so soon... no chinese opera makeup here!
Retailing at RM 230! Yes! I'm gonna go buy it!! Lucky lucky bride who gets to use it this weekend. hee hee
The colours are all matte, except for one purple shimmer. (Based on 2 minute observation). Love it love it love it.

Maybelline Volume Express Hypercurl Mascara.
One of the better mascaras that I've tried. Usually retailing at over RM30 (31.90?) but THIS WEEKEND only at Guardian Pharmacies all over Malaysia, RM 20.88. That's a discount of about 30%!
I usually use a new tube for brides, for hygiene reasons, and because it's WATERPROOF. Very important for teary brides, mother of brides, bla bla. Be sure to get

both also look so awesome!
Oh! My! GOD!!! Bobbi Brights Eye Palette! 35 soft matte colours. Perfect for the flappy lids of our Asian eyes. I want! I want! I want!!!
Only USD70. (Dunno how much when reach KL, is it here yet?) I want! I want! I want!
Dany Sanz Limited Edition Palette! (MUFE) 30 colours in matte, satiny and iridescent. (Extremely pigmented colours!) I want! I want! I want! USD 390 (product value of USD 640!) I want!

brand makeup
Sister just bought November 08's edition of Cleo mag. Saw a huge ad for Stage Makeup. Tried to google it but found nothing. Ok I admit I was impatient cos I only scrolled thru 2 pages of search results. What is this brand? Where did it come from? From the lack of info I can only say it's probably a private brand manufactured from some oem factory. Anyone else got info on it? What's the quality

it rains...
...it pours. First my net died. Then my laptop died. What do I do? Can I just say there are too many ugly ppl out there? Of course you nice-ies are gonna say everybody's gorgeous. No nos to the 'internet fugly models'

CPD Warehouse Sale - Friday, 26 Sept 2008
Hey guys, some of you psychos totally love buying lots of makeup and let them rot at the back of your drawer so don't let that stop.
There'll be a Loreal Warehouse Sale (the last one of the year, supposedly) so don't miss this one if your craving for makeup or skincare has not been satisfied!
Details of the Loreal Warehouse Sale September 2008 are as below:
Date: 26th September (Fri) Time: 8.30

am not dead!!
Dear all, sorry for not updating for so long, MY INTERNET IS SCREWED!! Streamyx is terrible!!! I couldn't even go online to check and reply emails, facebook, bla bla... was suffering from total withdrawal symptoms!
Anyhow, some recent beauty buys... my bf went to Paris so I told him to get me the MUFE Flash Palette from any Sephora he sees, but the one he went to didn't have it, so HE WAS SO NICE

eyes red lips.
These days nearly every magazine I flip I see dark eyes and red lips. I heard there is this 'makeup rule' that states that you should always focus on one part only, either your eyes or your lips, but I guess the designers don't give a damn about any rules...
Since I have a hen's night to attend soon, I thought the theme could be dark eyes red lips with long dresses... sure damn outstanding wan!

: RenRen of makeupbyrenren.com
ahh the famous RenRen with her infamous pose. lol you'll see that pose a lot in her blog at MakeupbyRenRen.com. RenRen started out as a makeup fanatic and now while in a university somewhere in Atlanta, she is also pursuing makeup artistry during the weekends or whenever she can. You gotta admire the passion she has, I know I do.
If you love dramatic looking eyes, do go check out her sites

: Malian Eyebrow Template
I have very little brow hair, so the first makeup technique that I tried to learn and master was drawing my eyebrows. It used to take me very long to draw on brows that are symmetrical - nearly 20 minutes each time JUST FOR EYEBROWS. (btw, eventually I learnt that eyebrows are sisters, not twins, if you get what I mean)
So when I saw this eyebrow template when I was shopping over the weekend, I

of the Maybelline Water Shine 3D Collagen contest here!
Hi hi, sorry for the delay in announcing the winners of the Maybelline Water Shine 3D Collagen contest. Here we go.... *drumroll*
And the lucky ladies are....
Grand Prize (1 Maybelline Water Shine 3D Collagen lipstick, 1 Maybelline mascara, 1 Eye Studio Eyeshadow Palette & 1 Angelfit Powder Foundation): Miki (chuahuiling @ gmail .com)
Special Prize (1 Eye Studio Eyeshadow Palette & 1 Angelfit Powder

hurry, the contest is ending in less than 12 hours.
Hey guys, just a short reminder that the Maybelline Water Shine 3D Collagen contest will end TODAY at 12 pm sharp (Malaysian time). So hurry click on the above link and join our contest!
A piece of good news: there's an additional gift of the Eye Studio Eyeshadow Palette + Angelfit Powder Foundation to be given away, so your chance of getting a freebie just got higher. Do it do it! :)

Combing Remover.
I admit I'm not a hardcore Shiseido makeup fan, mostly because their products are so blah. Nothing exciting, nothing really outstanding but an exception would be their base makeup products such as foundation, two way cake (I loved the Poreless Supplist) as Japanese brands make the best base makeup products.
But this product changed my mind about Shiseido. This is the Maquillage Combing Remover

Lipstick Reality Show with FREEBIES.
Herrow... Just got a bunch of Maybelline Water Shine 3D Collagen lipsticks to play with. Fun fun fun! What is better than lots of makeup? Free makeup, of course! :D
The Maybelline Water Shine 3D Collagen comes in 12 shades. Presenting...
The 1 series (The Neutrals): Shade 102, Shade 135. Shade 135 is my favourite of them all!
The 2 series (This looked like a coral on the stick, but turns out pink on

Lauder Vibrator.
hur hur hur okay not vibrator but Estee Lauder Turbolash All Effects Motion Mascara, which translates to VIBRATING MASCARA. The battery is at the top part where the handle is at, and when you take the wand out it VIBRATES!
It claims to do everything: Curl, Volumise, Separate, Lengthen.
Why the hell we need a vibrating mascara for? Anyone got any idea? Because ultimately at the end of it all, you

Makeover for a Girls Night Out.
Okay it's Friday night. After being stuck in the office the whole day, you finally get off work at 7 pm. You've got a Girls Night Out with your best girlfriends (ala Sex and the City) and you don't want to waste any more time being a plain Jane. You take a shower and begin the process of making yourself gorgeous and irresistible.

this cornerless eyelash curler!!
This eyelash curler is suitable for crimping your real lashes together with false lashes as it is cornerless or edgeless.
It is better to use this 'cornerless' curler to curl the false + real lashes since it won't ruin your false lashes if you crimp them wrongly. That happened to me sometimes when I use the normal eyelash curler like Shu Uemura and did not manage to put all the false lashes into the

to Hold a Great Makeup Party.
You need:
1. A bunch of crazy makeup lovers!
2. A treasure chest of makeup!
3. And of course, yummy food with lots of calories for breaks in between!
The Details
Early last week, Paris, Audrey, Ringo, Karen and I attended a makeup party organised by Maybelline exclusively for us. This being the first time I'm attending a makeup party, I braved myself to go out without

hate my camera. i hate myself. i don't know how to use the camera.
i lost my blogroll!!! sorry guys, will add it back soon..

you spot the difference?
Same items used for both eyes, different application techniques. Each brow drawn to match shape of shadow.
Which side do you like better?
Products used:
Dior lilac base
The Face Shop Deep Purple PP406
Shu Uemura Black M990
Bobbi Brown Gel liner
False lashes.
p/s: Can someone please teach me what digital cam setting to use to take nice clear pics of makeup? I can't seem to do it properly. Thanks!!

: Askmewhats
Askmewhats belongs to Nikki, a really sweet happy go lucky chica from Philippines! I mean, I could feel the vibes all the way through the Internet!
Nikki loves makeup, and is well known in the beauty bloggers neighbourhood. Go to any beauty blog and chances are you'll see her giving her kind comments there. Yep, that's how I would describe her, sweet and kind.
Anyhow, if you love all sort of girly

didn't go to the Estee Lauder WH Sale...
...but yet my gorgeous gorgeous sister bought me stuff... is she sweet or is she sweet? I know... *sigh*
BTW, my sister had to wait for 2 hours before she got in at 10 a.m and by then everything already kena sapu by all the other kiasu fellas. Apparently a lot of the staff were holding back some goods for their friends/family members. Terrible right.. but it's okay la, who am I to complain.. MAC

: The Body Shop Vitamin E Gentle Facial Cleansing Wipes.
Do
http://feeds.feedburner.com/PaintMeGorgeous
In .NET Framework 4.0, the classic System.Xml DOM APIs are effectively legacy and are maintained mainly for compatibility reasons; the recommended way for a Web developer to create an XML file is LINQ to XML. The LINQ to XML classes live in the System.Xml.Linq namespace, and the following ones are the most important, because they cover 90% of cases:

- XAttribute – Represents an attribute
- XComment – Represents a comment
- XDocument – Represents the XML document
- XElement – Represents a node in the XML
- XDeclaration – Represents the XML declaration

1. The Web developer creates an XDocument instance and, in its constructor, passes the XElement instance that creates the root node.

2. The XElement constructor takes a list of XElement instances that represent its children. Based on this design, the whole XML document can be created in a single statement by nesting XElement instances. If the code is well structured, it is easy to understand.

XDocument myDoc = new XDocument(
    new XDeclaration("1.0", "utf-8", "true"),
    new XComment("my comment"),
    new XElement("Books",
        new XElement("Book",
            new XAttribute("ISBN-13", "978-0545139700"),
            new XElement("Title", "Harry Potter and the Deathly Hallows")
        )
    )
);

The result is the following:

<?xml version="1.0" encoding="utf-8"?>
<!--my comment-->
<Books>
  <Book ISBN-13="978-0545139700">
    <Title>Harry Potter and the Deathly Hallows</Title>
  </Book>
</Books>

3. Once the XDocument instance holds the XML, the Web developer can use the Save method to write the XML string to a file at any path:

myDoc.Save(path);
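For comparison, the same nest-as-you-construct idea exists outside C#. Here is a sketch of the equivalent document built with Python's standard xml.etree.ElementTree (this is an illustration added for contrast, not part of the original article):

```python
import xml.etree.ElementTree as ET

# Build the same document as the C# example: a Books root
# containing one Book with an ISBN-13 attribute and a Title child.
book = ET.Element("Book", {"ISBN-13": "978-0545139700"})
title = ET.SubElement(book, "Title")
title.text = "Harry Potter and the Deathly Hallows"

root = ET.Element("Books")
root.append(book)

# Serialize with an XML declaration, mirroring XDocument.Save().
xml_bytes = ET.tostring(root, encoding="utf-8", xml_declaration=True)
print(xml_bytes.decode("utf-8"))
```

The structure of the code mirrors the structure of the output in both languages, which is the main readability win of the LINQ to XML design.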
https://www.howtoasp.net/how-to-create-an-xml-file-by-using-linq-to-xml-in-c/
TL;DR: On Wednesday, February 7, 2018, Laravel 5.6 was released to the public. It's a major release of the most popular PHP framework on GitHub as of this writing. Furthermore, Spark 6.0 was also released alongside Laravel 5.6. According to the policy laid down by Taylor Otwell and the core team, major Laravel framework releases ship every six months (February and August). In this article, I'll cover the new features in Laravel 5.6 and several other changes and deprecations.

Laravel is the most popular, open-source PHP framework as of this writing. It is designed for building web applications with an expressive and elegant syntax. With Laravel, application development is fast because it ships with a lot of features out of the box. Without further ado, let's dive right into Laravel 5.6.

What's new in Laravel 5.6?

1. Support For Argon Password Hashing

Argon2, the recommended password hashing algorithm of the Password Hashing Competition, is a modern algorithm for securely hashing passwords. It comes in two distinct flavors, Argon2i and Argon2d. PHP 7.2 recently added support for Argon2i password hashing. Therefore, Michael Lundbol took the initiative to add support for Argon hashing in Laravel.

Initiative to add Argon Password Hashing

The bcrypt driver is the default driver for password hashing in Laravel. However, Laravel 5.6 now supports argon.

Config file for hashing

Note: Laravel 5.6 ships with a hashing config file as displayed above.

2. Better Support For Dynamic Rate Limiting

One of the beautiful built-in features of Laravel is API Rate Limiting. Before now, Laravel's rate limiting configuration involved specifying a hard-coded number of requests on a group of routes. However, Laravel 5.6 adds icing to the cake by allowing you to specify a maximum number of requests based on an authenticated User model attribute.
Route::middleware('auth:api', 'throttle:rate_limit,1')->group(function () {
    Route::get('/users', function () {
        // do something awesome
    });
});

In the code above, the rate_limit parameter is an attribute of the User model in a Laravel 5.6 application.

3. Model Serialization on Steroids

Before now, there was a persistent bug with queued models: when queued models were finally processed, they arrived without their loaded relationships in place. In Laravel 5.6, huge improvements have been made to ensure that relationships loaded on queued models are automatically re-loaded when the job is processed by the queue.

4. Improved Logging

There is a new logging config file in Laravel 5.6. In this file, you can set up logging stacks that send log messages to various handlers.

'channels' => [
    'stack' => [
        'driver' => 'stack',
        'channels' => ['syslog', 'slack'],
    ],
    'syslog' => [
        'driver' => 'syslog',
        'level' => 'debug',
    ],
    'slack' => [
        'driver' => 'slack',
        'url' => env('LOG_SLACK_WEBHOOK_URL'),
        'username' => 'Laravel Log',
        'emoji' => ':boom:',
        'level' => 'critical',
    ],
],

The stack driver allows you to combine multiple channels, in this case syslog and slack, into a single log channel as shown above. Furthermore, with the tap array on a channel's configuration, you can easily customize Monolog for an existing channel like so:

'single' => [
    'driver' => 'single',
    'tap' => [App\Logging\CustomizeFormatter::class],
    'path' => storage_path('logs/laravel.log'),
    'level' => 'debug',
],

Note: Monolog is a library that sends your application logs to files, sockets, inboxes, databases and various web services. It's a comprehensive library with various log handlers.

5. Single Server Task Scheduling

Before now, if your app ran on multiple servers and the task scheduler was active on those servers, then your scheduled tasks also executed multiple times. However, in Laravel 5.6, you can now schedule your task to execute on just a single server if your app runs on multiple servers.
"In Laravel 5.6, you can now schedule your task to execute on just a single server if your app runs on multiple servers." Tweet This

$schedule->command('launch:flasheavy')
    ->tuesdays()
    ->at('12:00')
    ->onOneServer();

6. Beautiful Error Reporting With Collision

Collision is an awesome package developed and maintained by Nuno Maduro. It is a detailed, beautiful and intuitive error handler for console/command-line PHP applications. Laravel 5.6 now ships with Collision via the dev composer dependency. When working with your Laravel app via the command line, Collision provides an awesome error reporting interface like so:

Collision Integration with Laravel 5.6

7. Elevated Eloquent Date Formatting

Laravel 5.6 provides a subtle way to cast Eloquent date model attributes to a specific date format. All you need to do is to specify the desired date format in the $casts array.

"Laravel 5.6 provides a subtle way to cast Eloquent date model attributes to a specific date format." Tweet This

Now, you can customize the date model attributes like so:

protected $casts = [
    'date_enrolled' => 'date:Y-m-d',
    'date_evicted' => 'datetime:Y-m-d H:00',
];

When the model is cast to JSON or array output, the attributes will be formatted according to the date formats provided above, like so:

date_enrolled: 2005-02-19
date_evicted: 2018-02-20 8:30

8. Aliasing Blade Components For Easier Access

This feature is very handy. Accessing a blade component in a sub-directory is easier via aliasing. With the component method, you can alias components.card to card, assuming the card component's file is resources/views/components/card.blade.php.

Blade::component('components.card', 'card');

Once it has been aliased, you can invoke the component like so:

@card
    This is your Valentine card.
@endcard

9. Broadcasting Channel Classes

In Laravel 5.6, you can now generate broadcasting channel classes via the make:channel Artisan command. The generated channel class will be placed in the App/Broadcasting directory.
php artisan make:channel PurchaseChannel

Registering the channel is as easy as calling the Broadcast::channel method below in the routes/channels.php file:

use App\Broadcasting\PurchaseChannel;

Broadcast::channel('purchase.{purchase}', PurchaseChannel::class);

In the PurchaseChannel class, you can add the authorization logic in the join method. Before now, the authorization logic was placed in the channel authorization Closure.

<?php

namespace App\Broadcasting;

use App\User;
use App\Purchase;

class PurchaseChannel
{
    ....

    /**
     * Authenticate the user's access to the channel.
     *
     * @param \App\User $user
     * @param \App\Purchase $purchase
     * @return array|bool
     */
    public function join(User $user, Purchase $purchase)
    {
        return $user->id === $purchase->user_id;
    }
}

10. Daring Upgrade Of Symfony Components and Bootstrap

All the Symfony components used by Laravel 5.6 have been bumped to the ~4.0 release. Furthermore, all the front-end scaffolding of Laravel 5.6 and Spark 6.0 has been bumped to v4. I covered all the major additions and deprecations of Bootstrap 4 in this article.

Deprecations and Other Updates

- Generating an API resource controller can now be done by using the --api switch when executing the make:controller command.
- Laravel 5.6 introduced the helper methods Arr::wrap(), class_uses_recursive(), Str::uuid(), and Str::orderedUuid(), the latter generating a timestamp-first UUID that's more easily and efficiently indexed by databases like MySQL.
- PHPUnit, upgraded to v7.
- Laravel Mix, upgraded to v2.
- The deprecated Artisan optimize command has been removed.
- Added support for customizing the mail message building in ResetPassword::toMail().
- Added a policies() method to AuthServiceProvider to retrieve all the policies defined by the provider.
- Two new blade directives have been added to the framework, @csrf and @method.
- Support for PostgreSQL comments was added. Furthermore, Laravel 5.6 now has better support for enumeration columns.
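Stepping back to feature 2 above: Laravel's throttle middleware does the bookkeeping for you, but the underlying idea is just a per-user request counter over a sliding time window. Here is a small, framework-agnostic Python sketch of that idea (the class and method names are invented for illustration; this is not Laravel's implementation):

```python
import time
from collections import defaultdict

class RateLimiter:
    """Allow each user up to rate_limit requests per window seconds."""

    def __init__(self, window=60):
        self.window = window
        self.hits = defaultdict(list)  # user_id -> timestamps of recent hits

    def allow(self, user_id, rate_limit, now=None):
        # rate_limit plays the role of the User model attribute in Laravel.
        now = time.monotonic() if now is None else now
        recent = [t for t in self.hits[user_id] if now - t < self.window]
        self.hits[user_id] = recent
        if len(recent) >= rate_limit:
            return False  # Laravel would answer 429 Too Many Requests here
        recent.append(now)
        return True

limiter = RateLimiter(window=60)
```

Because the limit is looked up per user at request time rather than hard-coded on the route group, different users can have different ceilings, which is exactly what the new `throttle:rate_limit,1` syntax enables.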
Upgrading to Laravel 5.6

Laravel 5.6 requires PHP >= 7.1.3, and the estimated upgrade time from Laravel v5.5 is about ten to thirty minutes. Laravel Shift can help automate the upgrade.

Aside: Securing Laravel APIs with Auth0

Securing Laravel APIs with Auth0 is very easy and brings a lot of great features to the table. With Auth0, you only have to write a few lines of code to get:

- A solid identity management solution, including single sign-on
- User management
- Support for social identity providers (like Facebook, GitHub, Twitter, etc.)
- Enterprise identity providers (Active Directory, LDAP, SAML, etc.)
- Our own database of users

You'll need an Auth0 account to manage authentication. You can sign up for a free account here. Next, set up an Auth0 API.

Set Up an API

Go to APIs in your Auth0 dashboard and click on the "Create API" button. Enter a name for the API. Set the Identifier to a URL (existent or non-existent). The Signing Algorithm should be RS256.

Create API on Auth0 dashboard

You're now ready to implement Auth0 authentication in your Laravel API.

Dependencies and Setup

Install the laravel-auth0 package by entering the following in your terminal:

composer require auth0/login:"~5.0"

Generate the laravel-auth0 package config file:

php artisan vendor:publish

After the file is generated, it will be located at config/laravel-auth0.php. Double check your values match those found here. Now you need to open your .env file to create and fill in those variables. Add the following anywhere in your .env file:

AUTH0_DOMAIN=kabiyesi.auth0.com
AUTH0_CLIENT_ID=xxxxxxxxxxxxxxxxxx
AUTH0_CLIENT_SECRET=xxxxxxxxxxxxxxxxx
AUTH0_AUDIENCE=
AUTH0_CALLBACK_URL=null

Now replace all of those placeholders with the corresponding value from your Auth0 dashboard.

Activate Provider and Facade

The laravel-auth0 package comes with a provider called LoginServiceProvider. Add this to the list of application providers:

// config/app.php
'providers' => array(
    // ...
    \Auth0\Login\LoginServiceProvider::class,
);

If you would like to use the Auth0 Facade, add it to the list of aliases.

// config/app.php
'aliases' => array(
    // ...
    'Auth0' => \Auth0\Login\Facade\Auth0::class,
);

Next, bind the Auth0 user repository in the app's AppServiceProvider so the user information can be accessed:

// app/Providers/AppServiceProvider.php
public function register()
{
    $this->app->bind(
        \Auth0\Login\Contract\Auth0UserRepository::class,
        \Auth0\Login\Repository\Auth0UserRepository::class
    );
}

Configure Authentication Driver

The laravel-auth0 package comes with an authentication driver called auth0. This driver defines a user structure that wraps the normalized user profile defined by Auth0. It doesn't persist the object but rather simply stores it in the session for future calls. This is adequate for basic testing or if you don't need to persist the user. At any point you can call Auth::check() to determine if there is a user logged in and Auth::user() to retrieve the wrapper with the user information. Configure the driver in config/auth.php to use auth0.

// app/config/auth.php
// ...
'providers' => [
    'users' => [
        'driver' => 'auth0'
    ],
],

Secure API Routes

Your API routes are defined in routes/api.php for Laravel 5.3+ apps.

// routes/api.php
<?php

use Illuminate\Http\Request;

/*
|--------------------------------------------------------------------------
| API Routes
|--------------------------------------------------------------------------
|
| Here is where you can register API routes for your application. These
| routes are loaded by the RouteServiceProvider within a group which
| is assigned the "api" middleware group. Enjoy building your API!
|
*/

Route::get('/public', function (Request $request) {
    return response()->json(["message" => "Hello from a public endpoint! You don't need any token to access this URL..Yaaaay!"]);
});

Route::get('/wakanda', function (Request $request) {
    return response()->json(["message" => "Access token is valid. Welcome to this private endpoint. You need elevated scopes to access Vibranium."]);
})->middleware('auth:api');

Now you can send a request to your protected endpoint with an access_token:

curl --request GET \
  --url \
  --header 'authorization: Bearer <ACCESS TOKEN>'

Once a user hits the api/wakanda endpoint, a valid JWT access_token will be required before the resource can be released. With this in place, private routes can be secured.

More Resources

That's it! You have an authenticated Laravel API with protected routes. To learn more, check out the following resources:

Conclusion

Laravel 5.6 came loaded with new features and significant improvements. Another major release will be out by August and some of the features shipping with the next major release will be unleashed at Laracon 2018. Get your tickets already! Have you upgraded to Laravel v5.6 yet? What are your thoughts? Let me know in the comments section! 😊
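The token check itself happens in Laravel's auth middleware, but the shape of the request above is simple: the client sends an Authorization header with a Bearer scheme. A minimal, framework-agnostic Python sketch of extracting that token (illustrative only; a real API must also verify the JWT's signature and claims):

```python
def bearer_token(headers):
    """Pull the Bearer token out of a request's headers, if present."""
    auth = headers.get("authorization", "")
    scheme, _, token = auth.partition(" ")
    if scheme.lower() != "bearer" or not token:
        return None  # missing or non-Bearer credentials -> reject (401)
    return token.strip()
```

Extracting the token is only step one; middleware like `auth:api` then validates it (signature, expiry, audience) before the route handler runs.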
https://auth0.com/blog/laravel-5-6-release-what-is-new/
Display a formatted error message, and then exit (varargs)

#include <err.h>

void verr( int eval, const char *fmt, va_list args );
void verrx( int eval, const char *fmt, va_list args );

libc: Use the -l c option to qcc to link against this library. This library is usually included automatically.

The err() and warn() family of functions display a formatted error message on stderr. For a comparison of the members of this family, see err().

The verr() function produces a message that consists of:

The verrx() function produces a similar message, except that it doesn't include the string associated with errno. The message consists of:

The verr() and verrx() functions don't return, but exit with the value of the argument eval.
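The behaviour described above (program name, formatted message, optionally the errno string, then exit with eval) can be mimicked outside C. A rough Python sketch, not the QNX libc implementation; the helper names are invented for the illustration:

```python
import os
import sys

def verrx_message(progname, fmt, *args):
    # verrx()-style: program name plus the formatted message only.
    return f"{progname}: {fmt % args}"

def verr_message(progname, errnum, fmt, *args):
    # verr()-style: same, followed by the string associated with errno.
    return f"{verrx_message(progname, fmt, *args)}: {os.strerror(errnum)}"

def verr(eval_code, fmt, *args, errnum=None):
    """Print the message on stderr and exit with eval_code (does not return)."""
    prog = os.path.basename(sys.argv[0])
    if errnum is None:
        msg = verrx_message(prog, fmt, *args)
    else:
        msg = verr_message(prog, errnum, fmt, *args)
    print(msg, file=sys.stderr)
    sys.exit(eval_code)
```

As with the C originals, the exiting variant never returns, so any cleanup must be arranged beforehand (in C via atexit(), in Python via try/finally or atexit).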
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/v/verr.html
Outlook from the Managed World - Posted: Oct 31, 2006 at 2:36 AM - 3,431 Views - 19 Comments

When I read the announcement that Visual Studio 2005 Tools for Microsoft Office (VSTO 2005) had added support for Microsoft Outlook (currently in beta), I downloaded it immediately! After all, my digital universe revolves around email, and being able to easily extend Outlook in custom ways has always been a wish of mine. Writing add-ins for Outlook certainly isn't a new capability, but it was never as easy to write useful, secure code as it is now. My first step was to download the VSTO Outlook Beta installer. Then, to speed up the learning curve, I downloaded the conveniently supplied snippets. Including the snippets, even this early build is immensely helpful and a great way to jumpstart your creativity. As you will see, the extensibility is quite broad, and in fact we will only be able to scratch the surface with this article. Don't worry though; I'll revisit it again in future columns. The application we will create today is basically a Hello World for Outlook. We'll step through creating an email message and a contact. The supplied code samples in this article are written in C#, however Visual Basic source code is also included with the code download for this article. Developing with VSTO requires a version of Visual Studio greater than Express Edition. The current beta of Visual Studio can be downloaded from. Of course, Microsoft Office Professional 2003 (Service Pack 1) is also a requirement for this article, or at least Outlook 2003 SP1. Be aware that Outlook Express will not function as a replacement for Outlook. The first step after installing the above is to create an Outlook Add-in project in the Office project templates section in Visual Studio.
Notice that alongside the Excel and Word templates, there is a new option for Outlook. Specify a name and click OK. Creating the Outlook Add-in project (click image to zoom) Actually working with the new features could have proved to be difficult, but it turns out that the object model and available events just make sense. The project template adds a few references for general VSTO and specifically for Outlook. It also creates a setup project to simplify deployment. The main class file is ThisApplication.cs and contains a Startup and Shutdown event handler similar to other VSTO project types. To avoid presenting too much material at once, in our example creating these items will occur when the add-in starts up and we will not use a menu or any user input to trigger it. Though not realistic, it makes it easier to step into this new functionality. Keep in mind though, that you can create custom menus on the standard menu bar, or respond to events such as the currently active folder, new messages arriving, or an appointment reminder. Creating mail items, contacts, and folders is easy using the snippets, but be sure to read the readme that appears after installing them. Though the snippets are installed, they will not appear in the Insert Snippet menu item until added through the Visual Studio Snippets Manager (Tools | Code Snippets Manager menu command). In order to create a new mail message, you will first need to obtain a reference to an Outlook.MAPIFolder object. This encapsulates an Outlook folder like Inbox, Draft, Outbox, or user-created folders. 
In order to get the default Outbox folder, call the GetDefaultFolder method of the MAPI namespace object:

Visual C#

Outlook.MAPIFolder outBoxfolder = this.GetNamespace("MAPI").GetDefaultFolder(
    Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderOutbox);

Visual Basic

Dim outBoxfolder As Outlook.MAPIFolder = Me.GetNamespace("MAPI").GetDefaultFolder(
    Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderOutbox)

Correction (9/7/2005, thanks to Sue Mosher, Outlook MVP): You should actually use the Application.CreateItem method, which is the standard (and simpler) technique for creating a new item in one of the default folders.

Once you have a reference to the folder, you can add an item to it. Because the Outbox is generally intended for mail items, that is what we will add, but remember that Outlook supports any item type being added to any folder. To create and add a new mail item, call the Items.Add method of the folder object:

Visual C#

Outlook.MailItem mailItem = (Outlook.MailItem)outBoxfolder.Items.Add(Outlook.OlItemType.olMailItem);

Visual Basic

Dim mailItem As Outlook.MailItem = outBoxfolder.Items.Add(Outlook.OlItemType.olMailItem)

At this point you have a mail item to manipulate. You can set the Subject, Body, and To properties, add a reminder, attach files, display the message, and of course send it:

Visual C#

mailItem.Subject = "Coding4Fun Sample";
mailItem.Body = "This email was created using VSTO 2005 Outlook support.";
mailItem.To = "emailaddress@emailaddress";

Visual Basic

mailItem.Subject = "Coding4Fun Sample"
mailItem.Body = "This email was created using VSTO 2005 Outlook support."
mailItem.To = "emailaddress@emailaddress"

Sending the message simply requires calling the Send method, while displaying the message is accomplished with the Display method (modally or not). If you do not call the Send or Save method, the message will not persist unless you display the message to allow the user to save or send it. This model is consistent with the various item types.

If you have added the snippets, you can create a mail item in two quick steps: insert the snippet, and then fill in the placeholder fields. You should probably create a new method to contain your message sending code, then right-click and select Insert Snippet, and navigate to the Outlook | Create sub-menu. You will see all of the above code, including helpful TODO comments, and commented out Send and Display calls.

A sample message (click image to zoom)

There is no snippet for adding a task item (yet anyway), so for our final sample, let's create a new task. The API is so well thought-out that once you can create an email, creating a task is a snap. As before, obtain a reference to the folder, this time the Tasks folder. Also as before, remember that any folder will hold a task. This feature is useful for linking items and maintaining isolation between business or project boundaries. Once again, the folder is one of the defaults, so call GetDefaultFolder, this time with the olFolderTasks enumerated folder name:

Visual C#

Outlook.MAPIFolder tasksFolder = this.GetNamespace("MAPI").GetDefaultFolder(
    Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderTasks);

Visual Basic

Dim tasksFolder As Outlook.MAPIFolder = Me.GetNamespace("MAPI").GetDefaultFolder(
    Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderTasks)

Next, add a new instance of a task item. Notice that you are not calling a constructor. Each item type is actually an interface. Items must be created as children of a folder, not as stand-alone objects:

Visual C#

Outlook.TaskItem taskItem = (Outlook.TaskItem)tasksFolder.Items.Add(Outlook.OlItemType.olTaskItem);

Visual Basic

Dim taskItem As Outlook.TaskItem = tasksFolder.Items.Add(Outlook.OlItemType.olTaskItem)

Next, set the appropriate properties. In this case, the task's subject, owner, percentage complete, and status.
Status is an enumerated type representing Complete, Deferred, In Progress, Not Started, and Waiting.

Visual C#

taskItem.Subject = "Finish Coding4Fun article!";
taskItem.Owner = "Bob";
taskItem.PercentComplete = 0;
taskItem.Status = Microsoft.Office.Interop.Outlook.OlTaskStatus.olTaskInProgress;

Visual Basic

taskItem.Subject = "Finish Coding4Fun article!"
taskItem.Owner = "Bob"
taskItem.PercentComplete = 0
taskItem.Status = Microsoft.Office.Interop.Outlook.OlTaskStatus.olTaskInProgress

The final step is to save the task. This step must be called, or the task will be lost when Outlook is closed or the object goes out of scope.

Visual C#

taskItem.Save();

Visual Basic

taskItem.Save()

VSTO support for Outlook makes it painless to create add-ins for Outlook in managed code. You can create all of the common entity types such as email messages, contacts, and tasks, respond to events, create HTML folder views, and also add application menus. Download Visual Studio 2005, the beta VSTO Outlook installer, then give this application a try. Get started at.

Comments:

Compile error for GetNamespace(...). Any suggestions? Thanks! Eva

After install the downloaded code, the project can not be loaded to VS 2005.

I installed VSTO 2005. However the Office folder does not show in my VS 2005 under my Visual C#. I tried re-booting my system and uninstall, reinstall. It still does not show. Do you know why? Thanks!

this code should also in c# users

how can we receive mails from inbox folder?

I am getting a compiler error for GetNamespace("MAPI") please tell me what to do

do you have VSTO and VS Pro?

I got the GetNamespace error when I first tried the code as well. I had to change it to the following to get it to work:

Outlook.MAPIFolder outBoxFolder = (Outlook.MAPIFolder)outlookObj.Session.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderOutbox);

@Chance: Thanks

how can we search for a particular mail in inbox

Whats does outlookObj refer to in the code above.

@Darren Brown: I see no reference to outlookObj, what line number / code base are you looking at?

Hi. I use outlook tasks so often, but in the real world I categorize tasks importance as follows: 1. Urgent and Important 2. Urgent 3. Important 4. Not urgent and not important. And I want to change this in outlook when creating new tasks: I want to change the default "High, Low and Normal" values for the importance field... Can you please help me with that?

Does anybody load .msg file by using VSTO? How can i do it? tnx

[code]
public void CreateMessage()
{
    // Create a new mail item.
    Microsoft.Office.Interop.Outlook._Application outlookObj = new Microsoft.Office.Interop.Outlook.Application();
    Microsoft.Office.Interop.Outlook.MAPIFolder outBoxFolder = (Microsoft.Office.Interop.Outlook.MAPIFolder)outlookObj.Session.GetDefaultFolder(Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderOutbox);
    Microsoft.Office.Interop.Outlook.MailItem mailItem = (Microsoft.Office.Interop.Outlook.MailItem)outBoxFolder.Items.Add(Microsoft.Office.Interop.Outlook.OlItemType.olMailItem);
    mailItem.Subject = "Coding4Fun Sample"; //TODO: Change the Subject field of the e-mail item.
    mailItem.Body = "What ever you want in the body"; //TODO: Change the body of the e-mail item.
    mailItem.To = "emailaddress@emailaddress"; //TODO: Change the recipient's address.
    mailItem.Display(false); //true to make the window modal. The default value is false.
}

public void CreateTask()
{
    // Create a new task item.
    Microsoft.Office.Interop.Outlook._Application outlookObj = new Microsoft.Office.Interop.Outlook.Application();
    Microsoft.Office.Interop.Outlook.MAPIFolder tasksFolder = (Microsoft.Office.Interop.Outlook.MAPIFolder)outlookObj.Session.GetDefaultFolder(Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderTasks);
    Microsoft.Office.Interop.Outlook.TaskItem taskItem = (Microsoft.Office.Interop.Outlook.TaskItem)tasksFolder.Items.Add(Microsoft.Office.Interop.Outlook.OlItemType.olTaskItem);
    taskItem.Subject = "Finish Coding4Fun article!";
    taskItem.Owner = "Bob";
    taskItem.PercentComplete = 0;
    taskItem.Status = Microsoft.Office.Interop.Outlook.OlTaskStatus.olTaskInProgress;
    taskItem.Save();
}
[/code]

Can I set mailItem.from of MailItem ?

@Milind no you cannot looking at the API.

@vusman requires Outlook for the APIs to succeed. VSTO are extensions for developing against Office. This is pretty old article as well, stuff may have changed in the new versions since then. is the VSTO page for everything.

can this code work for users that dont have Microsoft Outlook installed on their PCs
http://channel9.msdn.com/coding4fun/articles/Outlook-from-the-Managed-World
From: Wang Weiwei (wwwang_at_[hidden])
Date: 2006-07-15 11:15:20

Hello,

I have done a test on WinXP + VS.NET 2003 with the regex lib, as follows. I created a new 'Win32 Console Project' and selected MFC and pre-compiled header support, leaving the other settings at their defaults. Then I created a skeleton project that can compile. Then I added #include <boost/regex.hpp> in the main cpp file. When I then compiled the program in debug mode, I got a "error C2665" at regex_raw_buffer.hpp(177) - "operator new". When I compiled it in release mode, it's OK. Anybody know about this phenomenon?

Thanks
Max

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2006/07/107853.php
UPM has been around for nearly 4 years now, and seems to have been well adopted, to the point where we're starting to see some support calls along the lines of "I've outgrown my original user store and I need to migrate my users to a more scalable store, without losing my data. How?" There are some obvious solutions around using tools like robocopy and Profile Nurse (sepago.de), but they don't fit all scenarios, as they tend to require some downtime, or high-bandwidth connections (or both). So I'm going to talk about a couple of other techniques that might help upgrade a live installation. One of those techniques is a bit "off-the-wall", but maybe it'll inspire you.

Technique #1 – DFS is your friend

Suppose in the early days of my UPM deployment I had defined my profile store (i.e. Path to user store) to be \\MyCorpLonSrv01\profile$\#cn#

Imagine now that we've kept adding users, plus we really need to add a couple more file servers in the new Tokyo and Seattle offices, which are planned to have their own farms. We can't really afford any downtime, either. And we'd rather do it without adding any new OUs for the moment. Scaling UPM to thousands of users really needs DFS Namespaces, and it's this that we'll use.

Step 1 is to set up the new file servers \\MyCorpTkoSrv01 and \\MyCorpSeaSrv01 and create profile shares on each of them.

Step 2 is to create a DFS Namespace called (say) \\MyCorp\Profiles. Beneath this namespace we can now create folders, and assign them to targets. This is very similar to the technique described in eDocs at . Typically we'll use a department-based or a geo-based attribute from the AD user object to load balance. The eDocs page suggests #l# (the location attribute) as a suitable example.
So at this point we have the following folders defined in a DFS namespace, pointing to the targets shown:

\\MyCorp\Profiles\London --> \\MyCorpLonSrv01\profile$
\\MyCorp\Profiles\Tokyo --> \\MyCorpTkoSrv01\profile$
\\MyCorp\Profiles\Seattle --> \\MyCorpSeaSrv01\profile$

Step 3 is to add or fix all the AD user objects so that every user has a location attribute set to one of London, Tokyo and Seattle. After that, you can simply change the path to user store policy to be \\MyCorp\Profiles\#l#\#cn#

Simple? Maybe too simple… But what if some early Tokyo users already have profiles on MyCorpLonSrv01, or if MyCorpLonSrv01 isn't really powerful enough to handle all the users on it. Or maybe there's a merger/acquisition and we're integrating hundreds of new users from the NewCorp domain? This can be a real headache, as #cn# probably isn't unique – consider that both MyCorp and NewCorp may have employees called "John Smith", for example – we really need to switch the profiles to use %USERNAME%.%USERDOMAIN% rather than #cn#

We need a mechanism to migrate (some) users off MyCorpLonSrv01. Of course, if we don't mind a bit of downtime, it's easy. Use robocopy and/or Profile Nurse to migrate selected profiles to new servers, change the Path to user store and let everyone back on again. And we'll need quite a bit of bandwidth if we want to get all profiles copied over a long weekend, which may not work over intercontinental links. But is there a way to do it without downtime, and without huge bandwidth? Maybe…

Technique #2 – UPM Profiles are just like MS Roaming Profiles – Really!

(Warning – This is the off-the-wall one!)

UPM has a feature to migrate MS Roaming Profiles to UPM Profiles. If UPM finds that a user doesn't yet have a UPM Profile, it can be configured to migrate an existing Roaming Profile. But what's the difference between a UPM Profile and a Roaming Profile? It turns out there's very little difference at all.
This is what you'd expect – if there were major differences between UPM Profiles and Roaming Profiles, then there would be application compatibility issues. How little a difference? On the local machine there's a registry value in HKLM that tells UPM that the local profile for a particular user SID is actually a UPM profile. And there are a couple of extra copies of the registry (NTUSER.DAT.START and NTUSER.DAT.NET) that UPM uses to merge registry changes – they're temporary files that get deleted at logoff anyway.

On the profile store, there are a couple of extra files (PmCompatibility.ini and UPMSettings.ini) that record which machines have been sharing the profile, and how they're configured. There's also the Pending area, which is a temporary area used for managing profile updates made by concurrent sessions. If there are no sessions active, the Pending area will be empty.

So maybe we can convince UPM that our old UPM Profile is actually a Roaming Profile and get it to migrate it to a new UPM Profile in a new location. Let's see what we have to do…

First of all, we have to get rid of that registry value in HKLM. If that's around, UPM will find a UPM profile on the local machine but no UPM profile in the user store, and UPM (rightly) regards this as an error condition. To get rid of that registry value, we need to make sure UPM deletes the locally cached profile off the machine on logoff. So we set the policy "Delete locally cached profiles on logoff". That gets rid of the profile, and that troublesome registry value too. It doesn't have much effect on performance, as you should be using "Profile streaming" anyway.

Now we have to convince (=trick) UPM to treat the old UPM Profile as if it were a Roaming Profile, and migrate it.
UPM looks for Roaming Profiles by querying Group Policy and the AD user account (the user object), in the following order:

- Group Policy TS profile directory (if in a TS session)
- User account TS profile directory (if in a TS session)
- Group Policy "normal" profile directory (if on Vista/7/2008)
- User account "normal" profile directory

So we need to set up one of these sources to point to the old user store, and all will be well. The Group Policy works by putting the path into a registry key and value (one for the TS profile, one for the normal profile). The equivalent attributes in the AD user account are TerminalServicesProfilePath or ProfilePath respectively.

The two policies are machine policies, so may not be suitable in all cases. However, it should be straightforward to script an AD update that overwrites either TerminalServicesProfilePath or ProfilePath for each user to reference the "old" UPM Profile path. PowerShell, for example, provides Get-ADUser and Set-ADUser cmdlets that let a suitably-privileged administrator update the AD user object.

So now we have a way to:

- Clear out the locally cached copy of the UPM profile
- Convince UPM that the old UPM profile is in fact a Roaming Profile, by setting the AD user attribute TerminalServicesProfilePath to the old UPM profile path (for example, changing John Smith's attribute to "\\MyCorpLonSrv01\profile$\John Smith")

It just remains to point UPM at the new profile store (by changing the Path to user store policy to \\MyCorp\Profiles\#l#\%USERNAME%.%USERDOMAIN%) and UPM will migrate the old UPM profiles to the new user store, complete with name changes, and moving to new servers.

The best and the worst of this solution is that migrations happen when users log on. If the logons are staggered, that's great – your profiles will get migrated without straining your intercontinental link. But if all your users log on at once – the classic "thundering herd" scenario – then this solution is no better than robocopy.
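The four-source lookup order described in this technique is a simple first-match-wins search. A sketch of that logic (Python for illustration; the source names and paths are hypothetical examples, not UPM's internal API):

```python
# Sketch of UPM's Roaming Profile path lookup: sources are checked in
# order and the first non-empty one wins. Values are hypothetical.
def resolve_roaming_path(sources):
    for name, value in sources:
        if value:
            return name, value
    return None, None

sources = [
    ("GP TS profile dir",       ""),  # not set via Group Policy
    ("User TS profile dir",     r"\\MyCorpLonSrv01\profile$\John Smith"),
    ("GP normal profile dir",   ""),
    ("User normal profile dir", r"\\MyCorpLonSrv01\profile$\John Smith"),
]

winner, path = resolve_roaming_path(sources)
print(winner, path)
# -> User TS profile dir \\MyCorpLonSrv01\profile$\John Smith
```

This is why scripting the TerminalServicesProfilePath attribute is enough: in a TS session it is consulted before the "normal" profile sources, so it steers UPM to the old store without touching machine policy.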
The mitigation is to move small batches of users at a time, moving them to a new farm with separate machines and the new user store, but still able to access the old user store. Sadly this involves setting up either a new OU, or at least putting WMI filtering on, to give the new farm a different Path to user store policy.

Conclusion

The moral of this tale is to get your Path to user store right in the first place. If you don't, it'll be painful later. Some specific points:

- Don't use #cn#. Use %USERNAME%.%USERDOMAIN%
- Do include a user attribute in the Path to user store that will help you load balance across file servers and across geos when your deployment grows. Make sure your user attributes are also valid as NTFS folder names!
- Plan from the start to use DFS Namespaces for scalable profile deployments

Hope that helps somebody…

PostScript

Some of you will be asking, has Bill actually tried any of this out? No, but that's not the point. The point is to start some discussion on what issues UPM users have faced around this area, with a view to collecting some best practices and maybe setting UPM development direction.

- Is this a realistic migration scenario?
- How much control do UPM users typically have over their AD environment? Can you set up an OU at the drop of a hat? Or do you have to wait 6 weeks to get even the tiniest changes made?
- Have you tried something like this, and did it work?
- What other techniques have you found useful?
- What new features or tools would be useful to you, to help in such a scenario?

Disclaimer

WARNING! Modifying the Active Directory incorrectly, either manually or by a script, can cause serious problems that may require you to restore your Active Directory. Citrix cannot guarantee that problems resulting from the incorrect modification of the Active Directory can be solved. Modifications of Active Directory, either manually or by script, are undertaken at your own risk.
https://www.citrix.com/blogs/2012/06/27/migrating-upm-profiles-%E2%80%93-tips-and-tricks/
Exchange 2013 is stable. Like Exchange 2010, some of the CUs are better than others, and quality control has been a struggle for the Exchange team. But overall 2013 is functional and I'd usually recommend it over 2010 for new deployments.

"Exchange 2013 doesn't require the HUB server role to be installed separately on a new server."
Although there is no longer a single Hub Transport role, there is in fact a Hub Transport service on both the CAS and Mailbox servers: the CAS runs the Front End Transport service, and the Mailbox server role runs the Back End Transport service.

"Exchange 2013 CAS server roles do automatic load balancing."
CAS does not load balance itself. You still require a load balancing solution, i.e. a hardware load balancer, or WNLB (not recommended for production deployments).

"You can use Layer 4 with multiple namespaces to get the same effect as Layer 7."
This is what you had to do in Exchange 2010 if you wanted to use Layer 4 load balancing. It can still be used, but it requires multiple namespaces, and Exchange 2013 has simplified the namespaces to two: autodiscover.domain.com and mail.domain.com.
https://www.experts-exchange.com/questions/28615319/Emal-design-Verification.html
import openerp error [Closed]

The question has been closed for reason: off-topic or not relevant.

Hello, I am new in OpenERP. I installed OpenERP and PostgreSQL, installed Eclipse, and downloaded the OpenERP source. I wanted to test changes at the source-code level, but I have a problem: "import openerp" fails when I debug. Can someone help me? Regards.

You need to install OpenERP as a Python module, and probably you did not. You can install OpenERP as a Python module with the commands below:

cd youropenerpserverdir
sudo python setup.py install

You also need to add your addons dir to /etc/openerp-server.conf

I ran the sudo command:
'sudo' is not recognized as an internal or external command, operable program or batch file

You should read some Linux tutorials.

Hello, I executed the Windows equivalent of 'sudo' ('runas'), but still the same error:

Traceback (most recent call last):
File "C:\Users\hp\Downloads\openerp-6.1-20130206-002341\openerp-server", line 42, in <module>
import openERP
ImportError: No module named openERP

You should describe in more detail what you did, so that it's useful to others, and others that tried the same can help you.
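Two things stand out in the traceback above. First, the import is spelled `openERP` while the package directory is `openerp`, and Python module names are case-sensitive. Second, on Windows, instead of installing the package, you can point Python at the server checkout directly. A sketch (the checkout directory is taken from the traceback; your layout may differ):

```python
# Sketch: run OpenERP from source by putting the server checkout on
# sys.path instead of installing it system-wide. The directory below
# comes from the traceback in the thread; adjust it to your checkout
# (it must be the directory that contains the openerp/ package).
import sys

OPENERP_SRC = r"C:\Users\hp\Downloads\openerp-6.1-20130206-002341"
sys.path.insert(0, OPENERP_SRC)

# Note the lower-case name: Python imports are case-sensitive, so
# 'import openERP' fails even where 'import openerp' would succeed.
try:
    import openerp  # succeeds once OPENERP_SRC points at a real checkout
except ImportError as exc:
    print("openerp not importable yet:", exc)
```

In Eclipse/PyDev the same effect is usually achieved by adding the checkout directory to the project's PYTHONPATH in the interpreter or project settings.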
https://www.odoo.com/forum/help-1/question/import-openerp-error-2847
This section describes how to build and run LAMMPS, for both new and experienced users.

2.1 What's in the LAMMPS distribution

When you download a LAMMPS tarball you will need to unzip and untar the downloaded file with the following commands, after placing the tarball in an appropriate directory.

gunzip lammps*.tar.gz
tar xvf lammps*.tar

This will create a LAMMPS directory containing two files and several sub-directories.

Note that the download page also has links to download Windows executables and installers, as well as pre-built executables for a few specific Linux distributions. It also has instructions for how to download/install LAMMPS for Macs (via Homebrew), and to download and update LAMMPS from SVN and Git repositories, which give you the same files that are in the download tarball.

The Windows and Linux executables for serial or parallel only include certain packages and bug-fixes/upgrades up to a certain date, as stated on the download page. If you want an executable with non-included packages, or one that is more current, then you'll need to build LAMMPS yourself, as discussed in the next section. Skip to the Running LAMMPS sections for info on how to launch a LAMMPS Windows executable on a Windows box.

This section has the following sub-sections:

If you want to avoid building LAMMPS yourself, read the preceding section about options available for downloading and installing executables. Details are discussed on the download page.

Building LAMMPS can be simple or not-so-simple. If all you need are the default packages installed in LAMMPS, and MPI is already installed on your machine, or you just want to run LAMMPS in serial, then you can typically use the Makefile.mpi or Makefile.serial files in src/MAKE by typing one of these lines (from the src dir):

make mpi
make serial

Note that on a facility supercomputer, there are often "modules" loaded in your environment that provide the compilers and MPI you should use.
In this case, the "mpicxx" compile/link command in Makefile.mpi should just work by accessing those modules. If one of the other Makefile.machine files in the src/MAKE sub-directories is a better match to your system (type "make" to see a list), you can use it as-is by typing (for example):

make stampede

If any of these builds (with an existing Makefile.machine) works on your system, then you're done!

If you want to do anything beyond this, building LAMMPS is more complicated. You may need to find where auxiliary libraries exist on your machine, or install them if they don't. You may need to build additional libraries that are part of the LAMMPS package before building LAMMPS. You may need to edit a Makefile.machine file to make it compatible with your system.

Note that there is a Make.py tool in the src directory that automates several of these steps, but you still have to know what you are doing. Section 2.4 below describes the tool. It is a convenient way to work with installing/un-installing various packages, the Makefile.machine changes required by some packages, and the auxiliary libraries some of them use.

Please read the following sections carefully. If you are not comfortable with makefiles, or building codes on a Unix platform, or running an MPI job on your machine, please find a local expert to help you. Many compilation, linking, and run problems that users have are system-specific; if you cannot resolve them locally, post the issue to the LAMMPS mail list.

If you succeed in building LAMMPS on a new kind of machine for which there isn't a similar machine Makefile included in the src/MAKE/MACHINES directory, then send it to the developers and we can include it in the LAMMPS distribution.

Step 0

The src directory contains the C++ source and header files for LAMMPS. It also contains a top-level Makefile and a MAKE sub-directory with low-level Makefile.* files for many systems and machines.
See the src/MAKE/README file for a quick overview of what files are available and what sub-directories they are in. The src/MAKE dir has a few files that should work as-is on many platforms. The src/MAKE/OPTIONS dir has more that invoke additional compiler, MPI, and other setting options commonly used by LAMMPS, to illustrate their syntax. The src/MAKE/MACHINES dir has many more that have been tweaked or optimized for specific machines. These files are all good starting points if you find you need to change them for your machine. Put any file you edit into the src/MAKE/MINE directory and it will never be touched by any LAMMPS updates.

From within the src directory, type "make" or "gmake". You should see a list of available choices from src/MAKE and all of its sub-directories. If one of those has the options you want, or is the machine you want, you can type a command like:

make mpi
or
make serial_icc
or
gmake mac

Note that the corresponding Makefile.machine can exist in src/MAKE or any of its sub-directories. If a file with the same name appears in multiple places (not a good idea), the order they are used is as follows: src/MAKE/MINE, src/MAKE, src/MAKE/OPTIONS, src/MAKE/MACHINES. This gives preference to a file you have created/edited and put in src/MAKE/MINE.

Note that on a multi-processor or multi-core platform you can launch a parallel make, by using the "-j" switch with the make command, which will build LAMMPS more quickly.

If you get no errors and an executable like lmp_mpi or lmp_g++_serial or lmp_mac is produced, then you're done; it's your lucky day.

Note that by default only a few of the LAMMPS optional packages are installed. To build LAMMPS with optional packages, see this section below.

Step 1

If Step 0 did not work, you will need to create a low-level Makefile for your machine, like Makefile.foo. You should make a copy of an existing Makefile.* in src/MAKE or one of its sub-directories as a starting point.
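The precedence among same-named Makefile.machine files described in Step 0 is a first-hit search over the four directories. A sketch of the logic (Python for illustration; the directory listings are hypothetical examples):

```python
# Sketch of how "make foo" picks Makefile.foo when the same name exists
# in several src/MAKE sub-directories: the first directory in this
# search order that contains the file wins.
SEARCH_ORDER = ["src/MAKE/MINE", "src/MAKE",
                "src/MAKE/OPTIONS", "src/MAKE/MACHINES"]

def pick_makefile(machine, listings):
    """listings maps a directory to its set of Makefile.* names."""
    target = f"Makefile.{machine}"
    for d in SEARCH_ORDER:
        if target in listings.get(d, set()):
            return f"{d}/{target}"
    return None

# Hypothetical listings: a user-edited copy of Makefile.mpi in MINE
# shadows the stock one in src/MAKE.
listings = {
    "src/MAKE":          {"Makefile.mpi", "Makefile.serial"},
    "src/MAKE/MINE":     {"Makefile.mpi"},
    "src/MAKE/MACHINES": {"Makefile.stampede"},
}

print(pick_makefile("mpi", listings))       # -> src/MAKE/MINE/Makefile.mpi
print(pick_makefile("stampede", listings))  # -> src/MAKE/MACHINES/Makefile.stampede
```

This is why the manual recommends keeping your edited copies in src/MAKE/MINE: they shadow the stock files without modifying them.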
The only portions of the file you need to edit are the first line, the "compiler/linker settings" section, and the "LAMMPS-specific settings" section. When it works, put the edited file in src/MAKE/MINE and it will not be altered by any future LAMMPS updates.

Step 2

Change the first line of Makefile.foo to list the word "foo" after the "#", and whatever other options it will set. This is the line you will see if you just type "make".

Step 3

The "compiler/linker settings" section lists compiler and linker settings for your C++ compiler, including optimization flags. You can use g++, the open-source GNU compiler, which is available on all Unix systems. You can also use mpicxx, which will typically be available if MPI is installed on your system, though you should check which actual compiler it wraps. Vendor compilers often produce faster code. On boxes with Intel CPUs, we suggest using the Intel icc compiler, which can be downloaded from Intel's compiler site.

If building a C++ code on your machine requires additional libraries, then you should list them as part of the LIB variable. You should not need to do this if you use mpicxx.

The DEPFLAGS setting is what triggers the C++ compiler to create a dependency list for a source file. This speeds re-compilation when source (*.cpp) or header (*.h) files are edited. Some compilers do not support dependency file creation, or may use a different switch than -D. GNU g++ and Intel icc work with -D. If your compiler can't create dependency files, then you'll need to create a Makefile.foo patterned after Makefile.storm, which uses different rules that do not involve dependency files.

Note that when you build LAMMPS for the first time on a new platform, a long list of *.d files will be printed out rapidly. This is not an error; it is the Makefile doing its normal creation of dependencies.

Step 4

The "system-specific settings" section has several parts.
Note that if you change any -D setting in this section, you should do a full re-compile, after typing "make clean" (which will describe different clean options).

The LMP_INC variable is used to include options that turn on ifdefs within the LAMMPS code. The options that are currently recognized are:

The read_data and dump commands will read/write gzipped files if you compile with -DLAMMPS_GZIP. It requires that your machine supports the "popen" function in the standard runtime library and that a gzip executable can be found by LAMMPS during a run.

If you use -DLAMMPS_JPEG, the dump image command will be able to write out JPEG image files. For JPEG files, you must also link LAMMPS with a JPEG library, as described below. If you use -DLAMMPS_PNG, the dump image command will be able to write out PNG image files. For PNG files, you must also link LAMMPS with a PNG library, as described below. If neither of those two defines is used, LAMMPS will only be able to write out uncompressed PPM image files.

If you use -DLAMMPS_FFMPEG, the dump movie command will be available to support on-the-fly generation of rendered movies, without the need to store intermediate image files. It requires that your machine supports the "popen" function in the standard runtime library and that an FFmpeg executable can be found by LAMMPS during the run.

Using -DLAMMPS_MEMALIGN=

If you use -DLAMMPS_XDR, the build will include XDR compatibility files for doing particle dumps in XTC format. This is only necessary if your platform does not have its own XDR files available. See the Restrictions section of the dump command for details.

Use at most one of the -DLAMMPS_SMALLBIG, -DLAMMPS_BIGBIG, and -DLAMMPS_SMALLSMALL settings. The default is -DLAMMPS_SMALLBIG. These settings refer to use of 4-byte (small) vs 8-byte (big) integers within LAMMPS, as specified in src/lmptype.h.
The only reason to use the BIGBIG setting is to enable simulation of huge molecular systems (which store bond topology info) with more than 2 billion atoms, or to track the image flags of moving atoms that wrap around a periodic box more than 512 times. Normally, the only reason to use SMALLSMALL is if your machine does not support 64-bit integers, though you can use the SMALLSMALL setting if you are running in serial, or on a desktop machine or small cluster where you will never run large systems or for long times (more than 2 billion atoms, more than 2 billion timesteps). See the Additional build tips section below for more details on these settings.

Note that two packages, USER-ATC and USER-CUDA, are not currently compatible with -DLAMMPS_BIGBIG. Also the GPU package requires the lib/gpu library to be compiled with the same setting, or the link will fail.

The -DLAMMPS_LONGLONG_TO_LONG setting may be needed if your system or MPI version does not recognize "long long" data types. In this case a "long" data type is likely already 64-bits, in which case this setting will convert to that data type.

Using one of the -DPACK_ARRAY, -DPACK_POINTER, and -DPACK_MEMCPY options can make for faster parallel FFTs (in the PPPM solver) on some platforms. The -DPACK_ARRAY setting is the default. See the kspace_style command for info about PPPM. See Step 6 below for info about building LAMMPS with an FFT library.

Step 5

The 3 MPI variables are used to specify an MPI library to build LAMMPS with. Note that you do not need to set these if you use the MPI compiler mpicxx for your CC and LINK settings in the section above. The MPI wrapper knows where to find the needed files.

If you want LAMMPS to run in parallel, you must have an MPI library installed on your platform. If MPI is installed on your system in the usual place (under /usr/local), you also may not need to specify these 3 variables, assuming /usr/local is in your path.
On some large parallel machines which use "modules" for their compile/link environments, you may simply need to include the correct module in your build environment, before building LAMMPS. Or the parallel machine may have a vendor-provided MPI which the compiler has no trouble finding. Failing this, these 3 variables can be used to specify where the mpi.h file (MPI_INC) and the MPI library file (MPI_PATH) are found, and the name of the library file (MPI_LIB).

If you are installing MPI yourself, we recommend Argonne's MPICH2 or OpenMPI. MPICH can be downloaded from the Argonne MPI site. OpenMPI can be downloaded from the OpenMPI site. Other MPI packages should also work. If you are running on a big parallel platform, your system people or the vendor should have already installed a version of MPI, which is likely to be faster than a self-installed one, and using it can avoid problems that can arise when linking LAMMPS to the MPI library.

If you just want to run LAMMPS on a single processor, you can use the dummy MPI library provided in src/STUBS, since you don't need a true MPI library installed on your system. See src/MAKE/Makefile.serial for how to specify the 3 MPI variables in this case. You will also need to build the STUBS library for your platform before making LAMMPS itself. Note that if you are building with src/MAKE/Makefile.serial, e.g. by typing "make serial", then the STUBS library is built for you.

To build the STUBS library from the src directory, type "make stubs"; or from the src/STUBS dir, type "make". This should create a libmpi_stubs.a file suitable for linking to LAMMPS. If the build fails, you will need to edit the STUBS/Makefile for your platform. The file STUBS/mpi.c provides a CPU timer function.

Step 6

The 3 FFT variables allow you to specify an FFT library which LAMMPS uses (for performing 1d FFTs) when running the particle-particle particle-mesh (PPPM) option for long-range Coulombics via the kspace_style command.
LAMMPS supports various open-source or vendor-supplied FFT libraries for this purpose. If you leave these 3 variables blank, LAMMPS will use the open-source KISS FFT library, which is included in the LAMMPS distribution. This library is portable to all platforms, and for typical LAMMPS simulations is almost as fast as FFTW or vendor-optimized libraries. If you are not including the KSPACE package in your build, you can also leave the 3 variables blank.

Otherwise, select which kind of FFTs to use as part of the FFT_INC setting by a switch of the form -DFFT_XXX. Recommended values for XXX are: MKL, SCSL, FFTW2, and FFTW3. Legacy options are: INTEL, SGI, ACML, and T3E. For backward compatibility, using -DFFT_FFTW will use the FFTW2 library. Using -DFFT_NONE will use the KISS library described above.

You may also need to set the FFT_INC, FFT_PATH, and FFT_LIB variables, so the compiler and linker can find the needed FFT header and library files. Note that on some large parallel machines which use "modules" for their compile/link environments, you may simply need to include the correct module in your build environment. Or the parallel machine may have a vendor-provided FFT library which the compiler has no trouble finding.

FFTW is a fast, portable library that should also work on any platform. You can download it from the FFTW web site. Both the legacy version 2.1.X and the newer 3.X versions are supported, as -DFFT_FFTW2 or -DFFT_FFTW3. Building FFTW for your box should be as simple as ./configure; make.

Note that on some platforms FFTW2 has been pre-installed, and uses renamed files indicating the precision it was compiled with, e.g. sfftw.h or dfftw.h instead of fftw.h. In this case, you can specify an additional define variable for FFT_INC called -DFFTW_SIZE, which will select the correct include file. In this case, for FFT_LIB you must also manually specify the correct library, namely -lsfftw or -ldfftw.
The FFT_INC variable also allows for a -DFFT_SINGLE setting that will use single-precision FFTs with PPPM, which can speed up long-range calculations, particularly in parallel or on GPUs. Fourier transform and related PPPM operations are somewhat insensitive to floating point truncation errors and thus do not always need to be performed in double precision. Using the -DFFT_SINGLE setting trades off a little accuracy for reduced memory use and parallel communication costs for transposing 3d FFT data. Note that single-precision FFTs have only been tested with the FFTW3, FFTW2, MKL, and KISS FFT options.

Step 7

The 3 JPG variables allow you to specify a JPEG and/or PNG library which LAMMPS uses when writing out JPEG or PNG files via the dump image command. These can be left blank if you do not use the -DLAMMPS_JPEG or -DLAMMPS_PNG switches discussed above in Step 4, since in that case JPEG/PNG output will be disabled.

A standard JPEG library usually goes by the name libjpeg.a or libjpeg.so and has an associated header file jpeglib.h. A standard PNG library usually goes by the name libpng.a or libpng.so and has an associated header file png.h. Whichever JPEG or PNG library you have on your platform, you'll need to set the appropriate JPG_INC, JPG_PATH, and JPG_LIB variables, so that the compiler and linker can find it. As before, if these header and library files are in the usual place on your machine, you may not need to set these variables.

Step 8

Note that by default only a few of the LAMMPS optional packages are installed. To build LAMMPS with optional packages, see this section below, before proceeding to Step 9.

Step 9

That's it. Once you have a correct Makefile.foo, and you have pre-built any other needed libraries (e.g.
MPI, FFT, etc), all you need to do from the src directory is type something like this:

make foo
or
gmake foo

You should get the executable lmp_foo when the build is complete.

IMPORTANT NOTE: If an error occurs when building LAMMPS, the compiler or linker will state very explicitly what the problem is. The error message should give you a hint as to which of the steps above has failed, and what you need to do in order to fix it. Building a code with a Makefile is a very logical process. The compiler and linker need to find the appropriate files, and those files need to be compatible with LAMMPS source files. When a make fails, there is usually a very simple reason, which you or a local expert will need to fix.

Here are two non-obvious errors that can occur:

(1) If the make command breaks immediately with errors that indicate it can't find files with a "*" in their names, this can be because your machine's native make doesn't support wildcard expansion in a makefile. Try gmake instead of make. If that doesn't work, try using a -f switch with your make command to use a pre-generated list of source files. Note that you should include/exclude any desired optional packages before using the "make makelist" command.

(2) If you get an error that says something like 'identifier "atoll" is undefined', then your machine does not support "long long" integers. Try using the -DLAMMPS_LONGLONG_TO_LONG setting described above in Step 4.

(1) Building LAMMPS for multiple platforms. You can make LAMMPS for multiple platforms from the same src directory. Each target creates its own object sub-directory called Obj_target where it stores the system-specific *.o files.

(2) Cleaning up. Typing "make clean-all" or "make clean-machine" will delete *.o object files created when LAMMPS is built, for either all builds or for a particular machine.

(3) Integer size settings. The default is -DLAMMPS_SMALLBIG, which allows for systems with up to 2^63 atoms and 2^63 timesteps (about 9e18).
The atom limit is for atomic systems which do not store bond topology info and thus do not require atom IDs. If you use atom IDs for atomic systems (which is the default), or if you use a molecular model, which stores bond topology info and thus requires atom IDs, the limit is 2^31 atoms (about 2 billion). This is because the IDs are stored in 32-bit integers.

Likewise, with this setting, the 3 image flags for each atom (see the dump doc page for a discussion) are stored in a 32-bit integer, which means the atoms can only wrap around a periodic box (in each dimension) at most 512 times. If atoms move through the periodic box more than this many times, the image flags will "roll over", e.g. from 511 to -512, which can cause diagnostics like the mean-squared displacement, as calculated by the compute msd command, to be faulty.

To allow for larger atomic systems with atom IDs, or larger molecular systems, or larger image flags, compile with -DLAMMPS_BIGBIG. This stores atom IDs and image flags in 64-bit integers. This enables atomic or molecular systems with atom IDs of up to 2^63 atoms (about 9e18). And image flags will not "roll over" until they reach 2^20 = 1048576.

If your system does not support 8-byte integers, you will need to compile with the -DLAMMPS_SMALLSMALL setting. This will restrict the total number of atoms (for atomic or molecular systems) and timesteps to 2^31 (about 2 billion). Image flags will roll over at 2^9 = 512.

Note that in src/lmptype.h there are definitions of all these data types as well as the MPI data types associated with them. The MPI types need to be consistent with the associated C data types, or else LAMMPS will generate a run-time error. As far as we know, the settings defined in src/lmptype.h are portable and work on every current system.

In all cases, the size of problem that can be run on a per-processor basis is limited by 4-byte integer storage to 2^31 atoms per processor (about 2 billion).
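The image-flag "roll over" described above is ordinary two's-complement wraparound of a small signed field. A sketch (Python) of a signed 10-bit field, which has the -512..511 range quoted for the default -DLAMMPS_SMALLBIG setting:

```python
# Sketch: an image flag stored in a signed 10-bit field (range
# -512..511) wraps around when an atom keeps crossing the periodic
# box in the same direction.
def wrap_signed(value, bits=10):
    mask = (1 << bits) - 1
    value &= mask  # keep only the low 'bits' bits
    # Reinterpret the bit pattern as a signed two's-complement value.
    if value >= (1 << (bits - 1)):
        value -= 1 << bits
    return value

print(wrap_signed(511))  # -> 511  (still in range)
print(wrap_signed(512))  # -> -512 (rolled over, as described above)
```

The same arithmetic with bits=21-ish fields underlies the BIGBIG limit of 2^20 wraps; the exact bit layout lives in src/lmptype.h.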
This should not normally be a limitation, since such a problem would have a huge per-processor memory footprint due to neighbor lists and would run very slowly in terms of CPU secs/timestep.

OS X is BSD Unix, so it should just work. See the src/MAKE/MACHINES/Makefile.mac and Makefile.mac_mpi files.

The LAMMPS download page has an option to download both a serial and a parallel pre-built Windows executable. See the Running LAMMPS section for instructions on running these executables on a Windows box. The pre-built executables hosted on the LAMMPS download page are built with a subset of the available packages; see the download page for the list. These are single executable files. No examples or documentation are included. You will need to download the full source code package to obtain those.

As an alternative, you can download "daily builds" (and some older versions) of the installer packages from rpm.lammps.org/windows.html. These executables are built with most optional packages, and the download includes documentation, some tools and most examples.

If you want a Windows version with specific packages included and excluded, you can build it yourself. One way to do this is to install and use cygwin to build LAMMPS with a standard unix-style make program, just as you would on a Linux box; see src/MAKE/MACHINES/Makefile.cygwin. The other way is to use Visual Studio and project files. See the src/WINDOWS directory and its README.txt file for instructions on both a basic build and a customized build with packages you select.

This section has the following sub-sections:

Note that the following Section 2.4 describes the Make.py tool, which can be used to install/un-install packages and build the auxiliary libraries which some of them use. It can also auto-edit a Makefile.machine to add settings needed by some packages.

The source code for LAMMPS is structured as a set of core files which are always included, plus optional packages.
Packages are groups of files that enable a specific set of features. For example, force fields for molecular systems or granular systems are in packages. You can see the list of all packages by typing "make package" from within the src directory of the LAMMPS distribution. This also lists various make commands that can be used to manipulate packages. If you use a command in a LAMMPS input script that is specific to a particular package, you must have built LAMMPS with that package, else you will get an error that the style is invalid or the command is unknown. Every command's doc page specifies if it is part of a package. You can also type lmp_machine -h to run your executable with the optional -h command-line switch for "help", which will list the styles and commands known to your executable. There are two kinds of packages in LAMMPS, standard and user packages. More information about the contents of standard and user packages is given in Section_packages of the manual. The difference between the two is that standard packages are supported by the LAMMPS developers, while user packages have been contributed by users and always begin with the user prefix. If they are a single command (single file), they are typically in the user-misc package. Otherwise, they are a set of files grouped together which add specific functionality to the code. User packages don't necessarily meet the requirements of the standard packages. If you have problems using a feature provided in a user package, you will likely need to contact the contributor directly to get help. Information on how to submit additions you make to LAMMPS as a user-contributed package is given in this section of the documentation. Some packages (both standard and user) require additional libraries. See more details below. To use or not use a package you must include or exclude it before building LAMMPS.
From the src directory, this is typically as simple as:

make yes-colloid
make g++

or

make no-manybody
make g++

IMPORTANT NOTE: You should NOT include/exclude packages and build LAMMPS in a single make command by using multiple targets, e.g. make yes-colloid g++. This is because the make procedure creates a list of source files that will be out-of-date for the build if the package configuration changes during the same command. Some packages have individual files that depend on other packages being included. LAMMPS checks for this and does the right thing. I.e. individual files are only included if their dependencies are already included. Likewise, if a package is excluded, other files dependent on that package are also excluded. If you will never run simulations that use the features in a particular package, there is no reason to include it in your build. For some packages, this will keep you from having to build auxiliary libraries (see below), and will also produce a smaller executable which may run a bit faster. When you download a LAMMPS tarball, these packages are pre-installed in the src directory: KSPACE, MANYBODY, MOLECULE. When you download LAMMPS source files from the SVN or Git repositories, no packages are pre-installed. Packages are included or excluded by typing "make yes-name" or "make no-name", where "name" is the name of the package in lower-case, e.g. name = kspace for the KSPACE package or name = user-atc for the USER-ATC package. You can also type "make yes-standard", "make no-standard", "make yes-std", "make no-std", "make yes-user", "make no-user", "make yes-all" or "make no-all" to include/exclude various sets of packages. Type "make package" to see all of the package-related make options. IMPORTANT NOTE: Inclusion/exclusion of a package works by simply moving files back and forth between the main src directory and sub-directories with the package name (e.g.
src/KSPACE, src/USER-ATC), so that the files are seen or not seen when LAMMPS is built. After you have included or excluded a package, you must re-build LAMMPS. Additional package-related make options exist to manage the package files. Typing "make package-update" or "make pu" will overwrite src files with files from the package sub-directories if the package has been included. It should be used after a patch is installed, since patches only update the files in the package sub-directory, but not the src files. Typing "make package-overwrite" will overwrite files in the package sub-directories with src files. Typing "make package-status" or "make ps" will show which packages are currently included. Of those that are included, it will list files that are different in the src directory and package sub-directory. Typing "make package-diff" lists all differences between these files. Again, type "make package" to see all of the package-related make options. A few of the standard and user packages require additional auxiliary libraries. They must be compiled first, before LAMMPS is built. If you get a LAMMPS build error about a missing library, this is likely the reason. See the Section_packages doc page for a list of packages that have auxiliary libraries. The lib directory in the distribution has sub-directories with package names that correspond to the needed auxiliary libs, e.g. lib/reax. Each sub-directory has a README file that gives more details. Code for most of the auxiliary libraries is included in that directory. Examples are the USER-ATC and MEAM packages. A few of the sub-directories do not include code, but do include instructions and sometimes scripts that automate the process of downloading the auxiliary library and installing it so LAMMPS can link to it. Examples are the KIM and VORONOI and USER-MOLFILE packages. For libraries with provided code, the sub-directory README file (e.g. lib/reax/README) has instructions on how to build that library.
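The move-files-back-and-forth mechanism behind "make yes-name" and "make no-name" can be modeled with a toy sketch (purely illustrative; the file and package names below are made up, and the real make logic of course operates on the filesystem):

```python
# Toy model of package include/exclude: including a package moves its
# files into src (where the build sees them); excluding moves them back
# into the package sub-directory (where the build ignores them).

core_files = {"atom.cpp", "pair.cpp"}               # always in src
on_shelf = {"KSPACE": {"ewald.cpp", "pppm.cpp"}}    # excluded packages
installed = {}                                      # included packages

def make_yes(name):
    """Like "make yes-name": move the package's files into src."""
    installed[name] = on_shelf.pop(name)

def make_no(name):
    """Like "make no-name": move the package's files back out."""
    on_shelf[name] = installed.pop(name)

def src_files():
    """What the build would compile, i.e. what is visible in src."""
    files = set(core_files)
    for pkg_files in installed.values():
        files |= pkg_files
    return files

make_yes("KSPACE")
print("ewald.cpp" in src_files())   # True: package files now visible
make_no("KSPACE")
print("ewald.cpp" in src_files())   # False: re-build would drop them
```

This is also why you must re-build after any include/exclude: the set of source files the Makefile sees has changed.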
Typically this is done by typing something like:

make -f Makefile.g++

If one of the provided Makefiles is not appropriate for your system you will need to edit or add one. Note that all the Makefiles have a setting for EXTRAMAKE at the top that specifies a Makefile.lammps.* file. If the library build is successful, it will produce 2 files in the lib directory:

libpackage.a
Makefile.lammps

The Makefile.lammps file will be a copy of the EXTRAMAKE file setting specified in the library Makefile.* you used. Note that you must insure that the settings in Makefile.lammps are appropriate for your system. If they are not, the LAMMPS build will fail. As explained in the lib/package/README files, the settings in Makefile.lammps are used to specify additional system libraries and their locations so that LAMMPS can build with the auxiliary library. For example, if the MEAM or REAX packages are used, the auxiliary libraries consist of F90 code, built with a Fortran compiler. To link that library with LAMMPS (a C++ code) via whatever C++ compiler LAMMPS is built with, typically requires additional Fortran-to-C libraries be included in the link. Other examples are the BLAS and LAPACK libraries needed to use the USER-ATC or USER-AWPMD packages. For libraries without provided code, the sub-directory README file has information on where to download the library and how to build it, e.g. lib/voronoi/README. The README files also describe how you must either (a) create soft links, via the "ln" command, in those directories to point to where you built or installed the packages, or (b) check or edit the Makefile.lammps file in the same directory to provide that information. Some of the sub-directories, e.g. lib/voronoi, also have an install.py script which can be used to automate the process of downloading/building/installing the auxiliary library, and setting the needed soft links. Type "python install.py" for further instructions.
As with the sub-directories containing library code, if the soft links or settings in the lib/package/Makefile.lammps files are not correct, the LAMMPS build will typically fail. A few packages require specific settings in Makefile.machine, to either build or use the package effectively. These are the USER-INTEL, KOKKOS, USER-OMP, and OPT packages. The details of what flags to add or what variables to define are given on the doc pages that describe each of these accelerator packages in detail: Here is a brief summary of what Makefile.machine changes are needed. Note that the Make.py tool, described in Section 2.4 below, can automatically add the needed info to an existing machine Makefile, using simple command-line arguments. In src/MAKE/OPTIONS see the following Makefiles for examples of the changes described below: For the USER-INTEL package, you have 2 choices when building. You can build with CPU or Phi support. The latter uses Xeon Phi chips in "offload" mode. Each of these modes requires additional settings in your Makefile.machine for CCFLAGS and LINKFLAGS. For CPU mode (if using an Intel compiler): For Phi mode add the following in addition to the CPU mode flags: And also add this to CCFLAGS: -offload-option,mic,compiler,"-fp-model fast=2 -mGLOB_default_function_attrs=\"gather_scatter_loop_unroll=4\"" For the KOKKOS package, you have 3 choices when building. You can build with OMP or Cuda or Phi support. Phi support uses Xeon Phi chips in "native" mode. This can be done by setting the following variables in your Makefile.machine: These can also be set as additional arguments to the make command, e.g. make g++ OMP=yes MIC=yes Building the KOKKOS package with CUDA support requires a Makefile.machine that uses the NVIDIA "nvcc" compiler, as well as an "arch" setting appropriate to the GPU hardware and NVIDIA software you have on your machine. See src/MAKE/OPTIONS/Makefile.kokkos_cuda for an example of such a machine Makefile.
For the USER-OMP package, your Makefile.machine needs additional settings for CCFLAGS and LINKFLAGS. For the OPT package, your Makefile.machine needs an additional setting for CCFLAGS. The src directory includes a Make.py script, written in Python, which can be used to automate various steps of the build process. It is particularly useful for working with the accelerator packages, as well as other packages which require auxiliary libraries to be built. The goal of the Make.py tool is to allow any complex multi-step LAMMPS build to be performed as a single Make.py command. And you can archive the commands, so they can be re-invoked later via the -r (redo) switch. If you find some LAMMPS build procedure that can't be done in a single Make.py command, let the developers know, and we'll see if we can augment the tool. You can run Make.py from the src directory by typing either:

Make.py -h
python Make.py -h

which will give you help info about the tool. For the former to work, you may need to edit the first line of Make.py to point to your local Python. And you may need to insure the script is executable:

chmod +x Make.py

Here are examples of build tasks you can perform with Make.py: The bench and examples directories give Make.py commands that can be used to build LAMMPS with the various packages and options needed to run all the benchmark and example input scripts. See these files for more details: All of the Make.py options and syntax help can be accessed by using the "-h" switch. E.g. typing "Make.py -h" gives

Syntax: Make.py switch args ...
switches can be listed in any order
help switch:
  -h  prints help and syntax for all other specified switches
switch for actions:
  -a  lib-all, lib-dir, clean, file, exe or machine
      list one or more actions, in any order
      machine is a Makefile.machine suffix, must be last if used
one-letter switches:
  -d (dir), -j (jmake), -m (makefile), -o (output), -p (packages), -r (redo), -s (settings), -v (verbose)
switches for libs:
  -atc, -awpmd, -colvars, -cuda, -gpu, -meam, -poems, -qmmm, -reax
switches for build and makefile options:
  -intel, -kokkos, -cc, -mpi, -fft, -jpg, -png

Using the "-h" switch with other switches and actions gives additional info on all the other specified switches or actions. The "-h" can be anywhere in the command-line and the other switches do not need their arguments. E.g. typing "Make.py -h -d -atc -intel" will print:

-d dir
  dir = LAMMPS home dir
  if -d not specified, working dir must be lammps/src
-atc make=suffix lammps=suffix2
  all args are optional and can be in any order
  make = use Makefile.suffix (def = g++)
  lammps = use Makefile.lammps.suffix2 (def = EXTRAMAKE in makefile)
-intel mode
  mode = cpu or phi (def = cpu)
  build Intel package for CPU or Xeon Phi

Note that Make.py never overwrites an existing Makefile.machine. Instead, it creates src/MAKE/MINE/Makefile.auto, which you can save or rename if desired. Likewise it creates an executable named src/lmp_auto, which you can rename using the -o switch if desired. The most recently executed Make.py command is saved in src/Make.py.last. You can use the "-r" switch (for redo) to re-invoke the last command, or you can save a sequence of one or more Make.py commands to a file and invoke the file of commands using "-r". You can also label the commands in the file and invoke one or more of them by name. A typical use of Make.py is to start with a valid Makefile.machine for your system, that works for a vanilla LAMMPS build, i.e. when optional packages are not installed.
You can then use Make.py to add various settings (FFT, JPG, PNG) to the Makefile.machine as well as change its compiler and MPI options. You can also add additional packages to the build, as well as build the needed supporting libraries. You can also use Make.py to create a new Makefile.machine from scratch, using the "-m none" switch, if you also specify what compiler and MPI options to use, via the "-cc" and "-mpi" switches. LAMMPS can be built as either a static or shared library, which can then be called from another application or a scripting language. See this section for more info on coupling LAMMPS to other codes. See this section for more info on wrapping and running LAMMPS from Python. To build LAMMPS as a static library (*.a file on Linux), type

make makelib
make -f Makefile.lib foo

where foo is the machine name. This kind of library is typically used to statically link a driver application to LAMMPS, so that you can insure all dependencies are satisfied at compile time. Note that inclusion or exclusion of any desired optional packages should be done before typing "make makelib". The first "make" command will create a current Makefile.lib with all the file names in your src dir. The second "make" command will use it to build LAMMPS as a static library, using the ARCHIVE and ARFLAGS settings in src/MAKE/Makefile.foo. The build will create the file liblammps_foo.a which another application can link to. To build LAMMPS as a shared library (*.so file on Linux), which can be dynamically loaded, e.g. from Python, type

make makeshlib
make -f Makefile.shlib foo

where foo is the machine name. This kind of library is required when wrapping LAMMPS with Python; see Section_python for details. Again, note that inclusion or exclusion of any desired optional packages should be done before typing "make makeshlib". The first "make" command will create a current Makefile.shlib with all the file names in your src dir.
The second "make" command will use it to build LAMMPS as a shared library, using the SHFLAGS and SHLIBFLAGS settings in src/MAKE/Makefile.foo. The build will create the file liblammps_foo.so which another application can link to dyamically. It will also create a soft link liblammps.so, which the Python wrapper uses by default. Note that for a shared library to be usable by a calling program, all the auxiliary libraries it depends on must also exist as shared libraries. This will be the case for libraries included with LAMMPS, such as the dummy MPI library in src/STUBS or any package libraries in lib/packges, since they are always built as shared libraries with the -fPIC switch. However, if a library like MPI or FFTW does not exist as a shared library, the second make command will generate an error. This means you will need to install a shared library version of the package. The build instructions for the library should tell you how to do this. As an example, here is how to build and install the MPICH library, a popular open-source version of MPI, distributed by Argonne National Labs, as a shared library in the default /usr/local/lib location: ./configure --enable-shared make make install You may need to use "sudo make install" in place of the last line if you do not have write privileges for /usr/local/lib. The end result should be the file /usr/local/lib/libmpich.so. The operating system finds shared libraries to load at run-time using the environment variable LD_LIBRARY_PATH. So you may wish to copy the file src/liblammps.so or src/liblammps_g++.so (for example) to a place the system can find it by default, such as /usr/local/lib, or you may wish to add the LAMMPS src directory to LD_LIBRARY_PATH, so that the current version of the shared library is always available to programs that use it. 
For the csh or tcsh shells, you would add something like this to your ~/.cshrc file:

setenv LD_LIBRARY_PATH $LD_LIBRARY_PATH:/home/sjplimp/lammps/src

Either flavor of library (static or shared) allows one or more LAMMPS objects to be instantiated from the calling program. When used from a C++ program, all of LAMMPS is wrapped in a LAMMPS_NS namespace; you can safely use any of its classes and methods from within the calling code, as needed. When used from a C or Fortran program or a scripting language like Python, the library has a simple function-style interface, provided in src/library.cpp and src/library.h. See the sample codes in examples/COUPLE/simple for examples of C++ and C and Fortran codes that invoke LAMMPS through its library interface. There are other examples as well in the COUPLE directory which are discussed in Section_howto 10 of the manual. See Section_python of the manual for a description of the Python wrapper provided with LAMMPS that operates through the LAMMPS library interface. The files src/library.cpp and library.h define the C-style API for using LAMMPS as a library. See Section_howto 19 of the manual for a description of the interface and how to extend it for your needs. By default, LAMMPS runs by reading commands from standard input. Thus if you run the LAMMPS executable by itself, e.g. lmp_linux, it will simply wait, expecting commands from the keyboard. Typically you should put commands in an input script and use I/O redirection, e.g. lmp_linux < in.file. For parallel environments this should also work. If it does not, use the '-in' command-line switch, e.g. lmp_linux -in in.file. This section describes how input scripts are structured and what commands they contain. You can test LAMMPS on any of the sample inputs provided in the examples or bench directory. Input scripts are named in.* and sample outputs are named log.*.name.P where name is a machine and P is the number of processors it was run on.
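To sketch the dynamic-loading side from Python, assuming the shared-library build described above produced liblammps_foo.so and the liblammps.so soft link (the lammps_lib_name helper below is made up for illustration; the real Python wrapper shipped with LAMMPS does this more carefully via its own module):

```python
import ctypes

def lammps_lib_name(machine=None):
    """Soname the shared-library build produces: liblammps_foo.so for
    machine "foo", or the liblammps.so soft link when no machine suffix
    is given (hypothetical helper)."""
    return f"liblammps_{machine}.so" if machine else "liblammps.so"

# Loading only succeeds if the shared library was built and
# LD_LIBRARY_PATH (or a default system path) points at it.
try:
    lmp = ctypes.CDLL(lammps_lib_name())
except OSError:
    lmp = None   # library not built/installed on this system
```

This is exactly why all of the library's dependencies must themselves be shared: ctypes (via the dynamic loader) resolves them at load time.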
Here is how you might run a standard Lennard-Jones benchmark on a Linux box, using mpirun to launch a parallel job:

cd src
make linux
cp lmp_linux ../bench
cd ../bench
mpirun -np 4 lmp_linux < in.lj

See this page for timings for this and the other benchmarks on various platforms. Note that some of the example scripts require LAMMPS to be built with one or more of its optional packages. On a Windows box, you can skip making LAMMPS and simply download an executable, as described above, though the pre-packaged executables include only certain packages. To run a LAMMPS executable on a Windows machine, first decide whether you want to download the non-MPI (serial) or the MPI (parallel) version of the executable. Download and save the version you have chosen. For the non-MPI version, follow these steps: For the MPI version, which allows you to run LAMMPS under Windows on multiple processors, follow these steps: LAMMPS recognizes several optional command-line switches, which may be used in any order. Either the full word or a one-or-two letter abbreviation can be used: For example, lmp_ibm might be launched as follows:

mpirun -np 16 lmp_ibm -v f tmp.out -l my.log -sc none < in.alloy
mpirun -np 16 lmp_ibm -var f tmp.out -log my.log -screen none < in.alloy

Here are the details on the options: -cuda on/off Explicitly enable or disable CUDA support, as provided by the USER-CUDA package. Even if LAMMPS is built with this package, as described above in Section 2.3, this switch must be set to enable running with the CUDA-enabled styles the package provides. If the switch is not set (the default), LAMMPS will operate as if the USER-CUDA package were not installed; i.e. you can run standard LAMMPS or with the GPU package, for testing or benchmarking purposes. -h or -help Print a brief help summary and a list of options compiled into this executable for each LAMMPS style (atom_style, fix, compute, pair_style, bond_style, etc). This can tell you if the command you want to use was included via the appropriate package at compile time.
LAMMPS will print the info and immediately exit if this switch is used. -in file Specify a file to use as an input script. This is an optional switch when running LAMMPS in one-partition mode. If it is not specified, LAMMPS reads its script from standard input, typically from a script via I/O redirection; e.g. lmp_linux < in.run. I/O redirection should also work in parallel, but if it does not (in the unlikely case that an MPI implementation does not support it), then use the -in flag. Note that this is a required switch when running LAMMPS in multi-partition mode, since multiple processors cannot all read from stdin. -kokkos on/off keyword/value ... Explicitly enable or disable KOKKOS support, as provided by the KOKKOS package. Even if LAMMPS is built with this package, as described above in Section 2.3, this switch must be set to enable running with the KOKKOS-enabled styles the package provides. If the switch is not set (the default), LAMMPS will operate as if the KOKKOS package were not installed; i.e. you can run standard LAMMPS or with the GPU or USER-CUDA or USER-OMP packages, for testing or benchmarking purposes. Additional optional keyword/value pairs can be specified which determine how Kokkos will use the underlying hardware on your platform. These settings apply to each MPI task you launch via the "mpirun" or "mpiexec" command. You may choose to run one or more MPI tasks per physical node. Note that if you are running on a desktop machine, you typically have one physical node. On a cluster or supercomputer there may be dozens or 1000s of physical nodes. Either the full word or an abbreviation can be used for the keywords. Note that the keywords do not use a leading minus sign. I.e. the keyword is "t", not "-t". Also note that each of the keywords has a default setting. Examples of when to use these options and what settings to use on different platforms are given in Section 5.8.
device Nd This option is only relevant if you built LAMMPS with CUDA=yes, you have more than one GPU per node, and if you are running with only one MPI task per node. The Nd setting is the ID of the GPU on the node to run on. By default Nd = 0. If you have multiple GPUs per node, they have consecutive IDs numbered as 0,1,2,etc. This setting allows you to launch multiple independent jobs on the node, each with a single MPI task per node, and assign each job to run on a different GPU. gpus Ng Ns This option is only relevant if you built LAMMPS with CUDA=yes, you have more than one GPU per node, and you are running with multiple MPI tasks per node (up to one per GPU). The Ng setting is how many GPUs you will use. The Ns setting is optional. If set, it is the ID of a GPU to skip when assigning MPI tasks to GPUs. This may be useful if your desktop system reserves one GPU to drive the screen and the rest are intended for computational work like running LAMMPS. By default Ng = 1 and Ns is not set. Depending on which flavor of MPI you are running, LAMMPS will look for one of these 3 environment variables:

SLURM_LOCALID (various MPI variants compiled with SLURM support)
MV2_COMM_WORLD_LOCAL_RANK (Mvapich)
OMPI_COMM_WORLD_LOCAL_RANK (OpenMPI)

which are initialized by the "srun", "mpirun" or "mpiexec" commands. The environment variable setting for each MPI rank is used to assign a unique GPU ID to the MPI task. threads Nt This option assigns Nt threads to each MPI task for performing work when Kokkos is executing in OpenMP or pthreads mode. The default is Nt = 1, which essentially runs in MPI-only mode. If there are Np MPI tasks per physical node, you generally want Np*Nt = the number of physical cores per node, to use your available hardware optimally. This also sets the number of threads used by the host when LAMMPS is compiled with CUDA=yes. numa Nm This option is only relevant when using pthreads with hwloc support.
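The per-rank GPU assignment described for the gpus Ng Ns option can be sketched as follows (an illustration only; the actual Kokkos assignment logic may differ in detail, and the skip-shift rule here is an assumption):

```python
import os

# Sketch: map a node-local MPI rank to one of Ng GPUs, optionally
# skipping GPU ID Ns (e.g. the one driving the display).

def local_rank(env=os.environ):
    """Find the node-local MPI rank from the env vars LAMMPS checks."""
    for var in ("SLURM_LOCALID",
                "MV2_COMM_WORLD_LOCAL_RANK",
                "OMPI_COMM_WORLD_LOCAL_RANK"):
        if var in env:
            return int(env[var])
    return 0   # fall back to rank 0 if launched without MPI

def assign_gpu(ng, ns=None, env=os.environ):
    """Round-robin ranks over Ng GPUs; shift past the skipped ID Ns."""
    gpu = local_rank(env) % ng
    if ns is not None and gpu >= ns:
        gpu += 1          # skip the reserved device
    return gpu
```

For example, with Ng = 2 and Ns = 0, local ranks 0 and 1 would land on GPUs 1 and 2, leaving GPU 0 free for the display.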
In this case Nm defines the number of NUMA regions (typically sockets) on a node which will be utilized by a single MPI rank. By default Nm = 1. If this option is used the total number of worker-threads per MPI rank is threads*numa. Currently it is almost always better to assign at least one MPI rank per NUMA region, and leave numa set to its default value of 1. This is because letting a single process span multiple NUMA regions induces a significant amount of cross NUMA data traffic which is slow. -log file Specify a log file for LAMMPS to write status information to. Option -plog will override the name of the partition log files file.N. -nocite Disable writing the log.cite file which is normally written to list references for specific cite-able features used during a LAMMPS run. See the citation page for more details. -package style args ... Invoke the package command with style and args. The syntax is the same as if the command appeared at the top of the input script. For example "-package gpu 2" or "-pk gpu 2" is the same as package gpu 2 in the input script. The possible styles and args are documented on the package doc page. This switch can be used multiple times, e.g. to set options for the USER-INTEL and USER-OMP packages which can be used together. Along with the "-suffix" command-line switch, this is a convenient mechanism for invoking accelerator packages and their options without having to edit an input script. -partition 8x2 4 5 ... Invoke LAMMPS in multi-partition mode. Running with multiple partitions can be useful for running multi-replica simulations, where each replica runs on one or a few processors. Note that with MPI installed on a machine (e.g. your desktop), you can run on more (virtual) processors than you have physical processors. To run multiple independent simulations from one input script, using multiple partitions, see Section_howto 4 of the manual. World- and universe-style variables are useful in this context. -plog file Specify the base name for the partition log files, so partition N writes log information to file.N.
If file is none, then no partition log files are created. This overrides the filename specified in the -log command-line option. This option is useful when working with large numbers of partitions, allowing the partition log files to be suppressed (-plog none) or placed in a sub-directory (-plog replica_files/log.lammps). If this option is not used the log file for partition N is log.lammps.N or whatever is specified by the -log command-line option. -pscreen file Specify the base name for the partition screen file, so partition N writes screen information to file.N. If file is none, then no partition screen files are created. This overrides the filename specified in the -screen command-line option. This option is useful when working with large numbers of partitions, allowing the partition screen files to be suppressed (-pscreen none) or placed in a sub-directory (-pscreen replica_files/screen). If this option is not used the screen file for partition N is screen.N or whatever is specified by the -screen command-line option. -restart restartfile remap datafile keyword value ... Convert the restart file into a data file and immediately exit. This is the same operation as if the following 2-line input script were run:

read_restart restartfile remap
write_data datafile keyword value ...

Note that the specified restartfile and datafile can have wild-card characters ("*", "%") as described by the read_restart and write_data commands. But a filename such as file.* will need to be enclosed in quotes to avoid shell expansion of the "*" character. Note that following restartfile, the optional flag remap can be used. This has the same effect as adding it to the read_restart command, as explained on its doc page. This is only useful if the reading of the restart file triggers an error that atoms have been lost. In that case, use of the remap flag should allow the data file to still be produced.
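The -plog and -pscreen naming rule can be sketched in a couple of lines (an illustration; the helper name is made up):

```python
# Sketch of the -plog / -pscreen naming rule: partition N writes to
# file.N, and a base name of "none" suppresses the per-partition files.

def partition_file_name(base, n):
    if base == "none":
        return None                  # per-partition files suppressed
    return f"{base}.{n}"

print(partition_file_name("replica_files/log.lammps", 3))
# replica_files/log.lammps.3
print(partition_file_name("none", 3))
# None
```

Passing a path with a sub-directory, as in the second example in the text above, simply produces the per-partition files inside that directory.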
Also note that following datafile, the same optional keyword/value pairs can be listed as used by the write_data command.

-reorder nth N
-reorder custom filename

Reorder the processors in the MPI communicator used to instantiate LAMMPS, in one of several ways. The original MPI communicator ranks all P processors from 0 to P-1. The mapping of these ranks to physical processors is done by MPI before LAMMPS begins. It may be useful in some cases to alter the rank order. E.g. to insure that cores within each node are ranked in a desired order. Or when using the run_style verlet/split command with 2 partitions to insure that a specific Kspace processor (in the 2nd partition) is matched up with a specific set of processors in the 1st partition. See the Section_accelerate doc pages for more details. If the keyword nth is used with a setting N, then it means every Nth processor will be moved to the end of the ranking. This is useful when using the run_style verlet/split command with 2 partitions via the -partition command-line switch. The first set of processors will be in the first partition, the 2nd set in the 2nd partition. The -reorder command-line switch can alter this so that the 1st N procs in the 1st partition and one proc in the 2nd partition will be ordered consecutively, e.g. as the cores on one physical node. This can boost performance. For example, if you use "-reorder nth 4" and "-partition 9 3" and you are running on 12 processors, the processors will be reordered from 0 1 2 3 4 5 6 7 8 9 10 11 to 0 1 2 4 5 6 8 9 10 3 7 11, so that the first partition contains processors 0 1 2 4 5 6 8 9 10 and the second partition contains processors 3 7 11. See the "processors" command for how to insure processors from each partition could then be grouped optimally for quad-core nodes. If the keyword is custom, then a file that specifies a permutation of the processor ranks is also specified. The format of the reorder file is as follows.
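The "-reorder nth N" rule can be written down directly and checked against the example above (a sketch; the function name is made up):

```python
# Sketch of "-reorder nth N": every Nth processor (the Nth, 2Nth, ...)
# is moved to the end of the rank ordering, so that the remaining
# consecutive procs fall into the first partition.

def reorder_nth(n, nprocs):
    kept  = [p for p in range(nprocs) if (p + 1) % n != 0]
    moved = [p for p in range(nprocs) if (p + 1) % n == 0]
    return kept + moved

# Reproduces the example in the text: "-reorder nth 4" on 12 processors.
print(reorder_nth(4, 12))   # [0, 1, 2, 4, 5, 6, 8, 9, 10, 3, 7, 11]
```

With "-partition 9 3", the first 9 entries of this ordering form the first partition and the last 3 (the moved procs 3, 7, 11) form the second.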
Any number of initial blank or comment lines (starting with a "#" character) can be present. These should be followed by P lines of the form: I J where P is the number of processors LAMMPS was launched with. Note that if running in multi-partition mode (see the -partition switch above) P is the total number of processors in all partitions. The I and J values describe a permutation of the P processors. Every I and J should be values from 0 to P-1 inclusive. In the set of P I values, every proc ID should appear exactly once. Ditto for the set of P J values. A single I,J pairing means that the physical processor with rank I in the original MPI communicator will have rank J in the reordered communicator. Note that rank ordering can also be specified by many MPI implementations, either by environment variables that specify how to order physical processors, or by config files that specify what physical processors to assign to each MPI rank. The -reorder switch simply gives you a portable way to do this without relying on MPI itself. See the processors out command for how to output info on the final assignment of physical processors to the LAMMPS simulation domain. -screen file Specify a file for LAMMPS to write its screen information to. Option -pscreen will override the name of the partition screen files file.N. -suffix style Use variants of various styles if they exist. The specified style can be cuda, gpu, intel, kk, omp, or opt. These refer to optional packages that LAMMPS can be built with, as described above in Section 2.3. The "cuda" style corresponds to the USER-CUDA package, the "gpu" style to the GPU package, the "intel" style to the USER-INTEL package, the "kk" style to the KOKKOS package, the "opt" style to the OPT package, and the "omp" style to the USER-OMP package. Along with the "-package" command-line switch, this is a convenient mechanism for invoking accelerator packages and their options without having to edit an input script.
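The reorder-file format and its validity rules (every I and every J appears exactly once) can be captured in a short parser sketch (illustrative; the function name is made up):

```python
# Sketch of parsing a "-reorder custom" file: skip blank/comment lines,
# then read P "I J" lines and check both columns are permutations of
# 0..P-1, as the format requires.

def parse_reorder_file(text, nprocs):
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                       # initial blanks/comments
        i, j = map(int, line.split())
        pairs.append((i, j))
    if len(pairs) != nprocs:
        raise ValueError("expected one I J line per processor")
    for col in (0, 1):
        if sorted(p[col] for p in pairs) != list(range(nprocs)):
            raise ValueError("every proc ID must appear exactly once")
    return dict(pairs)   # original rank I -> reordered rank J

print(parse_reorder_file("# swap first two ranks\n0 1\n1 0\n2 2\n", 3))
# {0: 1, 1: 0, 2: 2}
```

A file failing either check (wrong line count, or a duplicated/missing rank in either column) does not describe a permutation and would be rejected.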
As an example, all of the packages provide a pair_style lj/cut variant, with style names lj/cut/cuda, lj/cut/gpu, lj/cut/intel, lj/cut/kk, lj/cut/omp, and lj/cut/opt. A variant style can be specified explicitly in your input script, e.g. pair_style lj/cut/gpu. If the -suffix switch is used, the specified suffix (cuda, gpu, intel, kk, omp, opt) is automatically appended whenever your input script command creates a new atom, pair, fix, compute, or run style. If the variant version does not exist, the standard version is created.

For the GPU package, using this command-line switch also invokes the default GPU settings, as if the command "package gpu 1" were used at the top of your input script. These settings can be changed by using the "-package gpu" command-line switch or the package gpu command in your script.

For the USER-INTEL package, using this command-line switch also invokes the default USER-INTEL settings, as if the command "package intel 1" were used at the top of your input script. These settings can be changed by using the "-package intel" command-line switch or the package intel command in your script. If the USER-OMP package is also installed, the intel suffix will make the omp suffix a second choice, if a requested style is not available in the USER-INTEL package. It will also invoke the default USER-OMP settings, as if the command "package omp 0" were used at the top of your input script. These settings can be changed by using the "-package omp" command-line switch or the package omp command in your script.

For the KOKKOS package, using this command-line switch also invokes the default KOKKOS settings, as if the command "package kokkos" were used at the top of your input script. These settings can be changed by using the "-package kokkos" command-line switch or the package kokkos command in your script.
For the OMP package, using this command-line switch also invokes the default OMP settings, as if the command "package omp 0" were used at the top of your input script. These settings can be changed by using the "-package omp" command-line switch or the package omp command in your script.

The suffix command can also be used within an input script to set a suffix, or to turn off or back on any suffix setting made via the command line.

-var name value1 value2 ...

Specify a variable that will be defined for substitution purposes when the input script is read. This switch can be used multiple times to define multiple variables.

NOTE: Currently, the command-line parser looks for arguments that start with "-" to indicate new switches. Thus you cannot specify multiple variable values if any of them start with a "-", e.g. a negative numeric value. It is OK if the first value1 starts with a "-", since it is automatically skipped.

The current C++ version of LAMMPS began with a complete rewrite of LAMMPS 2001, which was written in F90. Features of earlier versions of LAMMPS are listed in Section_history. If you are a previous user of LAMMPS 2001, these are the most significant changes you will notice in C++ LAMMPS:

(1) The names and arguments of many input script commands have changed. All commands are now a single word (e.g. read_data instead of read data).

(2) All the functionality of LAMMPS 2001 is included in C++ LAMMPS, but you may need to specify the relevant commands in different ways.

(3) The format of the data file can be streamlined for some problems. See the read_data command for details. The data file section "Nonbond Coeff" has been renamed to "Pair Coeff" in C++ LAMMPS.

(4) Binary restart files written by LAMMPS 2001 cannot be read by C++ LAMMPS; instead, convert the restart file to a text data file and use the C++ LAMMPS read_data command to read it in.

(5) There are numerous small numerical changes in C++ LAMMPS that mean you will not get identical answers when comparing to a 2001 run.
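As an illustration of the -var switch described above, a launch might look like this (the executable and script names are hypothetical):

```text
lmp_machine -var tinit 300.0 -in in.melt
```

The input script in.melt can then substitute the value wherever ${tinit} appears.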
However, your initial thermodynamic energy and MD trajectory should be close if you have set up the problem the same way for both codes.
http://lammps.sandia.gov/doc/Section_start.html
Raspberry Pi Cluster Node – 17 InfluxDB Machine Stats Monitoring

This post builds on my previous posts in the Raspberry Pi Cluster series by performing a small refactor and storing cluster vitals data to InfluxDB.

Refactoring the codebase a little

To start with I will refactor the codebase a little so it is more split up. By introducing a few new package layers it should be easier to work with. First we are going to refactor the code a little to move all classes that deal with secondary nodes into their own package. In addition the primaries will also be moved into their own area. This creates two new package locations:

RaspberryPiCluster/PrimaryNodes
RaspberryPiCluster/SecondaryNodes

Once that has been done I am tweaking the naming of some functions and files. The MachineInfo file has been renamed to NodeVitals. This is because the file allows the "vital information" about a node to be collected. Inside this, get_base_machine_info has been renamed to get_node_baseinfo. This function is now designed to return "base information" about the node which won't change often. This would include information such as RAM, CPU, and SWAP size.

The temporal data (data that changes based on time) has been removed from this function and added to a new function, current_node_vitals. This is designed to be called to obtain the "current" state of the node and any changing "vital information". In addition to this, the method now returns an object, VitalsPayload, which is designed to hold information about the current vitals. By abstracting this data into an object it can have helper methods to load and format the payload to send to the primary.

Reporting current node vitals for secondary nodes

Now we have refactored the node vital methods they can be used to report data to the primary. This will be helpful as the primary will then be able to determine how loaded each node is. This could be important when deciding what work should be scheduled on what node.
Once the secondary has connected to the primary it currently enters a loop where it will call perform_action repeatedly. In the new code below it will check whether its vitals have been sent in the last 60 seconds. If they haven't, it will first report its vitals to the primary and then perform some action.

vitals_last_sent = 0
while True:
    if (vitals_last_sent + 60) < time.time():
        logger.info("Sending vitals to primary node")
        current_vitals = get_current_node_vitals()
        self.connection_handler.send_message(current_vitals.get_flat_payload(), "vitals")
        vitals_last_sent = time.time()
    self.perform_action()

When the primary begins to distribute work it will be able to use this information to determine what should run. At the moment, if perform_action takes a long time this will not be updated very frequently, so I will look at making this a second thread in the future.

Storing Vital Data in InfluxDB

With the above changes the secondary is now regularly sending its vital data to the primary. We are going to store this data using InfluxDB so we can see it over time. To send data to InfluxDB I am using the official Python library, influxdb. This allows connecting to, reading, and writing data into an Influx database. Currently, as the primary is the only node writing to Influx, I will only install the Influx library on that node. To handle the connection to InfluxDB I have created a class to hold the information.
import datetime

from influxdb import InfluxDBClient


class RpiInfluxClient:
    """ Simple helper class which will hold the details of the RPI Influx
    Client and make it a little easier to use """

    def __init__(self, influxdb_host, influxdb_port, influxdb_database_prefix):
        self.influxdb_host = influxdb_host
        self.influxdb_port = influxdb_port
        self.influxdb_database_prefix = influxdb_database_prefix
        self.node_vitals_database_name = self.influxdb_database_prefix + "node_vitals"
        self.node_name = None
        # We only connect once we are ready to connect
        self.influx_client = None

    def add_node_name(self, node_name):
        self.node_name = node_name

    def connect(self):
        """ Connects to the InfluxDB database using their client and sets up
        the databases needed (if needed) """
        self.influx_client = InfluxDBClient(self.influxdb_host, self.influxdb_port)
        self.influx_client.create_database(self.node_vitals_database_name)
        self.influx_client.create_retention_policy("node_vitals_year_rp", '365d', 3,
                                                   database=self.node_vitals_database_name,
                                                   default=True)

This is given all the information needed to create a connection to talk to the Influx database. The connect method is used to begin the connection and set up the data structures in Influx. This configures a simple retention policy for the data in addition to the database to store the data.

There are also two methods which are used to write data into InfluxDB. The first is a private method to write a datapoint to InfluxDB. This will be used to keep a standard format when writing data to InfluxDB. The second is a function to take vitals and store them to InfluxDB using the first. This splits apart the Vitals data structure and sends the data one by one.
def _write_datapoint(self, measurement, values):
    if self.node_name is None:
        # We shouldn't ever encounter this but if we do I want it to fail hard
        # so we can debug (not a good idea for production though)
        raise Exception("Cannot write node value without node name")

    points_to_write = [
        {
            "measurement": measurement,
            "tags": {
                "node": self.node_name
            },
            "time": datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%SZ"),
            "fields": values
        }
    ]
    self.influx_client.write_points(points_to_write, database=self.node_vitals_database_name)

def log_vitals(self, vitals):
    # TODO: Write these in one write_points API call rather than lots of smaller ones
    self._write_datapoint("cpu", {
        "frequency": vitals.cpu_frequency,
        "percentage": vitals.cpu_percentage
    })
    self._write_datapoint("ram", {"free": vitals.ram_free})
    self._write_datapoint("swap", {"free": vitals.swap_free})

Since the Python InfluxDB client allows for writing multiple values at once, a future improvement is to make a single call to write_points. I have left a note in the comments to do this. Over time, as more data is added into the vitals object, this will log more data if available. However, currently this is just logging the basic data in the object. After these changes the primary is aware of the current node vitals of all connected nodes, and is storing this for tracking purposes.

Summary of adding node monitoring

After some refactoring the secondary nodes now all report their data to the primary. This is logged by the primary into an InfluxDB which can be viewed to monitor the status of the cluster. Going forward more information will be added to keep an eye on the nodes. The full code is available on Github; any comments or questions can be raised there as issues or posted below.
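As a sketch of the single-call improvement flagged in the TODO above, the three measurements could be assembled into one list first and then written in a single write_points call. The helper name below is hypothetical and not part of the post's code:

```python
import datetime


def build_vitals_points(node_name, vitals):
    """Assemble the cpu/ram/swap measurements into a single list so they
    can be written with one write_points call instead of three."""
    timestamp = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%SZ")

    def point(measurement, fields):
        # Every point shares the node tag and the same timestamp
        return {
            "measurement": measurement,
            "tags": {"node": node_name},
            "time": timestamp,
            "fields": fields,
        }

    return [
        point("cpu", {"frequency": vitals.cpu_frequency,
                      "percentage": vitals.cpu_percentage}),
        point("ram", {"free": vitals.ram_free}),
        point("swap", {"free": vitals.swap_free}),
    ]
```

log_vitals could then make a single call along the lines of self.influx_client.write_points(build_vitals_points(self.node_name, vitals), database=self.node_vitals_database_name), halving the number of HTTP round trips per report.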
https://chewett.co.uk/blog/2744/raspberry-pi-cluster-node-17-influxdb-machine-stats-monitoring/
In performing my ongoing research into all things Internet marketing related, I'm now digging deeper into all this RSS/RDF stuff. Now I know where the "in crowd" has been hiding. Here I am fiddling with Metadata at the page level and meanwhile the rest of you are feeding XML files in addition to your Metadata. Ya, it finally clicked for me. Why am I one of the few who come late to the party? ;)

So tell me, should I Feed those 10,000 Subscribers? Do I want to interrupt a long standing email marketing campaign that has been successful in getting visitors to the site? I know, offer the Feeds and appeal to a larger audience. I fully understand that concept. But, I have some burning questions...

Let's say that 3,000 of the 10,000 subscribe to the Feeds. Prior, those 3,000 would click a link in an email which took them to the destination page, usually a promotional offer or updates of importance. That whole process has value for me from a variety of perspectives, especially from a traffic volume one. If those 3,000 are now following the Feeds, they may not click through as much, yes? No? You're going to tell me that it doesn't matter either way. They either click the link in the email or they click the link in the Feed, same thing, different technologies and concepts.

I want to talk a lot more about Feeds. I've invested quite a bit of research time into this and I feel I've become somewhat of an up and coming expert in this area. I've already researched all the technologies, formats, etc. My first feed validated out of the box, how cool is that? And, I have my own hand rolled feed generator...

<generator>MS FrontPage RSS Feed Generator ME</generator>

W3C: Feed Validation Service [validator.w3.org...]

Heh, I was really surprised to find some of the Blogs I visit daily who had errors in their feeds. How can that be? These things are very simple, how the heck can you have errors? Fix them now please?

Talk to me. Tell me how you're using RSS/RDF to your advantage.
I'm anxious to learn everything I can. Which of the formats are you using? Or, do you blend different formats to achieve RSS Nirvana? I'm blending right now using RSS 2.0 with some DC mixed in. ^ Please do smack me if I'm using incorrect terminology. I'm new to all this. :)

How do you handle RSS Channels? I see lots of neat things that I can do with setting up Channels and I'm already working on that. We have such a diverse group of services that RSS Channels is only natural. Feed me Seymour, feed me all night long!

I can hear some of you now. "Oh crap, P1R got into the RSS specs, just what we need!"

You're going to find people who will consume the data in different ways. You'll have the individuals who will use either the feed or the newsletter, but sometimes it will be both. Some people may click through to your site less if provided a feed, but at least they'll still be keeping track of your information. If a site offers me a feed I'm happy. There are very few sites I take the time to visit regularly, but I can always scan my feeds and get an update on a wide variety of sites. You're making it easier for me to aggregate your info in my own way. If you don't offer a feed, but do offer a newsletter, then your site had better be quite good. I'll subscribe to an RSS feed a lot quicker than a newsletter. Newsletters are a bigger commitment for me.

pageoneresults, personally, I unsubscribed from all my email subscriptions a while ago. During that unsubscription process, I looked for an RSS alternative for the email. If there was none, then I still unsubscribed. :) Bottom line, you can and should allow both. Plus, Feedburner does allow you to see "who" signed up via your email list, so you do have that information.

That's the part that concerns me. I'm losing the visitor to the feeds. I'm not too sure I feel 100% comfortable with that. I lose all that data associated with the visit, which from my perspective may be a little more important than the feed.
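As an aside, the "RSS 2.0 with some DC mixed in" blend mentioned above might look something like this minimal feed (all names and values purely illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Example Site Updates</title>
    <link>http://www.example.com/</link>
    <description>Promotional offers and updates of importance</description>
    <generator>MS FrontPage RSS Feed Generator ME</generator>
    <item>
      <title>New Service Announcement</title>
      <link>http://www.example.com/news/announcement.html</link>
      <description>The first one or two lines of the content...</description>
      <dc:creator>pageoneresults</dc:creator>
      <dc:date>2008-02-01T12:00:00Z</dc:date>
    </item>
  </channel>
</rss>
```

The Dublin Core elements (dc:creator, dc:date) ride alongside the standard RSS 2.0 channel and item elements via the declared namespace.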
Ya, I know, just do it and find out. ;)

I don't use my reader as much as I should, I guess. I prefer the old fashioned way of bookmarking, keeping a page of resources, etc., that I can easily follow links from. I like browsing websites and not a list of headlines and descriptions. But I'm learning, so as I get the hang of this, I'll probably feel a bit more comfortable with it all. I just don't like the idea of losing potential traffic to the site. There are all sorts of things occurring when that visitor gets to the site. You've got the visual, core content and promotional stuff that they don't get via the feed. Ya, I know, I can do quite a bit with that feed to give it some dynamics, some interactivity.

I have a feeling this could have a negative impact on the traffic analytics portion. But, if I'm using GA and FeedBurner, I'm almost certain they can tie the two together and see: "Oh, the site implemented feeds, hence the reason for the drop in traffic. Take the two stats and combine them for traffic analysis." Ya, I know, me Tin Hat is beaming right now.

For those of you using Feeds, what namespaces are you using and which elements within that namespace? Do all of your feeds validate? I can tell you that of the top ten blogs I visit, there were more than a few that did not validate.

I think the email campaigns and RSS feeds can co-exist. Let the people choose. Also, though I haven't done it, you can slip promotional stuff in a feed.

Well said. It depends on your web site. Bottom line is the bottom line. How can this make you money? If the RSS feed is killing traffic, it can be a problem. You will find all kinds of thinking on this and there is no one answer. My thinking is, don't give it all away. Create a feed that shows something is available. One idea is to offer a high-end widget that YOU control and can sell ads on, using something like Adobe AIR.

That concerns me! They must be giving away "too much" information in the feed, yes?
That would be similar to the approach in this instance. We utilize a very strict approach to CMS and have an area that is defined as the IPW paragraph. That is the first one or two lines from the content that is of the most importance. That is all we plan on including in the <description> of the feed along with the <title>. I can't see giving away too much more information. I'm already losing someone to a Feed, I don't want to lose the potential click through and revenue that may be generated.

Ya, that's a concern too and one that I am mulling over right now. We definitely need to make sure the feeds are designed to generate click throughs to the site. Without those, we lose the potential of upselling. There's a bit to consider in all of this.

Did you know that RDF was, or is, the proposed replacement and/or next step in metadata? I could never fully grasp the RDF specification until recently and it all clicked. I'll be damned! Here I am chasing metadata and I could be chasing RSS/RDF too. Woohoo!

Can I promote other people's feeds? I mean, is it okay to provide RSS links to their feeds if I feel they are of value to the audience? What is best practice in this area?

Short answer: from a technical standpoint, yes. There are lots of pre-built widgets out there to put "inline feeds" into your site. That's entirely up to you, from both a content and traffic shaping viewpoint.

The upside: If you do it right, then the landing pages where you have the inline feeds, or aggregated feeds, become a destination point for your visitors. A lot of "average" users have no idea how to use RSS, so your page becomes a convenient, one stop shop for headlines relevant to a particular subject. You can really "shape" this in a lot of different ways to ensure that you're providing relevant info that won't send your users to the competition. In many ways, easier to control than AdSense.

The downside: Outclicks. But if you did the upside correctly, this isn't a huge concern.
Rule #1: Check the TOS for the RSS feed you're planning to include. Some websites are pretty explicit about an RSS feed being "For Private, Non Commercial Use Only." Big, old school outfits (think newspapers and TV stations) are the most likely to send a flock of lawyers to pound on your door if you publish their feed without permission.

Rule #2: Carefully craft the inline feed to suit your visitors. And by this, I mean go a step beyond just picking which feeds to display. Select what parts of those feeds you want to display. There are countless custom aggregation outfits around; use them, and use them wisely. It will allow you to get pretty specific: "Check Feed X for keywords A, B, C, D, and E, but only if in combination with A or B." It really is possible to put that fine grained a control on things. Have the aggregator generate a custom RSS feed from your parameters, and then use that for your inline feed.

Rule #3: Refer to rule #1

Rule #4: Display only the headlines from outside feeds. Don't go overboard and do "Headline plus intro" or "Headline plus the whole dang article." Just give a headline and let the end user decide from that alone if it's worth pursuing.

Rule #5: Refer to rule #1

Ocean1000 is right about never visiting sites again once people subscribe to the feed. Many feeds, if not most, throw the whole article out through the feed. From a user standpoint, I find this great. It allows me to completely avoid the train wreck of bad design/user interface that IS the average website for an old school newspaper. Heck, it even allows you to avoid the train wreck of usability of sites that should know better (Wired comes to mind; it gets worse with every redesign).

On the other hand, it really works in favor of sites that do it right. WebmasterWorld is a really good example of this. Sure, WebmasterWorld pipes a good chunk of the thread down its feed, but only so much. Basically the first page of posts.
Beyond that, you have to click through to read the rest of the thread, and in the case of WebmasterWorld, participate in the conversation. The key thing here is to provide the user with a compelling reason to click through and visit your site. Given the attention to detail that you seem to show about most things, I'm pretty sure that's within your ability.

You can also build advertising right into the XML of the RSS feed. Sure, people like me aren't going to see those ads because we'll have a feed reader that will allow us to block out advertising. But people like me aren't going to see the ads on your Website anyway, because I'll block all the ads there, as well.

The upside of publishing a full feed is ease of consumption. If you are primarily publishing for branding (or personal branding) purposes, maybe this is OK. The problems I see are:

1) A less rich experience - careful layouts, good illustrations, useful sidebar content, etc., can be degraded or lost entirely when only the feed is consumed.

2) Loss of ad impressions and/or clicks if ads are present. Feed ads strike me as less effective for a variety of reasons.

In my case, #1 has kept me with the excerpt approach for now, but it bothers me that I'm forcing users to view the posts the way *I* want them to vs. the way they would prefer. I may well change my mind at some point.

We're all broadcasting information via our HTML pages, but there are many other ways our information can be disseminated. RSS feeds are just another way of getting our information out to a wider market. You just need to track that usage of your data in different ways. FeedBurner became popular as it allowed any webmaster to track the uptake and use of their feeds. The challenge is to find a way to effectively track this data.

What happens if I don't provide a feed? Well, there are plenty of services available that will take sites with no RSS feeds and create a feed for the user.
If your site doesn't have a feed I could use one of these services. However, you as the webmaster then lose all control over format, tracking, and potential ad revenue from the feed.

When I was a real feed consumer, and I had finally found a great aggregator program to sift through them all, I had a feed list that was approaching 10,000 feeds. It was a bit too much, I admit. ;) However, my aggregator could sift through it all and pick out the keywords and phrases that I was interested in, from thousands of sites, and I could whittle that information down into a reasonable selection for consumption. The quality sites would always rise to the top.

I have many RSS feeds syndicated to my RSS reader. My suggestion would be to publish 80% of the content in the RSS feed with ads; to read the remaining 20%, people will have to come to the website. Some points to remember:

1) People read emails daily, not RSS (on a regular basis), so email and RSS both work together.

2) Without RSS it becomes a lot more difficult for many of the web properties to cover you on a daily basis. This will certainly increase the reach. Say with emails 80% of people visited your site, so for 1,000 readers 800 people visited your site. With RSS only 20% of people visit your website, but it spreads much faster than email subscription. Say within a year 10,000 subscribe to your RSS; then 2,000 people will be coming to your site rather than 800, plus 10,000 people spreading your word.

RSS is a Good thing for sure :)
http://www.webmasterworld.com/rss_atom/3855185.htm
Font Handling for Visual Basic 6.0 Users

This topic compares font-handling techniques in Visual Basic 6.0 with their equivalents in Visual Basic 2005.

Conceptual Differences

Fonts in Visual Basic 6.0 are handled in two different ways: as font properties of forms and controls, or as a stdFont object. In Visual Basic 2005, there is a single Font object: System.Drawing.Font. The Font property of a form or control takes a Font object as an argument.

Setting Font Properties

In Visual Basic 6.0, font properties can be set at run time, either by assigning a stdFont object or by setting the properties directly on the control; the two methods can be interchanged. In Visual Basic 2005, the Font property of a control is read-only at run time; you cannot set the properties directly. You must instantiate a new Font object each time you want to set a property.

Font Inheritance

In Visual Basic 6.0, font properties have to be set individually for each control or form; using a stdFont object simplifies the process but still requires code. In Visual Basic 2005, font properties are automatically inherited from their parent unless they are explicitly set for the child object. For example, if you have two label controls on a form and change the font property of the form to Arial, the label controls' font also changes to Arial. If you subsequently change the font of one label to Times Roman, further changes to the form's font would not override the label's font.

Font Compatibility

Visual Basic 6.0 supports raster fonts for backward compatibility; Visual Basic 2005 supports only TrueType and OpenType fonts.

Enumerating Fonts

In Visual Basic 6.0, you can use the Screen.Fonts collection along with the Screen.FontCount property to enumerate the available screen fonts. In Visual Basic 2005, the Screen object no longer exists; in order to enumerate available fonts on the system, you should use the System.Drawing.FontFamily namespace.
Code Changes for Fonts

The following code examples illustrate the differences in coding techniques between Visual Basic 6.0 and Visual Basic 2005.

Code Changes for Setting Font Properties

The following example demonstrates setting font properties at run time. In Visual Basic 6.0, you can set properties directly on a control; in Visual Basic 2005, you must create a new Font object and assign it to the control each time you need to set a property.

' Visual Basic 6.0
' Set font properties directly on the control.
Label1.FontBold = True

' Create a stdFont object.
Dim f As New stdFont
' Set the stdFont object to the Arial font.
f.Name = "Arial"
' Assign the stdFont to the control's font property.
Set Label1.Font = f
' You can still change properties at run time.
Label1.FontBold = True
Label1.FontItalic = True

' Visual Basic 2005
' Create a new Font object. Name and Size are required.
Dim f As New System.Drawing.Font("Arial", 10)
' Assign the font to the control.
Label1.Font = f
' To set additional properties, you must create a new Font object.
Label1.Font = New System.Drawing.Font(Label1.Font, FontStyle.Bold Or FontStyle.Italic)

Code Changes for Enumerating Fonts

The following example demonstrates filling a ListBox control with a list of the fonts installed on a computer.

Upgrade Notes

When a Visual Basic 6.0 application is upgraded to Visual Basic 2005, any font-handling code is modified to use the new Font object.

Font inheritance in Visual Basic 2005 can cause unintended changes in the appearance of your application. You should check your converted application for any code that explicitly sets a font at the form or container level and, if necessary, change the font for any child controls that should not inherit that font.

During upgrade, raster fonts are converted to the default OpenType font, Microsoft Sans Serif. Formatting such as Bold or Italic is not preserved. For more information, see Only OpenType and TrueType fonts are supported.
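The code for the enumeration example referenced under "Code Changes for Enumerating Fonts" above appears to have been dropped during extraction; a sketch of both versions, assuming list controls named List1 (VB 6.0) and ListBox1 (VB 2005), might look like this:

```vb
' Visual Basic 6.0
Dim i As Integer
For i = 0 To Screen.FontCount - 1
    List1.AddItem Screen.Fonts(i)
Next

' Visual Basic 2005
For Each family As System.Drawing.FontFamily In System.Drawing.FontFamily.Families
    ListBox1.Items.Add(family.Name)
Next
```

Note that the Visual Basic 2005 version enumerates font families rather than individual character-set versions, consistent with the upgrade notes below.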
If your application contains code that enumerates fonts, raster fonts will not be enumerated in the upgraded application, and font families are enumerated rather than individual character-set versions.

See Also

Reference
Font Class
FontFamily.Families Property
http://msdn.microsoft.com/en-us/library/3essdeyy(v=vs.80).aspx