text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
)
An App Engine application can consume resources up to certain maximums, or quotas. With quotas, App Engine ensures that your application won't exceed your budget, and that other applications running on App Engine won't impact the performance of your app.
Note: The free quota levels described below changed on June 22nd, 2009. See Recent Changes to the Free Quotas below for more details. If your application requires higher quotas than the "billing-enabled" per-minute values listed below allow, you can request an increase.
Each App Engine resource is measured against one of two kinds of quota: a billable quota or a fixed quota.
Billable quotas are resource maximums set by you, the application's administrator, to prevent the cost of the application from exceeding your budget. Every application gets an amount of each billable quota for free. You can increase billable quotas for your application by enabling billing, setting a daily budget, then allocating the budget to the quotas. You will be charged only for the resources your app actually uses, and only for the amount of resources used above the free quota thresholds.
After you enable billing for your application, you can set your daily budget and adjust quota allocations for your app using the Admin Console. For more information about setting your budget and allocating quotas, see Billing.
Fixed quotas are resource maximums set by App Engine to ensure the integrity of the system. These resources describe the boundaries of the architecture, and all applications are expected to run within the same limits. They ensure that another app that is consuming too many resources will not affect the performance of your app.
When you enable billing for your app, the app's fixed quotas increase. See the Resources section for details.
App Engine records how much of each resource an application uses in a calendar day, and considers the resource depleted when this amount reaches the app's quota for the resource. A calendar day is a period of 24 hours beginning at midnight, Pacific Time. App Engine resets all resource measurements at the beginning of each day, except for Stored Data which always represents the amount of datastore storage in use.
Historical note: The 24-hour replenishment cycle was introduced in December 2008. It replaced a more complicated system of "continuous" replenishment, to make it easier to report and control resource usage.
In addition to the daily quotas described above, App Engine moderates how quickly an app can consume a resource, using per-minute quotas. This protects the app from consuming all of its quota in very short periods of time, and keeps other apps from affecting your app by monopolizing a given resource.
If your application consumes a resource too quickly and depletes one of the per-minute limits, the word "Limited" will appear by the appropriate quota on the Quota Details screen in the Admin Console. Requests for resources that have hit their per-minute maximum will be denied. See When a Resource is Depleted for details.
As with the daily fixed quotas, there are two levels of per-minute quotas, depending on whether billing has been enabled or not. See the quota tables in the Resources section for details.
When an app consumes all of an allocated resource, the resource becomes unavailable until the quota is replenished. This may mean that your app will not work until the quota is replenshed.
For resources that are required to initiate a request, when the resource is depleted, App Engine returns an HTTP 403 Forbidden status code for the request instead of calling a request handler. The following resources have this behavior:
For all other resources, when the resource is depleted, an attempt in the app to consume the resource results in an exception. This exception can be caught by the app and handled, such as by displaying a friendly error message to the user. In the Python API, this exception is
apiproxy_errors.OverQuotaError.
The following example illustrates how to catch the
OverQuotaError, which may be raised by the
SendMessage() method if an email-related quota has been exceeded:
try: mail.SendMessage(to='test@example.com', from='admin@example.com', subject='Test Email', body='Testing') except apiproxy_errors.OverQuotaError, message: # Log the error. logging.error(message) # Display an informative message to the user. self.response.out.write('The email could not be sent. ' 'Please try again later.')
If you're going over your system resource quota unexpectedly, consider profiling your app's performance.
A Python application can determine how much CPU time the current request has taken so far by calling the Quota API. This is useful for profiling CPU-intensive code, and finding places where CPU efficiency can be improved for greater cost savings. You can measure the CPU used for the entire request, or call the API before and after a section of code then subtract to determine the CPU used between those two points.
The
google.appengine.api.quota package provides the
get_request_cpu_usage() function. This function returns the amount of CPU resources that the current request has spent so far, as a number of megacycles. This number is proportional to the "CPU Time" quota measurement, but does not include the CPU speed multiplier. This number does not include CPU used by API calls.
In the development server, this function returns 0.
import logging from google.appengine.api import quota start = quota.get_request_cpu_usage() do_something_expensive() end = quota.get_request_cpu_usage() logging.info("do_something_expensive() cost %d megacycles." % (start - end))
This function is not yet available for Java applications. It may be added in a future release.
An application may use the following resources, subject to quotas. Resources measured against billable quotas are indicated with "(billable)." Resource amounts represent an allocation over a 24 hour period.
The cost of additional billable resources is listed on the Billing page.
The amount of data sent by the app in response to requests.
This includes data sent in response to both secure requests and non-secure requests, data sent in email messages, and data in outgoing HTTP requests sent by the URL fetch service.
The amount of data received by the app from requests.
This includes data received by the app in secure requests and non-secure requests, and data received in response to HTTP requests by the URL fetch service..
One tool to assist you in identifying areas in the application which use high amounts of runtime CPU quota is the
cProfile module. For instructions on setting up profiling while debugging your application, see "How do I profile my app's performance?".
You can examine the CPU time used to serve each request by looking at the Logs section of the Admin Console. While profiling will assist in identifying inefficient portions of your Python code, it's also helpful to understand which datastore operations contribute to your CPU usage.
The amount of data stored in entities and corresponding indexes.
It's important to note that data stored in the datastore may incur significant overhead. This overhead depends on the number and types of associated properties, and includes space used by built-in and custom indexes. Each entity stored in the datastore requires the the following metadata::
Fixed quotas for applications with billing enabled were not affected. | http://code.google.com/appengine/docs/quotas.html | crawl-002 | refinedweb | 1,206 | 55.13 |
- Overview
- Table of Contents
- Special Member Functions: Constructors, Destructors, and the Assignment Operator
- Operator Overloading
- Memory Management
- Templates
- Namespaces
- Time and Date Library
- Retrieving the Current Time
- Time Differences and Time Zones
- High Resolution Time Measurement and Timers, Part I
- High Resolution Time Measurement and Timers, Part II
- High Resolution Timers
- Summary
- Online Resources
-<<
High Resolution Time Measurement and Timers, Part I
Last updated Jan 1, 2003.
Standard C++ inherited the <ctime> time and date library from C. While this library offers many useful time and date facilities, it doesn’t support a high-resolution time measurement. The POSIX standard defines extensions to this library which are capable of measuring time in microseconds (millionths of a second) and even nanoseconds (billionths of a second). Let’s see how they are used.
Measuring Time
First, let’s look at a quick reminder. In standard C and C++, you use the time() function to obtain the current timestamp. The time is represented as the number of seconds that have elapsed since The Epoch, or 1/1/1970 at midnight. In a typical 32 bit system, time_t will rollover in 2038. However, as more platforms are gradually moving to a 64 bit time_t, this time bomb so to speak will have disappeared by then.
POSIX defines a fine-grained time measuring function called gettimeofday():
#include <sys/time.h> #include <unistd.h> int gettimeofday(struct timeval * tv, struct timezone *tz);
If struct timeval looks familiar, it’s because I discussed awhile ago when I explained non-blocking I/O using select(). Its declaration is repeated here for convenience:
struct timeval { int tv_sec; /*seconds*/ int tv_usec; /*microseconds*/ };
struct timezone is declared as follows:
struct timezone { int tz_mniuteswest; /*minutes west of Greenwich*/ int tz_dsttime; /*dst correction type*/ };.
The following code listing displays the current timestamp in seconds and microseconds:
struct timeval tv; struct timezone tz; gettimeofday(&tv, &tz); printf("the current time of day represented as time_t is %d and %d microseconds", tv.tv_sec, tv.tv_usec);
Using Timers
A timer is a resource that enables a process or a thread to schedules an event. A program uses a timer to ask the kernel to notify it when a certain amount of time has elapsed. There are two types of timer: synchronous and asynchronous. When a process uses a timer synchronously, it waits until the timer expires, usually by means of calling sleep() or a similar syscall. Sleeping means that the process is removed from the kernel’s scheduler for a certain amount of time. POSIX defines four different functions for sleeping, each of which measures time in different units.
sleep()
sleep() causes the process to sleep at least nsec seconds or until a <a href="">signal</a> that the process doesn’t ignore is received.
unsigned int sleep (unsigned int nsec);
If sleep() hasn’t slept nsec seconds, it returns the number of seconds left. Otherwise, it returns 0. sleep() is mostly useful for polling a certain resource in fixed intervals of time. For example, a mail client that polls its mail server every 10 minutes can use a loop that contains a sleep(600); call. This is also the most common syscall for sleeping.
usleep()
usleep() causes the process to sleep for at least usec microseconds:
void usleep (unsigned long usec);
usleep() causes the process to sleep for at least usec microseconds. No signals are used in this case. Most implementations use select() to implement this function. It’s equivalent to calling:
timeval tv; tv.tv_sec=0; tv.tv_usec=usec; int select(0, NULL, NULL, NULL, &tv);
nanosleep()
nanosleep() offers a resolution of nanoseconds:
int nanosleep(struct timespec *req, struct timespec * rem);
The struct timespec is declared as follows:
struct timspec { long int tv_sec; /*as in timeval*/ long int tv_nsec; /*nanoseconds*/ };
nanosleep() causes the process to sleep at least the amount of time indicated in req or until a signal is received. If nanosleep() terminates earlier due to a signal it returns -1 and rem is filled with the remaining time. nanosleep() offers the highest resolution (theoretically, up to a billionth of a second) although it’s the least portable of these four functions. This function is mostly used in real-time environments, allowing a process to sleep a precise amount of time.
In the second part of this series I will discuss interval timers, a facility that delivers signals on a regular basis to a process. | http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=272 | CC-MAIN-2014-41 | refinedweb | 732 | 60.24 |
C++ implementation of this approach:
// C++ program to find maximum number to be removed // to convert a tree into forest containg trees of // even number of nodes #include<bits/stdc++.h> #define N 12 using namespace std; // Return the number of nodes of subtree having // node as a root. int dfs(vector<int> tree[N], int visit[N], int *ans, int node) { int num = 0, temp = 0; // Mark node as visited. visit[node] = 1; // Traverse the adjacency list to find non- // visited node. for (int i = 0; i < tree[node].size(); i++) { if (visit[tree[node][i]] == 0) { // Finding number of nodes of the subtree // of a subtree. temp = dfs(tree, visit, ans, tree[node][i]); // If nodes are even, increment number of // edges to removed. // Else leave the node as child of subtree. (temp%2)?(num += temp):((*ans)++); } } return num+1; } // Return the maxium number of edge to remove // to make forest. int minEdge(vector<int> tree[N], int n) { int visit[n+2]; int ans = 0; memset(visit, 0, sizeof visit); dfs(tree, visit, &ans, 1); return ans; } // Driven Program int main() { int n = 10; vector<int> tree[n+2]; tree[1].push_back(3); tree[3].push_back(1); tree[1].push_back(6); tree[6].push_back(1); tree[1].push_back(2); tree[2].push_back(1); tree[3].push_back(4); tree[4].push_back(3); tree[6].push_back(8); tree[8].push_back(6); tree[2].push_back(7); tree[7].push_back(2); tree[2].push_back(5); tree[5].push_back(2); tree[4].push_back(9); tree[9].push_back(4); tree[4].push_back(10); tree[10].push_back(4); cout << minEdge(tree, n) << endl; return 0; }
Output:
2
Time Complexity: O(n).
Reference:. | http://www.geeksforgeeks.org/convert-tree-forest-even-nodes/ | CC-MAIN-2017-13 | refinedweb | 278 | 62.17 |
Digital DJ Free Web Music System 3.0
Sponsored Links
Digital DJ Free Web Music System 3.0 Ranking & Summary
RankingClick at the star to rank
Ranking Level
User Review: 10 (1 times)
File size: 5,800K
Platform: Windows 9X/ME/NT/2K/2003/XP/Vista
License: Freeware
Price:
Downloads: 3766
Date added: 2000-01-28
Publisher: PC Technical Services
Digital DJ Free Web Music System 3.0 description
Digital DJ Free Web Music System 3.0. Audio & Multimedia
Digital DJ Free Web Music System 3.0 Screenshot
Digital DJ Free Web Music System 3.0 Keywords
Bookmark Digital DJ Free Web Music System 3.0
Digital DJ Free Web Music System 3.0 Copyright
WareSeeker.com do not provide cracks, serial numbers etc for Digital DJ Free Web Music System 3.0. Any sharing links from rapidshare.com, yousendit.com or megaupload.com are also prohibited.
Featured Software
Want to place your software product here?
Please contact us for consideration.
Contact WareSeeker.com
Related Software
Girlsense Music Player allows you to play your favorite music Free Download
SHOCKING LOW PRICE for Studio quality music and video automation. Non stop, no repeat music, 24-7 reliability, Scheduling, tracking, Mixing. Broadcasting, Background systems, Digital Juke Boxes, Disc Jockeys, Home Systems. RIAA and DMCA compliant Free Download
import music collection comfortably into the local database Free Download
D&BA Music Player Widget plays all your music and favorite D&B tracks. Free Download
Website Music Player is a useful utility that gives you the possibility to easily add music to you Web site Free Download
Amsoft Music Player can plays MP3, wave and midi files. Free Download
A music player for the tracklist provided by South by Southwest Music. Free Download
This soft hat is the music player which operates on Microsoft Windows95. A reproducible music file is amodule file in which sampling data of a tone was contained. Free Download
Latest Software
Popular Software
Favourite Software | http://wareseeker.com/Audio-Multimedia/digital-dj-free-web-music-system-3.0.zip/24427 | CC-MAIN-2018-05 | refinedweb | 325 | 50.33 |
hey im trying to write a program that has a loop in it and takes the percentage of the total number of scores and the scores that passed which are above 70
here's my code when i go to run it and input the scores and it comes up with a question mark as an answer
Get back to me ASAPGet back to me ASAP<import java.util.Scanner; import java.text.NumberFormat; public class DANPercentPassage { public static void main(String[] args) { int score, pass, fail, total; double percent; percent = 0; pass = 0; fail = 0; total = pass + fail; Scanner sc = new Scanner(System.in); NumberFormat per = NumberFormat.getPercentInstance(); per.setMinimumFractionDigits(2); per.setMaximumFractionDigits(0); do{ System.out.print("Enter the score: "); score = sc.nextInt(); if(score>=70) pass = pass + 1; else if(score>0&&score<70) fail = fail + 1; }while(score!=-1); percent = (double) pass/total; if(score == -1) System.out.print("The percent of the scores is "+ per.format(percent)); } } And heres my console window when i run it Enter the score: 80 Enter the score: 80 Enter the score: 35 Enter the score: 35 Enter the score: -1 The percent of the scores is ?%> | http://www.javaprogrammingforums.com/whats-wrong-my-code/27669-need-help-some-code.html | CC-MAIN-2018-09 | refinedweb | 197 | 54.83 |
This content has been marked as final. Show 7 replies
1. Re: jdbcrp0428 Sep 13, 2012 1:39 PM (in response to 962021)Welcome to the forum!
>
i am facing a problem of "no suitable driver found " in java database connectivity using type 4 drivers although i have set the class path of jar file.
so please give me some hint to solve this problem.
>
You should mark this question ANSWERED and repost the question in the JDBC forum.
When you repost provide the java code, the class path you are using and the jar file name.
That problem can be caused by a syntax error when specifying the url; there is not driver that understands the url specified.
See Chap 8 Data Sources and URLs in the JDBC Developer's Guide for how to specify urls.
String url = "jdbc:oracle:thin:@//myHost:1521/service_name";
2. Re: jdbcrukbat Sep 13, 2012 1:51 PM (in response to 962021)Moderator Action:
Post moved from the New To Java forum,
to the JDBC forum,
for closer topic alignment.
3. Re: jdbc962021 Sep 14, 2012 9:21 AM (in response to rukbat)my source code is-
import java.lang.*;
import java.sql.*;
class Testdb4o {
public static void main(String a[]) {
try{
try{
Class.forName("oracle.jdbc.driver.oracleDriver");
}
catch(ClassNotFoundException c){System.out.println(c);}
Connection c=DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:xe","system","oracle");
Statement s=c.createStatement();
ResultSet rs1=s.executeQuery("select * from Students");
while(rs1.next()){
System.out.println(rs1.getString(1));
System.out.println(rs1.getString(2));
}
//c.close();
}
catch(SQLException e){System.out.println(e);}
catch(Exception i){System.out.println(i);}
}
}
and i havae set the classpath of the driver as-------"C:\oraclexe\app\oracle\product\10.2.0\server\jdbc\lib\ojdbc14.jar;"
please suggest me some solution to this problem.
4. Re: jdbcrp0428 Sep 14, 2012 11:45 AM (in response to 962021)>
please suggest me some solution to this problem.
>
Thanks for posting the code and classpath. Now we can see what the problem is.
This code
is using the wrong class name. The name of the class is 'OracleDriver' with an uppercase 'O'.
Class.forName("oracle.jdbc.driver.oracleDriver");
Use
That ojdbc14.jar is an ancient jar file. You should upgrade to the current jar version.
Class.forName("oracle.jdbc.driver.OracleDriver");
5. Re: jdbc962021 Sep 15, 2012 11:15 AM (in response to rp0428)thank you sir, now every thing is working well.........
6. Re: jdbcdsurber Sep 17, 2012 11:39 AM (in response to rp0428)If you are using JDK 1.6 or later, there is no need for Class.forName at all. It's a slow method and a waste of time. See.
If you are using an ancient, desupported, pre-1.6 driver, then the correct class name is "oracle.jdbc.OracleDriver". No "...driver..." . oracle.jdbc.driver.OracleDriver has been desupported since 11.1.
7. Re: jdbcrp0428 Sep 17, 2012 11:48 AM (in response to dsurber)OP is the one with the quesion/issue. If you have information that you think will assist them you should direct your response to them. | https://community.oracle.com/message/10580406 | CC-MAIN-2015-14 | refinedweb | 525 | 52.36 |
.
With Java 7
So lets start with some POJO class.
public class Item { int weight; BigDecimal price; public Item(int weight, BigDecimal price) { this.weight = weight; this.price = price; } public int getWeight() { return weight; } public BigDecimal getPrice() { return price; } }
Now let’s imagine that we want to write function which will return price converted to String for all items with weight more than 5.
In plain Java it can look like this:
public List getBigItemsPricesAsStrings(List items) { List result = new ArrayList(); for (Item item : items) { if (item.getWeight() > 5) { result.add(item.getPrice().toString()); } } return result; }
It is not specially readable. We have if statement with some magic numbers and conversion from BidDecimal to String in the next line. In order to know how this method works we have to analyze it line by line. What is the first step to make it better? Refactor.
public List getBigItemsPricesAsStrings(List items) { List result = new ArrayList(); for (Item item : items) { if (itemIsBig(item)) { convertPrice(result, item); } } return result; } private void convertPrice(List result, Item item) { result.add(item.getPrice().toString()); } private boolean itemIsBig(Item item) { return item.getWeight() > 5; }
Now it is better, but still in one method we mix different things – we loop through items, we check condition and convert something. So now lets change whole idea and use FluentIterable.
import static com.google.common.collect.FluentIterable.from; public List getBigItemsPricesAsStrings(List items) { return from(items).filter(onlyBig()).transform(priceToString()).toList(); } private Function priceToString() { return new Function() { @Override public String apply(Item item) { return item.getPrice().toString(); } }; } private Predicate onlyBig() { return new Predicate() { @Override public boolean apply(Item item) { return item.getWeight() > 5; } }; }
That was usage of FluentItarable in Java 7 so without lambdas. The desired method is now one line and is quite clear. From this code we can easily tell what it does. In order to use FluentIterable we have to implement Predicate and Function interfaces. I did it by adding methods to class which return desired types with implementation. Those elements can be extracted to separate compilation units or to class like: ItemPredicates or ItemFunctions if we have more of them.
With Java 8
So now we have Java 8 and lambdas. Lets see what we can do with this example.
import static com.google.common.collect.FluentIterable.from; public List getBigItemsPricesAsStrings(List items) { return from(items).filter(item -> item.getWeight() > 5) .transform(item -> item.getPrice().toString()).toList(); }
I would like to stop here for a moment. In previous example we have splitted levels of abstraction and details were hidden in Predicate and Function implementations. Now with direct usage of lambdas we mix those levels again. We are getting one line but it starts to look a little bit ugly. In comparison to previous implementation we again have to think and analyze what this code does. We don’t see it immediately.
If we now write next POJO class
public class AnotherItem { int weight; int lenght; int height; int width; BigDecimal price; public AnotherItem(int weight, int lenght, int height, int width, BigDecimal price) { this.weight = weight; this.lenght = lenght; this.height = height; this.width = width; this.price = price; } public int getWeight() { return weight; } public int getLenght() { return lenght; } public int getHeight() { return height; } public int getWidth() { return width; } public BigDecimal getPrice() { return price; } }
And would like to use it with similiar method, we would get:
import static com.google.common.collect.FluentIterable.from; public List getBigAnotherItemsPricesAsStrings(List items) { return from(items).filter(item -> item.getWeight() > 5 && item.getLenght() > 10 && item.getHeight() > 20 && item.getWidth() > 5).transform(item -> item.getPrice().toString()).toList(); }
As you can see it is getting worse and worse. So how to overcome this problem? Lets Extract Method
import static com.google.common.collect.FluentIterable.from; public List getBigAnotherItemsPricesAsStrings(List items) { return from(items).filter(item -> isItemBig(item)).transform(item -> convertPrice(item)).toList(); } private String convertPrice(AnotherItem item) { return item.getPrice().toString(); } private boolean isItemBig(AnotherItem item) { return item.getWeight() > 5 && item.getLenght() > 10 && item.getHeight() > 20 && item.getWidth() > 5; }
So what did we get in the end? Instead of simple method call from Java 7 (methods create Predicate and Function) we have lambdas with methods containing logic. It is quite readable and doesn’t contain boilerplate code. The difference is basically only syntactic sugar.
Summary
With Java 8 we can use lambdas and stop writing boilerplate code. It can be very useful, but we have to think when writing and we have to find good balance between code readability and minimalism. Usually we tend to write less but with writing less can lose a lot. We can lose on code readability and get hit by future maintenance. Also we start to mix levels of abstraction and details of implementation with overall logic.
Moreover, with bad usage of lambdas we get hard to understand code. This code can no longer work as its own documentation.
So as always, when new things are available to us – think before use, and if you use, use it wisely.
Pingback: Java 8 – Streams API instead of FluentIterable | Coffee Driven Development
Pingback: Maria Smith | http://coffeedriven.org/code-readability-with-fluentitarable-and-java-8/ | CC-MAIN-2018-47 | refinedweb | 844 | 60.01 |
Project:I am making a radio controlled bristlebot, and planning to use a Baby Orangutan to read in 3 PWM signals, and use them to control 2 rotary encoded motors. I must say I am impressed with your libraries and the Orangutan, as I was able to get software installed along with positional motor control working within an evening. However, when I tried adding code for reading PWM, I encountered an error while building the project. I simplified the code to narrow down the problem.
Hardware:Using the Baby Orangutan B-328
Setup:I'm using Atmel Studio 7 (7.0.1188). I created the project using "File -> New Project" and selecting "Baby Orangutan B-328"
Problem:This code in main.c produces an error while compiling or linking.
#include <pololu/orangutan.h>
//-------------------------------------------------------------------------------------
int main()
{
encoders_init(IO_B0, IO_B1, IO_B2, IO_B3);
unsigned char pulseChannels[] = {IO_D0, IO_D1};
pulse_in_start(pulseChannels, 2);
}
This is the error:
Error multiple definition of `__vector_3' orangutan_app1 C:\home\david\libpololu-avr\src\PololuWheelEncoders\PololuWheelEncoders.cpp 256
Error multiple definition of `__vector_4' orangutan_app1 C:\home\david\libpololu-avr\src\PololuWheelEncoders\PololuWheelEncoders.cpp 256
Error multiple definition of `__vector_5' orangutan_app1 C:\home\david\libpololu-avr\src\PololuWheelEncoders\PololuWheelEncoders.cpp 256
Question:What is the best way to fix this problem? I searched for similar issues, and it seems it may be conflicts with interrupts. If this requires modifying library source code, what changes should I make, and how would I recompile it most easily?
Thanks!
Hello.
Thank you for your interest in the Baby Orangutan, and for the detailed and organized explanation of your problem.
Yes, the OrangutanPulseIn and PololuWheelEncoders components of the Pololu USB AVR C/C++ Library define their own interrupt service routine (ISR) for the pin-change interrupts on the AVR. The AVR can only have one ISR for each interrupt, so these libraries conflict.
Library modifications
You can probably make your program work by modifying the library. I would recommend that you change the ISRs in OrangutanPulseIn.cpp and PololuWheelEncoders.cpp to just be normal functions instead of real ISRs. You would have to get rid of all the lines that refer to interrupt vectors PCINT0_vect, PCINT1_vect, PCINT2_vect, and PCINT3_vect. Also, define your ISRs with extern "C" so you can call them from your C program. So the ISR definition in each file would look something like:
extern "C"
extern "C" void pulse_in_isr(void)
{
// ISR code here
}
Then, in main.c, you need to define an actual ISR that calls these functions. Something like this should work:
main.c
void pulse_in_isr(void);
void encoders_isr(void);
ISR(PCINT0_vect)
{
pulse_in_isr();
encoders_isr();
}
ISR(PCINT1_vect,ISR_ALIASOF(PCINT0_vect));
ISR(PCINT2_vect,ISR_ALIASOF(PCINT0_vect));
Compiling the modified library
We usually compile the library on Ubuntu using Ubuntu's AVR GCC cross compiler and standard tools like Bash and GNU Make. However, I think it will be easier for you to instead just copy all the library files that you need into your own Atmel Studio project and compile them as part of that project. You should not use our templates; you would need to make a new project by selecting File -> New Project -> C/C++ -> GCC C++ Executable Project. This kind of project can have both C and C++ source files in it. I recommend that you add your main.c file to it, try to build the project, and then fix compilation and linker errors by adding needed .cpp and .h files from the source code of the library to your project until everything builds. If you have some familiarity with C or C++ already, I think the errors will not be too hard to fix, and you can post the errors here if you need help.
While you are developing this, it might be good to move or rename the "pololu" folder in Atmel Studio's AVR GCC toolchain so that you can make sure that you are using your own copies of the Pololu AVR C/C++ Library header files instead of using the main library's header files. However, since you don't need to modify any of those header files, it will not be too bad if you accidentally use them. The "pololu" folder that gets installed by our library in Atmel Studio's toolchain can be found here, assuming you installed Atmel Studio in the default location:
C:\Program Files (x86)\Atmel\Studio\7.0\toolchain\avr8\avr8-gnu-toolchain\avr\include\pololu
ISR performance
Since your new ISR would call functions in other source files, the compiler might be forced to save/restore many registers that are not actually used, and this would slow down the ISR. There is a good chance that your application will work fine anyway, but if you want to speed up the ISRs, you might consider enabling GCC's link-time optimization (LTO) feature. However, I have not looked into whether link-time optimization is actually supported in Atmel Studio. (It is supported in the Arduino IDE, another environment for AVR programming that is compatible with the Baby Orangutans.)
--David
Ok, I've done this and got it compiling with no errors, though I can't test it on the device for now. I looked and didn't see the Link-time optimization. Maybe I can combine the PololuWheelEncoders and OrangutanPulseIn files as an alternate solution. With this solution of combining ISRs, it appears to me that we will be running extra code on interrupts. Is the code prepared to handle this? Is this because we are limited in our number of interrupts?
Thanks for the help.
I am glad you were able to get it to compile.
There is a good chance that the extra overhead in the ISR will not make a noticeable difference in your application, so I would recommend testing the code first before you try to optimize it.
It looks like there are 12 extra registers that avr-gcc stores on the stack when it is calling an opaque function in an ISR. Each register takes 4 cycles to save and restore, so that is 48 cycles, or 2.4 microseconds. There will also be a few cycles of overhead to call the functions and return from them.
Even though Atmel Studio does not have a feature for link-time optimization, the compiler they are using does. I was able to build a program with LTO today in Atmel Studio 7.0.1006 by adding the -flto command line argument to Project Properties -> AVR/GNU C Compiler -> Miscellaneous -> Other flags. I also had to copy a file named specs-avr5 from one part of Atmel Studio to another so that GCC's lto-wrapper program could find it. Since it did not work out of the box, I would consider Atmel Studio's LTO to be an experimental feature.
-flto
specs-avr5
lto-wrapper
Yes, you could combine PololuWheelEncoders and OrangutanPulseIn into one file, or you could make some private variables for the libraries be globally visible so that you can copy all the code for those ISRs into the ISR you defined.
Yes, by combining the ISRs likes that, they are going to run more often than they normally would. The encoder ISR will run whenever there is a change in your PWM input pins, and the PWM ISR will run whenever there is a change in your encoder input pins. The ISR code can handle that.
If you want to avoid running the extra code, you might consider changing your pin assignments to make sure that there is no pin-change interrupt vector that handles events for both the libraries. For example, you might put all of the encoder inputs on PCINT0_vect and all of the PWM inputs on PCINT1_vect if that is feasible. The mapping between pins and pin-change interrupts is documented in the "External Interrupts" section of the ATmega328P datasheet.
I had the time to give this all a shot this weekend, after fixing many other small bugs in my code. It all went pretty smoothly and is working well. I just have a few notes for others that may want to do something similar.
The initialize for OrangutanPulseIn clears the initialization of PololuWheelEncoders. Just initialize OrangutanPulseIn first, then PololuWheelEncoders, and they will both work fine.
I used PD0, PD1, and PD2 for my PWM in pins. For some reason my PD1 didn't work, and I'm not sure why, so I just switched to using PD7, and all 3 channels worked without issue.
My update loop tested at around 500 cycles per second, and I only need 50 per second, so I didn't need to bother with doing any micro optimizations, or try the link time optimization.
Thanks for all your help with this. I am impressed by the knowledge and helpfulness of Pololu. I've also got to commend you on your libraries, as I was able to implement everything I needed, while keeping my code clean, and operating on a high level. I'm on to designing the mechanics of this project now that all the electronics and code are sorted out.
I am glad you were able to get your program working.
On the Baby Orangutan, pin PD1 controls the red user LED and it is also the serial transmit (TX) pin for the AVR's UART. If you are using the red LED in your program or using the AVR's UART, that would explain why PD1 did not work for PWM input. | https://forum.pololu.com/t/confliction-between-pololuwheelencoders-and-orangutanpulsein/12067 | CC-MAIN-2017-51 | refinedweb | 1,579 | 69.41 |
Signing in with Google and Facebook has been a norm for as long as one can remember. This changed on June 3, 2019. During WWDC 2019 at San Jose, Apple announced the new ‘Sign in with Apple’. This feature will only be compatible with versions iOS13 and later.
Being an iOS app development company, we keep eye on upcoming trends and technologies from Apple. This article is the output of it. In this Sign in with Apple (iOS Swift) tutorial, you will learn about how to integrate this feature in your app.
Contents
- Introduction
- Why ‘Sign in with Apple’?
- How to integrate ‘Sign in with Apple’ feature in your iOS app?
- Benefits of Apple Sign in
- Guidelines to Use Sign-in With Apple Feature in Your iOS Apps
- FAQs
- Conclusion
Introduction
With every announcement, Apple brings in newer features and better-updated versions of its technology. In the WWDC in 2019, Apple announced various new updates and features like Sign in with Apple,Dark Mode, iOS 13 features, and Multiple windows feature in iPadOS.
Apple is making its sign-in required whenever a third-party (whether it’s Facebook, Google, Instagram, Snapchat, or any other) log-in option is provided, giving users a private choice. Seeing WWC 2019, one of our experienced iOS developers wrote this Apple Swift tutorial on how to integrate ‘Sign in with Apple’ feature into your iOS application.
Want to Develop an iOS App with ‘Sign in with Apple’ Feature?
If you want to integrate Sign in with Apple feature in your iOS mobile application and want to make the login of your mobile app simpler and safer for your user, this is the right blog for you..
Security
A user account is always secure as every account using ‘Sign In with Apple id’ gets automatically protected with two-factor authentication. This is done by adding Touch ID or Face ID as a second layer of protection after the username/password combination.
Multiple platforms
It works natively on iOS, macOS, tvOS, and watchOS. You can also deploy it on your website and in different versions of your apps running on other platforms. For apps on Android devices and the web, users are sent to a ‘web view’ of the app where they need to click on Sign in with Apple and enter their AppleID and password to complete their sign in.
Prevent fraud
Using advanced machine learning it can send alerts if the new account on your app is (probably) not a real person. The developer can take this alert into consideration while processing their own additional anti-fraud measures or while providing permission to features in their apps
Create a New Project in Xcode
Go to “Target” and click on “Capability”
Add the “Sign in with Apple” Capability in your project
Go to the “ViewController.swift” and Import ASAuthenticationServices framework (framework for “Sign In with Apple”)
Add the “Sign in with Apple” button in ViewController after adding the target for TouchUpInside action and add action to the login button click.
Handle ASAuthorizationController Delegate and Presentation Context for success / failure response.
You need to enable Sign in with Apple in your developer account.
import AuthenticationServices
This provides the ASAuthorizationAppleID button.! } }
Go to “Certificates, Identifiers & Profile” section and then click on the “Keys” option.
Click on the “Create a key” option. Enter the name of the key and enable the “Sign in with Apple” option
Done.
Now, we’ll talk about some of the benefits of this feature.
What are the Benefits of Sign in with Apple?
- This feature allows the user of the app to create an account, complete with all the details like name, email address, and verifiable stable identifiers.
- This account will help the user to sign in anywhere where your app is deployed- iPhone, iPad, macOS, tvOS.
- If your app requires a user account for better performance or for unlocking specific functionality, the sign in with Apple feature is helpful.
- The feature also helps the user to create an account after they have interacted with certain features of the app or made a purchase.
- The same account lets the user to reauthenticate on the app or website an.
Want to Develop an iOS App with the Latest Features?
We will now answer some of the frequently asked questions.
Frequently Asked Questions
Which apps use the sign in with Apple?
A bunch of top app developers has embraced the feature pretty quickly. You will find this feature in apps like Poshmark, Zillow, Bumble, Adobe, TikTok, GroupMe among other apps.
Will the developer/owner get any detail of the user when he chooses to sign in with Apple?
The developer only receives the name of the user associated with the Apple ID and email address. All the other information like contact info, social ID is not sent to the developer. There are stable identifiers that are used to keep all the information private.
Which are the other features of iOS 13?
- Dark Mode
- Revamped Photos app
- HomeKit Secure Video
- All-new Reminders app
- Memoji and stickers
- Name and image in Messages
- Smarter, smoother Siri voice assistance
- Swiping keyboard
- Multi-user HomePod
Conclusion
We hope you found this tutorial helpful. With the help of this iPhone app tutorial, we have learned how to integrate the ‘Sign about iOS app development, Android app development, or any mobile app development in general, we are happy to guide you. | https://www.spaceotechnologies.com/blog/sign-in-with-apple-ios-tutorial/ | CC-MAIN-2022-33 | refinedweb | 900 | 61.26 |
Hi('abc')) [(0, 'a'), (1, 'b'), (2, 'c')] >>> list(enumerate('abc', 1)) [(1, 'a'), (2, 'b'), (3, .
Simple Server
Do you want to quickly and easily share files from a directory? You can simply do:
# Python2 python -m SimpleHTTPServer # Python 3 python3 -m http.server
This would start up a server.
Evaluating Python expressions
We all know about eval but do we all know about literal_eval? Perhaps not. You can do:
import ast my_list = ast.literal_eval(expr)
Instead of:
expr = "[1, 2, 3]" my_list = eval(expr)
I am sure that it’s something new for most of us but it has been a part of Python for a long time.
Profiling a script
You can easily profile a script by running it like this:
python -m cProfile my_script.py
Object introspection
You can inspect objects in Python by using dir(). Here is a simple example:
>>> foo = [1, 2, 3, 4] >>> dir(foo) ['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__delslice__', ... , 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']
Debugging scripts
You can easily set breakpoints in your script using the pdb module. Here is an example:
import pdb pdb.set_trace()
You can write pdb.set_trace() anywhere in your script and it will set a breakpoint there. Super convenient. You should also read more about pdb as it has a couple of other hidden gems as well.
Simplify if constructs
If you have to check for several values you can easily do:
if n in [1,4,5,6]:
instead of:
if n==1 or n==4 or n==5 or n==6:
Reversing a list/string
You can quickly reverse a list by using:
>>> a = [1,2,3,4] >>> a[::-1] [4, 3, 2, 1] # This creates a new reversed list. # If you want to reverse a list in place you can do: a.reverse()
and the same can be applied to a string as well:
>>>>> foo[::-1] 'boosay'
Pretty print
You can print dicts and lists in a beautiful way by doing:
from pprint import pprint pprint(my_dict)
This is more effective on dicts. Moreover, if you want to pretty print json quickly from a file then you can simply do:
cat file.json | python -m json.tools
Ternary Operators
Ternary operators are shortcut for an if-else statement, and are also known as a conditional operators. Here are some examples which you can use to make your code compact and more beautiful.
[on_true] if [expression] else [on_false] x, y = 50, 25 small = x if x < y else y
Thats all for today! I hope you enjoyed this article and picked up a trick or two along the way. See you in the next article. Make sure that you follow us on Facebook and Twitter!
Do you have any comments or suggestions? You can write a comment or email me on yasoob.khld (at) gmail.com
17 thoughts on “Nifty Python tricks”
>>> a = [1,2,3,4]
>>> a[::-1]
It does not reverse the list, it creates a new list which is reversed. The ‘list’ class has a reverse() method for that purpose.
(but if ‘a’ is a string — strings are immutable — the a[::-1] way is quite good)
do a = a[::-1] problem solved, lol
Hey, these are great tidbits. :: hard clapping ::
Good tips..Thanks
Curious that you’re mentioning ast.literal_eval without any reason. It’s actually a much more limited eval and is not designed to evaluate general Python. The ast module is for modeling Python’s syntax. Try a few statements with literal_eval; you will not get far.
The limits ARE the feature: We use lit.eval all the time for *secure* (de)serializiation of structures into strings and back. The reason we prefer compared to json is this:
>>> ‘é’ in json.loads(json.dumps([‘é’]))
False
>>> ‘é’ in literal_eval(str([‘é’]))
True
you must use range instead of xrange for python3
a[::-1] slicing is obtuse and made obsolete by the `reversed` builtin:
Thank you, love the tips especially enumerate, will save me some lines!
I think we should use `{1,4,5,6}` instead of `[1,4,5,6]` for `in` operator because `set` can access each element by O(1).
You can even use a ternary operator in a list comprehension:
[x**2 if x > 10 else x**4 for x in range(40)]
Very good! Thank you 🙂
Great work
Lovely site and informative.
Thanks a lot for your love of sharing.
How do i put an output of last python command into a var for example url = (scheme, resultid[0]) (gives error ??
On simple if/else statements like shown under Ternary Operators, I like to use indexing where the test returns 0 (False) or 1 (True). So you could represent as: small = (y,x)[x<y] | https://pythontips.com/2015/04/19/nifty-python-tricks/ | CC-MAIN-2018-39 | refinedweb | 792 | 73.37 |
A map is an indexed data structure, similar to a vector or a deque. However, a map differs from a vector or deque in two important respects:
First, in a map the index values or key values need not be int, but can be any ordered datatype. For example, a map can be indexed by real numbers, or by strings. Any datatype for which a comparison operator can be defined can be used as a key. As with a vector or deque, elements can be accessed through the subscript operator or other techniques.
Second, a map is an ordered data structure. This means that elements are maintained in sequence, the ordering determined by key values. Because maps maintain values in order, they can very rapidly find the element specified by any given key. Searching is performed in logarithmic time. Like a list, maps are not limited in size, but expand or contract as necessary as new elements are added or removed. In large part, a map can simply be considered a set that maintains a collection of pairs.
In other programming languages, a map-like data structure is sometimes referred to as a dictionary, a table, or an associative array. In the C++ Standard Library, there are two varieties of maps:
The map data structure demands unique keys; that is, there is a one-to-one association between key elements and their corresponding values. In a map, the insertion of a new value that uses an existing key is ignored.
The multimap permits multiple different entries to be indexed by the same key.
Both data structures provide relatively fast insertion, deletion, and access operations in logarithmic time.
Whenever you use a map or a multimap, you must include the map header file.
#include <map> | http://stdcxx.apache.org/doc/stdlibug/9-1.html | CC-MAIN-2013-48 | refinedweb | 294 | 62.58 |
Feature #1981
[PATCH] CSV Parsing Speedup
Description
=begin
This patch replaces the regex used in the Ruby 1.9 CSV parser with ruby code.
Running all CSV tests (ts_all.rb) is much faster (36% on my machine). Probably because they don't have to rebuild the regex over and over again. I tested on large CSV files and got an average speedup of 23% on my machine.
(James, this patch is improved & faster from the one I emailed to you on 8/21. Putting into this ticket to make it easier to track any changes.)
=end
History
#1
Updated by JEG2 (James Gray) about 8 years ago
- Due date set to 12/31/2009
- Target version set to 2.0.0
=begin
I just wanted to state that I am aware of this patch and do plan to apply something like it. I'm purposefully waiting a bit to see how the new FasterCSV release, which uses a similar non-regex approach, fairs in the wild. When I fill confident we've made the right choice to switch parsing strategies, I will commit some version of this patch.
=end
#2
Updated by JEG2 (James Gray) over 7 years ago
=begin
Timothy, is it possible for you to redo this patch against the current trunk, so I can look at what it would take to get it applied?
You mention that it is faster which is great. Are you also pretty confident that it maintains CSV's m17n savvy implementation? I don't want to lose the ability to parse in any encoding. I have some tests for this and I'll try to look over the code as we change it, but I know I'm not perfect and could have missed something.
=end
#3
Updated by ender672 (Timothy Elliott) over 7 years ago
- File ruby_19_csv_speedup_02.patch ruby_19_csv_speedup_02.patch added
- File csv.rb csv.rb added
=begin
I updated the patch to against trunk. Attaching to this ticket update.
I took care to make it pass all tests, including the m17n tests in test/csv/test_encodings.rb . I have not tested it against m17n CSV files outside of the tests. It would be a good idea to test against a few large m17n CSV files to make sure that the new operations used by this parser (String.gsub!, String#count & String#last) perform well in that scenario.
=end
#4
Updated by JEG2 (James Gray) over 7 years ago
=begin
Thanks Timothy, but this patch doesn't include your latest fix from the FasterCSV side, to restore the strict parser behavior. This tests is currently failing:
def test_non_regex_edge_cases
# An early version of the non-regex parser fails this test
[ [ "foo,\"foo,bar,baz,foo\",\"foo\"",
["foo", "foo,bar,baz,foo", "foo"] ] ].each do |edge_case|
assert_equal(edge_case.last, CSV.parse_line(edge_case.first))
end
assert_raise(CSV::MalformedCSVError) do CSV.parse_line("1,\"23\"4\"5\", 6") end
end
Can I bother you for one more patch that includes this fix as well? Thanks!
(I'm applying all of the new FasterCSV fixes and am now missing only this in my local checkout, so a patch adding this after the latest patch you gave me would be best. You also don't need to add the test, since I already have.)
James Edward Gray II
P.S. It is quite a bit faster and does seem to play well with m17n, from what I can see. Great work!
=end
#5
Updated by ender672 (Timothy Elliott) over 7 years ago
=begin
Patch attached.
Thanks,
Tim Elliott
=end
#6
Updated by JEG2 (James Gray) over 7 years ago
- Status changed from Open to Closed
=begin
Thanks for all the help Tim. We're all caught up now with all of your patches applied.
=end
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/1981 | CC-MAIN-2017-39 | refinedweb | 632 | 81.33 |
Help:Categories
- See Wikibooks:Categories for a description of the Wikibooks-specific usage of categories.
Categories are a software feature of MediaWiki which allows indexing of pages, files, templates and other categories (referred collectively from now on as simply pages). Together with links and templates, categories help structure a book.
Pages are said to be "members of a category" if a category link is included in the text. Pages can be members of zero, one or more categories. Pages that are not a member of any category are said to be uncategorized. Pages should be a member of at least one category, with the exception of Category:Categories, so that all pages can be found through the category system. An automated list of all uncategorized pages is maintained at Special:UncategorizedPages, Special:UncategorizedFiles, Special:UncategorizedTemplates and Special:UncategorizedCategories.
Category pages
Categories have an editable part that typically contains at least a description of the category's purpose and category links to any related categories. If the editable part has not been created, links to the category will treat the category as a new page.
Categories which themselves are categorized are said to be a subcategory for another category. However categories allow more flexibility than simply a parent-child relationship.
Category pages may have a:
- Description section. This section is the only editable part and can include anything that is normally allowed on any other page.
- Related categories section, which lists categories that were categorized using this category if any.
- Books and Pages section, which list books and pages that were categorized using this category if any.
- Files section, which lists files that were categorized using this category.
Adding __NOGALLERY__ to a category causes categorized files to be shown only by their name in the Books and Pages section, rather than showing a preview for supported file formats.
Category pages list only 200 pages at a time.
Adding pages
Each page lists the categories that it is a member of. Categories are listed in the order they first occurred. To add a page to a category, simply put
[[Category:NAME]] in any page you are editing. Where NAME is the name of the category you want to add it to. This provides a link to the appropriate category page, which is in the "Category" namespace. Any number of categories may be added to a page and the page will be listed in all the categories. Categories can be added wherever you like in the text, but in general for the convenience of other editors, categories are added at bottom of content pages and at the top of discussion pages. Files can be categorized by adding categories to their description page.
If you want to link to a category without the current page being added to it, you should use the link form
[[:Category:NAME]] (where NAME is the category name). Note the extra ":" before Category.
Please familiarize yourself with Wikibooks:Categories, before you start categorizing pages.
Sorting pages
A "sort key" can be added that specifies where a page will appear within a category. This is done by using [[Category:NAME|SORT]]. For example [[Category:Help|Category]] would add a page to the Help category and sort it under the "C" heading after "Can" and before "Catalog". Sort keys are case sensitive and even a space is recognized as a sort key. A sort key only affects how pages are ordered. A sort key does not change what name is used to display category and page links.
Without a sort key pages by default are sorted using their full page name, this includes any namespace they belong to. The default sort order can be overridden by adding
{{DEFAULTSORTKEY:SORT}} to pages where SORT is the name to sort pages under.
Pitfalls and Gotchas
- Category pages currently cannot be moved or renamed.
- Pages added to a redirected category continue to show in the original category rather than the new category.
- Pages can only be added to a category once; only the last category link is used along with its sort key and the rest are ignored.
- Category listings may be outdated at times when categories contained within templates are changed.
- "What links here" only shows pages which include
[[:Category:NAME]]links. Note the extra ":".
- "Related changes" can't be reliably used to detect when pages are added or removed as it only shows changes to pages that are currently members of the category. | http://en.wikibooks.org/wiki/Help:Categories | CC-MAIN-2014-10 | refinedweb | 741 | 55.24 |
conj, conjf, conjl - calculate the complex conjugate
#include <complex.h> double complex conj(double complex z); float complex conjf(float complex z); long double complex conjl(long double complex z); Link with -lm.
The conj() function returns the complex conjugate value of z. That is the value obtained by changing the sign of the imaginary part. One has: cabs(z) = csqrt(z * conj(z))
These functions first appeared in glibc in version 2.1.
C99.
cabs(3), sqrt(3), complex(7)
This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. 2008-08-11 CONJ(3) | http://huge-man-linux.net/man3/conjl.html | CC-MAIN-2017-13 | refinedweb | 114 | 58.99 |
General Web development / Nova chat
1.1is the current branch on Nova Press
@daveismyname I have update also the
composer.json from the branches of
Please be kind to check if everything is OK on packagist side for this app.
@daveismyname
HP@DESKTOP-7TPCTSP MINGW64 /c/xampp/htdocs/nova
$ php forge module:migrate
There are no commands defined in the "module" namespace.
Header always set Access-Control-Allow-Origin "*" Header always set Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Access-Control-Allow-Origin" Header always set Access-Control-Allow-Methods "PUT, GET, POST, DELETE, OPTIONS”
Hi, guys!
I solved the CORS issues on Nova 4.2 per route, in the following way:
// /app/Middleware/Cors.php namespace App\Middleware; use Closure; class Cors { public function handle($request, Closure $next) { return call_user_func($next, $request) ->header('Access-Control-Allow-Origin', '*') ->header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS'); } }
// /app/Config/App.php /** * The Application's route Middleware. */ 'routeMiddleware' => array( 'auth' => 'Nova\Auth\Middleware\Authenticate', 'guest' => 'App\Middleware\RedirectIfAuthenticated', 'throttle' => 'Nova\Routing\Middleware\ThrottleRequests', 'cors' => 'App\Middleware\Cors', // <--- Add this line ),
// /app/Routes/Api.php Route::get('cors-works-here', array('middleware' => 'cors', function () { return 'CORS works here!'; }));
The example route responds on
/api/cors-works-here and I suggest to people to limit their CORS routes to API, because the WEB way on Nova is basically a true fort defending the app against CORS.
Everything there fights against CORS, from sessions to auth system.
// /app/Middleware/Cors.php namespace App\Middleware; use Closure; class Cors { public function handle($request, Closure $next) { return call_user_func($next, $request) ->header('Access-Control-Allow-Origin', '*') ->header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS') ->header('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type, X-Token-Auth, Authorization'); } }
@ZhaoLin1457 Hmm...
This thing is really nasty:
->header('Access-Control-Allow-Origin', '*')
Maybe it's good enough for tests, but on the wild I will prefer to enforce the acceptance of well known CORS origins, like in something similar with:
class Cors { protected $container; public function __construct(Container $container) { $this->container = $container; } public function handle($request, Closure $next) { $config = $this->container['config']; // Get the "Origin" header from Request. $origin = $request->header('Origin', $url = $config->get('app.url')); // Handle the middleware flow and retrieve the Response. $response = $next($request); // Retrieve all accepted CORS origins. $origins = array_merge((array) $url, $config->get('cors.origins', array())); if (! in_array($origin, $origins)) { return $response; } return $response->header('Access-Control-Allow-Origin', $origin) ->header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS') ->header('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type, X-Token-Auth, Authorization'); } }
There is supposed that the valid CORS
origins are enumerated in an array from a config file
app/Config/Cors.php
And regarding the API and Auth System, there's a way to use stateless authenticated requests via API: using the Token Guard.
In theory, you need a special API authentication endpoint, which returns a token on successful logins. Then this token should beused to authorize the following requests.
hey @DonWagner, hope your keeping well.
Did you install /app or /framework repo?
I’ve just installed from app using
composer create-project nova-framework/framework novatemp that installed correctly I’ve then connected to a database and ran
php forge seed that worked also did
php forge db:seed and also tried
php forge package:migrate and
php forge package:seed all worked without issue
A new Q from the Dane...
Now I have searched and searched, but I have not been able to find an answer I could use. That is why I now ask in here.
How to easily create a list of users without the users mentioned in another table.
Eg: In my database I have the User table with a column called zip code. I also have a table called Cities with a list of zip codes.
Now I would like to make a list of the users from the User table, which does not have the zip codes that are in the Cities table.
How to do it easiest? Or smartest?
I’ve been re-reading this a few times I don’t quite get what you need as you can select all users where the zipcode is empty, but it sounds like that’s not what you need.
To select all users that does not have a zip code but is in the cities table? how does a user relate to the cities table?
Thanks Dave, i will try to explain in another way.
It's only a test setup, to learn from. So there is no logic why i'm using zip codes, that could be anything :laughing:
I have made a table with 100 users, all with a zip code (they don't have the same zip code):
id|name|zip_code
Then i have another table called Cities, where i'm adding some cities and there zip codes:
id|city|zip_code|
I then have a list of all the users. Then every time a new city is added to the Cities tabel, the users from the same zip_code, as the newly added city, will be removed from the list.
I hope that made more sens :smil:
Thank you for your patience. I apologize for being poor at explaining.
I try like this:
I would like to make a list of users who come from zip codes that are not in the Cities table.
So the list will look like this:
Martin 8240 Otherwise 8210 Christian 8200 Niels 8000.
Then I add
Aarhus C | 8200 to the
Cities table and I add
Aarhus V | 8210
Then the list will look like this:
Martin 8240 Niels 8000
It's certainly simple, but I can't tell if you need to use whereNotIn. I can't really make it work | https://gitter.im/nova-framework/framework/novausers | CC-MAIN-2019-51 | refinedweb | 972 | 53.81 |
by Matthew Ford 15th January 2015 –
corrected library include (originally posted 1st April
2014)
© Forward Computing and Control Pty. Ltd. NSW Australia
The Arduino langauge provides an
AnalogRead()
method to perform A/D coversions on the analog inputs A0 to A5 etc.
The AnalogRead()
method has two problems:-
i) AnalogRead() halts the main loop() while it waits for the A/D conversion to complete, typically 0.1mS to 0.2mS for each conversion.
ii) AnalogRead does NOT discard the first reading after changing the reference voltage. The Atmel datasheets for the micro-processors Arduino uses advises the user that the first A/D reading after changing the reference voltage may be incorrect and should be ignored.
The library available here solves both those problems. There are a number of other A/D conversion libraries available that use interrupts to report back when the conversion is finished. Using interrupts is an un-necessary complication and requires adding interrupt handlers and taking special care to access the results atomically. Using the polling approach is a much simpler and more straight forward.
Sample Code for Polling Analog Read
#include <elapsedMillis.h> #include <pollingAnalogRead.h> elapsedMillis batteryVoltsTimer; int batteryVoltsInterval = 1000; // the interval at which the battery volts is read void loop() { checkBatteryVolts(); // call this every loop …. other loop code here } void checkBatteryVolts() { if (batteryVoltsTimer > interval) { batteryVoltsTimer -= interval; //reset the timer triggerAnalogRead(A0); // start another conversion // if the last conversion not finished // or the value not read yet (by calling getAnalogReading()), // then calls to triggerAnalogRead() have no effect. } if (isAnalogReadingAvailable()) { // conversion finished batteryVoltsReading = getAnalogReading(); // pick up reading // Serial.print(F(" Reading:")); // Serial.println(batteryVoltsReading); } }
In the above code, checkBatteryVolts is call every loop(). Once a second, an analog read on A0 is triggered. Every loop isAnalogReadingAvailable() is checked to see if the A/D conversion has finished and the reading is a available. When a new reading is available, it is stored in batteryVoltReading for later use.
Download pollingAnalogRead.zip to your computer, move it to your desktop or some other folder you can easily find and then use Arduino 1.5.5 IDE menu option Sketch → Import Library → Add Library to install them.).
Stop and restart the arduino IDE and under File->Examples you should now see pollingAnalogRead example.
Description
pollingAnalogRead for Arduino performs AnalogReads without holding up the main loop() and automically discards the first reading after the reference voltage is changed, as recommended by the Atmel datasheets.
void pollingAnalogReference(uint8_t mode) – set the A/D reference voltage, see the Arduino AnalogRead() reference page for details. When a different reference is set this library automatically makes two A/D conversions and discards the first and returns the second.
bool isAnalogReadingAvailable() – returns true when the A/D conversion has been triggered and has finished and a reading is available. It only returns true once after the A/D conversion completes.
void triggerAnalogRead(uint8_t pin) – starts a new A/D coversion but only if the last conversion has finished and isAnalogReadingAvailable() has been called and returned true.
int getAnalogReading() – get the last A/D conversion result. This can be called more then once. The latest A/D conversion value is returned.
The pollingAnalogReadTest illustrates how pollingAnalogRead continues to process the loop() while waiting for the A/D conversion to complete.
The Serial monitor shows the number of times loop() is executed between readings. If you use the standard AnalogRead() the loop count would be only 1 for each read. | https://www.forward.com.au/pfod/ArduinoProgramming/pollingAnalogReadLibrary/index.html | CC-MAIN-2018-05 | refinedweb | 574 | 56.15 |
22 May 2012 11:08 [Source: ICIS news]
SINGAPORE (ICIS)--Linde LienHwa has signed a deal with ?xml:namespace>
Linde LienHwa is a joint venture between
Under this agreement, Linde LienHwa will provide Samsung Electronics with a turnkey installation of the thin-film transistor (TFT)-LCD plant’s bulk gases supply systems, the company said in a statement.
This includes the construction of a new on-site nitrogen generator which will supply gas to Samsung Electronics via an underground pipeline, it said.
“Overall investment is in the region of €50m ($64.1m),” the company added without elaborating further.
Industrial gases are key components in the production of transistors that control the pixels in LCD screens, according to Linde LienH | http://www.icis.com/Articles/2012/05/22/9561993/linde-lienhwa-s-koreas-samsung-electronics-sign-gas-supply-deal.html | CC-MAIN-2015-14 | refinedweb | 119 | 52.7 |
I have a function defined by making use of the first-class nature of Python functions, as follows:
add_relative = np.frompyfunc(lambda a, b: (1 + a) * (1 + b) - 1, 2, 1)
Either I need a way to add a docstring to the function defined as it is, or achieve the same thing using the more common format, so that I can write a docstring in the normal way:
def add_relative(a, b):
"""
Docstring
"""
return np.frompyfunc(lambda a, b: (1 + a) * (1 + b) - 1, 2, 1)(a, b)
which works when the function is called like
add_relative(arr1, arr2)
but I then lose the ability to call methods, for example
add_relative.accumulate(foo_arr, dtype=np.object)
I guess this is because the function becomes more like a class when using frompyfunc, being derived from ufunc.
frompyfunc
ufunc
I'm thinking I might need to define a class, rather than a function, but I'm not sure how. I would be ok with that because then I can easily add a docstring as normal.
I tagged this coding-style because the original method works but simply can't be easily documented, and I'm sorry if the title is not clear, I don't know the correct vocabulary to describe this.
coding-style
Update 1:
Close, but this still isn't good enough. Because the __doc__ attribute of the decorated function cannot be updated, and because Sphinx still only picks up the docstring of the decorated function, this doesn't solve my problem.
__doc__
Update 2:
The solution that I proposed below is nice for documentation within the source code. For documentation with Sphinx I ended up just overwriting the docstring with
.. function:: sum_relative(a, b)
<Docstring written in rst format>
It's ugly, it's hacky and it's manual but it means that I have my nice documentation in the source code, and I have my nice documentation in Sphinx.
All of the issues stem from the fact that the __doc__ attribute of a numpy.ufunc is immutable. If anyone knows why, I'd love to hear why. I'm guessing something related to the fact that it comes from something written in C, not pure Python. Regardless, it's very annoying.
numpy.ufunc
I found that I could solve the issue using a decorator to apply np.frompyfunc().
np.frompyfunc()
I write the base function (the lambda in the original example) and add a docstring as normal and then apply the decorator:
def as_ufunc(func):
return np.frompyfunc(func, 2, 1)
@as_ufunc
def sum_relative(a, b):
"""
Docstring
"""
return (1 + a) * (1 + b) - 1
It's not a perfect solution for the following reasons:
sum_relative.__doc__ is overwritten by frompyfunc to a generic and unhelpful docstring. I don't mind here because I really care about the docs generated with Sphinx from the docstring, and not accessing it programatically. You might think to try something like functools.wraps or functools.update_wrapper, however the __doc__ member of a numpy.ufunc is apparenly immutable.
sum_relative.__doc__
functools.wraps
functools.update_wrapper
I have to hardcode the second two arguments for frompyfunc. I don't mind here because all the cases that I use it here will require the same values.
Edit: It is possible to get around the above point, it's a little more verbose, but not much:
def as_ufunc(nin, nout):
def _decorator(func):
return np.frompyfunc(func, nin, nout)
return _decorator
@as_ufunc(2, 1)
def sum_relative(a, b):
"""
Docstring
"""
return (1 + a) * (1 + b) - 1 | http://jakzaprogramowac.pl/pytanie/58152,documenting-first-class-assigned-functions | CC-MAIN-2017-26 | refinedweb | 586 | 61.87 |
Welcome to WebmasterWorld Guest from 54.167.252.62
Forum Moderators: LifeinAsia & httpwebwitch
Mostly I found that the worth is about 12 months income for content revenue sites and 6 months for e-shops
What's the latest on this?
Anyone any ideas?
Actually the best rule of thumb is 5-7 times the gross profit
Hmm, so if I'm turning over $1,000,000 and making a million in gross but $0 in net profit I can sell my "business" for $5 -$7 million?
As someone who regularly buys and sells sites and sometimes consults on the subject I never ceased to be amazed by some of the figures I see.
I wonder how many people have been burnt on this basis
'it is worth whatever someone wants to give you for it', which is a pretty useless remark
As someone who regularly buys and sells sites and sometimes consults on the subject I never ceased to be amazed by some of the figures I see.
How about sharing your formula with us? If you do this so often I think we could benefit from your experience as opposed to simply telling us we are way off base.
FH
In summary
1. It's net earnings that buyers are more worried about, not gross. Net excludes all costs, from actual costs like advertising and hosting to less visible costs like a fair value for owner's time spent managing the site.
2. If people are using past earnings to price a site they are doing it in the expectation that those earnings are a guide to future earnings. The current price is then a discounted value of estimated future earnings.
3. Sites worth above a certain figure are better handled by an experienced broker than by the seller himself (Sorry, I don't make broker recommendations so no stickies on that, please).
With respect your "formula" (and the OP's "a website's value ranges from 0 to 5 times yearly earnings"), there isn't one. Each business is individual and the price it can command is a function of what the strengths of the business are, which prospective buyers it reaches, and when. Sorry if that's not helpful.
fair value for owner's time spent managing the site.
That's a very good and often overlooked point. One of my sites is netting around 125k/year however it currently has no employees. It has always been difficult for me to put a value on the site (not that it's for sale) because I do whatever needs to be done for the site. It's not time consuming, only requires about 7 - 8 hours a week to run but if someone were to purchase the site they would have to:
a. know everything I know.
b. subcontract out for maintenance, design, product development, programming, marketing, customer service, etc.
c. hire someone that knows everything I know.
There is an enormous value/cost in running your own site that many people overlook.
I have made numerous posts in WW on the subject of buying/selling and valuing sites
My bad, I wasn't aware of your previous posts on the subject. Thank you for posting the links to those. I will go read them over. In the meantime you mentioned above if we had a specific questions...
The current price is then a discounted value of estimated future earnings.
I have a degree in finance, so I am aware of the principle of discounting something, but typically I have seen this done in terms of discounting future earnings based on your rate of return. For example if I am looking at an annuity as an investment and it will pay me $100 per month for the rest of life I simply discount those future payments by the rate of interest I want to earn on my money (say 9%) to determine what I should pay for that investment in today's dollars to get that rate of return on the money I will invest. Is that what you are referring to here?
fair value for owner's time spent managing the site.
I agree this is a valuable thing to consider, but how do you put a value on this? Some people work faster than others, have better skills, more experience, etc. In addition, everyone values their time differently. I might charge $150 an hour for my time and consider that fair, but someone else might think I am totally off my rocker with such a rate. Therefore, what is a fair value for your time and who determines this value?
Last question, when selling a site, to some degree the past success of that site may have a lot to do with the current owner and his/her knowledge on a topic, the extent of their skills, etc. So how do you work around this when someone is buying a site so that the person buying it can have even a reasonable chance of duplicating your success with it? To some degree this is the same issue you run into when trying to sell a web development business, but if you don't have a team of developers and the company is "you" then "you" are the company so how do you sell that?
I find this topic extremely intriguing for a number of reasons so thank you for posting this.
FH
what is a fair value for your time and who determines this value?
If I were a buyer I wouldn't care much what you thought your time or skills were worth, I'd value them according to what it would cost me to replace them.
That could vary a lot depending on whether I had the skills myself, could learn them, could hire them, or maybe figure out a way to reduce the need for them. Another value to consider would be what else I could do with my time instead of running the site or managing help.
The same site could easily be worth very different amounts to different people.
but if you don't have a team of developers and the company is "you" then
buckworks has answered the other questions better than I could.
FH is right on the discounting. If you expect to earn $100 over the next five years how much do you invest now to reap that return?
Investing in business the investor has to see a way to recoup his investment if you showed me a business doing a million with no net I would walk as how could I recoup my investment.
How could I go to a bank and borrow the money to buy a business breaking even. I could if there were other variables mixed into this equation but it would take much more up front money to close the deal and much more work to get the deal closed.
I was giving him the easiest and simplest way to at least have a starting point to place a value on a business.
Now I agree in all casses that won't work take utube or myspace but they sold traffic volume and and don't fit the norm.
5-7 times the net lets the investor recoup his investment in 5-7 years and allows the ability to borrow money much much easier.
1) Your earlier post was about gross, not net (and you mentioned it multiple times so it wasn't a typo)
2) 5-7x profit in your example is $125K-$175K (not $200K); even your maths is wrong
3) ;)
To give you a better understanding we have just sold a site I manage for a 6 figure sum so I do have some knowledge in this matter, but since you are the self proclaimed expert in all sales please understand you have a wonderful formula to help you and that is good.
If someone has a different opinion please keep your arrogant remarks to yourself it just shows me you really are trying to show off in the forum and does have a negitive effect on anything you say.
I consider gross and net the same but you are correct I stand corrected they are different.
These are VERY different, you really need to understand this concept cold before you go into a meeting with a potential seller and/or buyer of a business or web site. I would recommend you get a couple of basic books on finance and accounting at your local library and bone up on this topic before going too far down this road. Mistakes like this could cost you a bunch of money. At the very least the other side will realize you aren't experienced at this which might end up costing you even more money.
When I said net I do mean gross profit as a company can adjust their net through better management.
You can also improve gross [profit] with good management as well, i.e. buying from a cheaper source, hiring cheaper talent, using lower cost inputs, etc. Again you really need to understand the differences between these and how you can manage each and what control you have and don't have over each before you buy. Believe me if you are selling and the seller knows what they are doing and is a sharp business person they will know this inside and out before you ever see a piece of paper with a written offer on it.
FH
If someone has a different opinion please keep your arrogant remarks to yourself it just shows me you really are trying to show off in the forum and does have a negitive effect on anything you say.
Here's one thing you'll find is absolutely, astoundingly, amazingly coincidental: People who claim 5x-7x is reasonable are never the same people who have money in their pockets ready to buy ;)
I would not buy any site for 5 years profit seems bad idea , the internet changes so quickly that who can predict those profits 5 years down the road .
I think if I was buying based on profit would be looking at one year to two years profit ( not going into Gross and Net to complicated for me ) because at the end of two years you would possibly have to look at re-design, re-moniterize using different model and a very different Internet than today.
Plus if I was interested in buying sites would be looking at vastly different matrix age, natural traffic, links, repeat visitors, growth potential, technical requirements and spec, bandwidth requirements ( any Video, Music type sites could have fantastic visitor numbers but high bandwidth requirements, Current Profit, Potential Profit, .
I suspect as a poster stated earlier once your into serious financial investment your best bet is to employ a professional for both valuation and contract.
steve
Valuation of any asset is complex and has a variety of changing contributing factors.
Using a simple multiple may work to an extent, but to truly realize a great investment, its a little more complex.
Would you trade stocks based only on a price to earnings ratio?
An exit plan is a good thing to have before you start a business ;) but it's never too late to plan this.
Given the variety of different websites around, with wildly different rates of growth and riskiness, trying to find a range of multiples of "websites" per se is probably a waste of time.
Even thinking of just plain content sites, there is a huge difference between the multiple (of profit) it is wroth paying for:
1) A spammy MFA that risks getting blasted into oblivion with every algo change.
2) A site with brilliant content that keep producing steady growth.
e-commerce sites, community sites, etc. are all very different beasts. ;)
True, but only because most websites that offered for sale are the sort of sites that tend to lose ground with every algo change.
If you can convince me that a site is really stable, I would snap it up at 4x, and would probably pay 10x, and even more if there is good growth (on net profit less value of time to run it). That would still give me a better return than other investments.
Think of all the well publicised big sites that have sold for high multiples. They did not get the high multiples just because they were big: they became big because they had a formula for success, and that is why they are worth a high multiple.
You might find the odd site that is high quality on a small scale (the best in a small niche, say). I think there are a good many of them around, but their numbers are far fewer than the other sort. They probably tend to be sold less frequently, because their owners know what they are worth, and will not sell into a market that wants to pay them low multiples: there is probably an information asymmetries here because it is difficult for the buyers to establish the real quality of a site (are there paid back links that will disappear? Are there links from the current owner's higher PR site? Is there paid traffic? Are the earnings faked? IS there some black hat SEO you have not spotted? ...).
and even more if there is good growth (on net profit less value of time to run it)
If you can convince me that a site is really stable, I would snap it up at 4x, and would probably pay 10x,
I agree that there are as many business models as there are wannabe millionaires, but a business is a business. Due diligence doesn't just involve ascertaining the profit; you need to know how that profit is achieved and the risks inherent in the business proposition. Sure, sites lose ground with algo changes (that's why you factor a bigger risk the higher the percentage of traffic from easily changeable sources like SEs). Sure, a business "model" (MFAs?) may fade into obscurity. There are a million other risks. The risks help determine the multiple.
are there paid back links that will disappear? Are there links from the current owner's higher PR site? Is there paid traffic? Are the earnings faked? IS there some black hat SEO you have not spotted? ...
the internet changes so quickly that who can predict those profits 5 years down the road
What you are referring to is risk, in this case risk that the market will change and no longer desire your particular product/service. Anytime an investor buying any business is asked to take on additional risk you demand a higher rate of return on your investment. For example, if I am buying government T-bills, which have very low risk I get paid 4% for my investment dollars. If, on the other hand, I am asked to invest in wild cat oil speculation in Alaska that might be considerably more risky hence I might ask for 20-40% return on the investment dollars.
When you are buying a web site you have to be able to make an educated guess about the market of said web site 5 years out and if the risk is high that the market won't want it any longer than you demand a high rate of return, which simply means when you discount projected earnings that you get a lower dollar figure of what that is worth today. If the risk is lower than your rate of return may be lower and once the same discounting is applied you might be willing to pay a bit more in today's dollars for that web site.
How you make the educated guess of how the market will react to that web site 5 years in the future is probably a good point to debate, any takers?
FH
Economically it is the correct thing to do, but it is not what GAAP or IFRS says.
If you're willing to pay 10x get your butt outta here quick and head off
I had a look at what is being sold at Sitepoint right now. Most of them are just like a million other sites (games sites for example). In fact, after looking though several pages I found one site that looked anything like decent. It has declining traffic, but a good position in a local area and good backlinks. This site has a current bid of about one year's REVENUES on an optimistic projection (it is probably making a loss after you deduct the value of time to run it) and a BIN of nearly four times that.
Now that is a site that is well short of what I would consider a quality site. The criteria that make something worth a premium, are the same as would apply to any business from a corner shop to huge listed companies.
Suppose you were considering buying a small chain of shops. Suppose it has little sales growth, few expansion opportunities, and Tesco (if you are in Britain, Say Walmart or similar elsewhere) have announced they are going to start stocking whatever the shop sells. You would play a very low multiple for it.
Now suppose that it is a rapidly expanding small chain. It has gone from five shops to ten in the last year, and it is easy to find hundreds of sites across the country for further openings. You might well pay say 50x profit for it, and still consider it a bargain, because it is that sort of chain that can become a huge national chain, like Bodyshop and Newlook did (sorry, UK examples only, because that is what I know).
You also need to think about barriers to entry (for most websites, none), how much it would cost a buyer to replicate the site for themselves (usually, not much), etc.
Compared with retailers websites are much more varied. e-commerce websites are on-line retailers, content sites are media businesses. Other sites provide services, others provide a medium to a community.
My point is that websites are too varied to value against each other in a simple fashion, as many people seem to want to.
Maybe due dil is a skill in itself and I should setup a business providing that service ;)
There is certainly a need for the service, the biggest problem I could see is persuading buyers that you are competent and trustworthy, when they may not know enough to assess your competence.
If, on the other hand, I am asked to invest in wild cat oil speculation in Alaska that might be considerably more risky hence I might ask for 20-40% return on the investment dollars.
Not quite true, as you may know. the risk on the oil speculation is almost entirely diversifiable, so you should calculate and expected return (taking the chance of nothing being found into account) and then discount at the same rate as other upstream oil.
There is certainly a need for the service, the biggest problem I could see is persuading buyers that you are competent and trustworthy
there is no accounting standard I know of
You might well pay say 50x profit for it...because it is that sort of chain that can become a huge national chain, like Bodyshop and Newlook did
Most of them are just like a million other sites.....well short of what I would consider a quality site... looking in the wrong places... are pretty low quality sites...I think you know why I do not ... You need to think about barriers to entry... websites are too varied to value
That was a joke. I've helped many people in WW and some of them have showed their gratitude for my several hours of help by sending me flowers or giving me the odd link.
I saw the smiley, but I thought it was worth a comment because the skills to do this are lacking.
I know a fair amount about accounts. You are plain wrong if you think that you think that you can deduct the value of your time, unless you actually pay yourself (not necessarily in cash, it could increase a debt to you for example, but something must go through the books). Of course that only applies if the business is incorporated, otherwise its profits are yours, and you cannot pay yourself so the question does not arise.
Now it is certainly right to deduct the value of your time, but that is very much a non-GAAP, non-IFRS measure.
Damn, if you're willing to pay 10x and willing to pay someone else for all the hard work you're going to be putting in to grow the business, you're every seller's dream
The main point is that what you are buying has a formula for growth that you cannot easily replicate. Why do you think Google bought Youtube? Big pharmaceutical companies keep buying small innovative ones?
Also you are double counting the value of your labour. If you deduct the value of it from the profits, and you are deducting it from the value of the growth!
Do you know what I hear? I can't, I can't, I can't. And, as someone who finds sites to buy on a regular basis, I have no problem with anyone who wants to opt themselves out.
That is why I think a lot of the best quality sites do not change hands that often!
It also means that if you had been offered stakes in some spectacularly successful businesses while they were still small, you would have turned them down. It does happen, like the man who got a big stake in Body Shop for financing their second shop. Now that is a bargain!
Now, it is fair enough for you to stick to certain types of sites because that is what is your business model - and I can see why it works for someone who know how to evaluate that kind of site. What I am saying is that there is a wide variety of sites out there that are worth more.
If someone offered me an years revenues for my site, me response would be that I might as well wait an year and collect those revenues, and still have the site at the end of it. So, I am never going to do business with you. That is fair enough. However, it does not mean that someone who bought the site off me at a higher valuation would get a bad deal: they are not going to find another investment that will give them a better return easily.
Valuation of any asset is complex and has a variety of changing contributing factors.
Using a simple multiple may work to an extent, but to truly realize a great investment, its a little more complex.
As I've said before, I believe several elements constitute the price. But if you had to choose any one single metric it's what the site is expected to earn in the near future that matters. Very often the best guide to this can be found in what the site earned in the recent past. That's the major usefulness of past earnings. And pretty much the only use. | https://www.webmasterworld.com/webmaster_business_issues/3426272.htm | CC-MAIN-2015-27 | refinedweb | 3,909 | 66.47 |
VEX constants for type limits and common math values277 3 1
- drichardson
- Member
- 27 posts
- Joined: May 2018
- Offline
Take a look at $HFS/houdini/vex/include/math.h" You can include math.h in vex like this:
You can also make you own definitions and include them when needed. Make a file “mystuff.h” and use #define to set your own constants like this
To use it in vex put #include <mystuff.h> in your vex code.
-b
Then those definitions will be available in your code.
#include <math.h>
etc.
f@debug = M_TOLERANCE
You can also make you own definitions and include them when needed. Make a file “mystuff.h” and use #define to set your own constants like this
, and place the file in YOURPREFSDIR/vex/include/mystuff.h.
#define VERYSMALLNUMBER 0.00000000000000001
To use it in vex put #include <mystuff.h> in your vex code.
-b
Edited by bonsak - March 30, 2019 05:38:03
- drichardson
- Member
- 27 posts
- Joined: May 2018
- Offline
bonsak
include math.h in vex like this
Perfect! I didn't realize you could include files in VEX because I was only reading the VEX language reference []. The documentation on includes is in the VCC compiler reference [].
In addition two the two include locations you mentioned ($HFS and YOURPREFSDIR), you can also place files in $JOB/vex/include.
I also noticed that I was able to use the defines from math.h without having to include it in my VEXpression in the Attribute Wrangler. I dove into the Attribute Wrangle node, and then right clicked on the Attribute VOP inside and selected VEX/VOP Options > View VEX Code and discovered math.h is already included. Here's the boilerplate:
// // VEX Code Generated by Houdini 17.5.173 // Date: Sun Mar 31 12:48:29 2019 // File: E:/code/HoudiniExamples/vex // Node: /obj/geo1/attribwrangle1_attribvop1_snippet1() { // The VEXpression parameter from the Attribute Wrangle node is inserted here! } cvex obj_geo1_attribwrangle1_attribvop1() { // Code produced by: snippet1 _obj_geo1_attribwrangle1_attribvop1_snippet1(); }
Cool! Didnt know math.h was included by default. Good to know.
-b
-b
- Quick Links
Search links
- Show recent posts
- Show unanswered posts | https://www.sidefx.com/forum/topic/62081/ | CC-MAIN-2019-43 | refinedweb | 354 | 68.87 |
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Hi, We've been investigating a performance issue that in the end turns out to be best fixed in glibc. The test case can best be described as a threaded application that has a pool of worker threads that, for a transaction, allocate a piece of memory (of around 1 to 8Mb, depending on the configuration), use that for the transaction and then free it again, and return to the pool. The performance issue arises because the glibc malloc allocator decides to use mmap() for these requests. In 2001, that might have been a fine choice.. but today it's not really. The kernel is required to zero the memory for new mmaps, and for the test application over half the total cpu time is spent just clearing these pages in the kernel. We've looked at if we could fix the kernel, but realistically there is no choice: zeroing memory is expensive. it eats cache and memory bandwidth, two precious commodities these days. When we force the use of the brk() allocator, performance is fine (eg over 2x that of the mmap-using scenario). This lead us to look at the threshold, which today is 128Kb. 128Kb was in 2001 empirically determined according to the comments in the source code. We believe there are some good reasons to reconsider this threshold: 1) The VA space layout on 32 bit linux has changed 2) Application and memory sizes have grown since 2001 In the old days (of 2.2, 2.4 and early 2.6 kernels), on 32 bit linux the mmap area started at 1Gb, and brk() was thus limited to some 800Mb in size. Getting all big allocations out of the way was important back then. Nowadays, the mmap area grows down from below the stack, so this artificial limit is gone. 2) speaks for itself basically; not just processor MHz grew, applications and workloads did too. No doubt a lot of it is programmer sloppyness, but it's also reality. Rather than just bumping the 128Kb to something bigger, we decided to make the value dynamic via a simple algorithm: within the 128Kb-32Mb range (64Mb on 64 bit), we keep track of the biggest free() that was done on an mmap'd piece of memory, and use that size as threshold from then on. The rationale is that you on the one hand want temporary memory to be brk() allocated, while for fragmentation reasons you want long term big allocations to be mmap allocated. The dynamic threshold via this simple algorithm tends to get all early allocations, and all allocations bigger than temporary ones as long term and thus mmap. The heuristic isn't perfect but in practice it seems to hold up pretty well, and solves the performance issue we've been investigating nicely. The one gotcha (and the biggest part of the patch) is that the dynamic behavior has to stop once the user manually sets these thresholds; we have to honor these obviously. This work was done by me and Val Henson <val_henson@linux.intel.com> Please consider applying. --- glibc-20060217T1609/malloc/malloc.c.org 2006-03-02 10:31:45.000000000 +0100 +++ glibc-20060217T1609/malloc/malloc.c 2006-03-02 10:39:05.000000000 +0100 @@ -1405,6 +1405,19 @@ int __posix_memalign(void **, size_ #endif /* + MMAP_THRESHOLD_MAX and _MIN are the bounds on the dynamically + adjusted MMAP_THRESHOLD. +*/ + +#ifndef DEFAULT_MMAP_THRESHOLD_MIN +#define DEFAULT_MMAP_THRESHOLD_MIN (128 * 1024) +#endif + +#ifndef DEFAULT_MMAP_THRESHOLD_MAX +#define DEFAULT_MMAP_THRESHOLD_MAX (8 * 1024 * 1024 * sizeof(long)) +#endif + +/* M_MMAP_THRESHOLD is the request size threshold for using mmap() to service a request. Requests of at least this size that cannot be allocated using already-existing space will be serviced via mmap. @@ -1443,12 +1456,63 @@ int __posix_memalign(void **, size_ "large" chunks, but the value of "large" varies across systems. The default is an empirically derived value that works well in most systems. + + + Update in 2006: + The above was written in 2001. Since then the world has changed a lot. + Memory got bigger. Applications got bigger. The virtual address space + layout in 32 bit linux changed. + + In the new situation, brk() and mmap space is shared and there are no + artificial limits on brk size imposed by the kernel. What is more, + applications have started using transient allocations larger than the + 128Kb as was imagined in 2001. + + The price for mmap is also high now; each time glibc mmaps from the + kernel, the kernel is forced to zero out the memory it gives to the + application. Zeroing memory is expensive and eats a lot of cache and + memory bandwidth. This has nothing to do with the efficiency of the + virtual memory system, by doing mmap the kernel just has no choice but + to zero. + + In 2001, the kernel had a maximum size for brk() which was about 800 + megabytes on 32 bit x86, at that point brk() would hit the first + mmaped shared libaries and couldn't expand anymore. With current 2.6 + kernels, the VA space layout is different and brk() and mmap + both can span the entire heap at will. + + Rather than using a static threshold for the brk/mmap tradeoff, + we are now using a simple dynamic one. The goal is still to avoid + fragmentation. The old goals we kept are + 1) try to get the long lived large allocations to use mmap() + 2) really large allocations should always use mmap() + and we're adding now: + 3) transient allocations should use brk() to avoid forcing the kernel + having to zero memory over and over again + + The implementation works with a sliding threshold, which is by default + limited to go between 128Kb and 32Mb (64Mb for 64 bitmachines) and starts + out at 128Kb as per the 2001 default. + + This allows us to satisfy requirement 1) under the assumption that long + lived allocations are made early in the process' lifespan, before it has + started doing dynamic allocations of the same size (which will + increase the threshold). + + The upperbound on the threshold satisfies requirement 2) + + The threshold goes up in value when the application frees memory that was + allocated with the mmap allocator. The idea is that once the application + starts freeing memory of a certain size, it's highly probable that this is + a size the application uses for transient allocations. This estimator + is there to satisfy the new third requirement. + */ #define M_MMAP_THRESHOLD -3 #ifndef DEFAULT_MMAP_THRESHOLD -#define DEFAULT_MMAP_THRESHOLD (128 * 1024) +#define DEFAULT_MMAP_THRESHOLD DEFAULT_MMAP_THRESHOLD_MIN #endif /* @@ -2237,6 +2301,10 @@ struct malloc_par { int n_mmaps; int n_mmaps_max; int max_n_mmaps; + /* the mmap_threshold is dynamic, until the user sets + it manually, at which point we need to disable any + dynamic behavior. */ + int no_dyn_threshold; /* Cache malloc_getpagesize */ unsigned int pagesize; @@ -3414,6 +3482,12 @@ public_fREe(Void_t* mem) #if HAVE_MMAP if (chunk_is_mmapped(p)) /* release mmapped memory. */ { + /* see if the dynamic brk/mmap threshold needs adjusting */ + if (!mp_.no_dyn_threshold && (p->size > mp_.mmap_threshold) && + (p->size <= DEFAULT_MMAP_THRESHOLD_MAX)) { + mp_.mmap_threshold = p->size; + mp_.trim_threshold = 2 * mp_.mmap_threshold; + } munmap_chunk(p); return; } @@ -5404,10 +5478,12 @@ int mALLOPt(param_number, value) int par case M_TRIM_THRESHOLD: mp_.trim_threshold = value; + mp_.no_dyn_threshold = 1; break; case M_TOP_PAD: mp_.top_pad = value; + mp_.no_dyn_threshold = 1; break; case M_MMAP_THRESHOLD: @@ -5418,6 +5494,7 @@ int mALLOPt(param_number, value) int par else #endif mp_.mmap_threshold = value; + mp_.no_dyn_threshold = 1; break; case M_MMAP_MAX: @@ -5427,6 +5504,7 @@ int mALLOPt(param_number, value) int par else #endif mp_.n_mmaps_max = value; + mp_.no_dyn_threshold = 1; break; case M_CHECK_ACTION: | http://sourceware.org/ml/libc-alpha/2006-03/msg00033.html | crawl-002 | refinedweb | 1,239 | 63.39 |
HTML5 Drag and Drop for Dart
Helper library to simplify HTML5 Drag and Drop in Dart.
Features
- Make any HTML Element
draggable.
- Create
dropzonesand connect them with
draggables.
- Rearrange elements with
sortables(similar to jQuery UI Sortable).
- Support for
touch eventson touch screen devices.
- Same functionality and API for IE9+, Firefox, Chrome and Safari.
- Uses fast native HTML5 Drag and Drop of the browser whenever possible.
For browsers that do not support some features, the behaviour is emulated. This is the case for IE9 and partly for IE10 (when custom drag images are used).
Demo
See HTML5 Drag and Drop in action (with code examples).
All examples are also available in the
example directory on GitHub.
Installation
Add Dependency
Add the folowing to your pubspec.yaml and run pub install
dependencies: html5_dnd: any
Import
Import the
html5_dnd library in your Dart code.
import 'package:html5_dnd/html5_dnd.dart'; // ...
Usage
See the demo page above or the
example directory to see some live examples
with code.
In general, to make drag and drop work, we will have to do two things:
- Create draggables by installing HTML elements in a
DraggableGroup.
- Create dropzones by installing HTML elements in a
DropzoneGroup.
To make elements sortable it's even easier: Create sortables by installing HTML
elements in a
SortableGroup. The
SortableGroup will make the installed
elements both into draggables and dropzones and thus creates sortable behaviour.
Disable Touch Support
There is a global property called
enableTouchEvents which is
true by
default. This means that touch events are automatically enabled on devices that
support it. If touch support should not be used even on touch devices, set this
flag to
false.
Draggables
Any HTML element can be made draggable. First we'll have to create a
DraggableGroup that manages draggable elements. The
DraggableGroup holds
all options for dragging and provides event streams we can listen to.
This is how a
DraggableGroup is created. With
install(...) or
installAll(...) elements are made draggable and registered in the group.
// Creating a DraggableGroup and installing some elements. DraggableGroup dragGroup = new DraggableGroup(); dragGroup.installAll(queryAll('.my-draggables'));
With
uninstall(...) or
uninstallAll(...) draggables can be removed from
the group and the draggable behaviour is uninstalled.
Draggable Options
The
DraggableGroup has three constructor options:
The
dragImageFunctionis used to provide a custom
DragImage. If no
dragImageFunctionis supplied, the drag image is created from the HTML element of the draggable.
If a
handleis provided, it is used as query String to find subelements of draggable elements. The drag is then restricted to those elements.
If
cancelis provided, it is used as query String to find a subelement of drabbable elements. The drag is then prevented on those elements. The default is 'input,textarea,button,select,option'.
// Create a custom drag image from a png. ImageElement png = new ImageElement(src: 'icons/smiley-happy.png'); DragImage img = new DragImage(png, 0, 0); // Always return the same DragImage here. We could also create a different image // for each draggable. DragImageFunction imageFunction = (Element draggable) { return img; }; // Create DraggableGroup with custom drag image and a handle. DraggableGroup dragGroupHandle = new DraggableGroup( dragImageFunction: imageFunction, handle: '.my-handle'); // Create DraggableGroup with a cancel query String. DraggableGroup dragGroupCancel = new DraggableGroup( cancel: 'textarea, button, .no-drag');
Other options of the
DraggableGroup:
DraggableGroup dragGroup = new DraggableGroup(); // CSS class set to html body during drag. dragGroup.dragOccurringClass = 'dnd-drag-occurring'; // CSS class set to the draggable element during drag. dragGroup.draggingClass = 'dnd-dragging'; // CSS class set to the dropzone when a draggable is dragged over it. dragGroup.overClass = 'dnd-over'; // Changes mouse cursor when this draggable is dragged over a draggable. dragGroup.dropEffect = DROP_EFFECT_COPY;
Draggable Events
We can listen to
dragStart,
drag, and
dragEnd events of a
DraggableGroup.
DraggableGroup dragGroup = new DraggableGroup(); dragGroup.onDragStart.listen((DraggableEvent event) => print('drag started')); dragGroup.onDrag.listen((DraggableEvent event) => print('dragging')); dragGroup.onDragEnd.listen((DraggableEvent event) => print('drag ended'));
Dropzones
Any HTML element can be made to a dropzone. Similar to how draggables are created, we create a dropzones:
// Creating a DropzoneGroup and installing some elements. DropzoneGroup dropGroup = new DropzoneGroup(); dropGroup.installAll(queryAll('.my-dropzones'));
Dropzone Options
The
DropzoneGroup has an option to specify which
DraggableGroups it accepts.
If no accept group is specified, the
DropzoneGroup will accept all draggables.
DraggableGroup dragGroup = new DraggableGroup(); // ... install some draggable elements ... DropzoneGroup dropGroup = new DropzoneGroup(); // ... install some dropzone elements ... // Make dropGroup only accept draggables from dragGroup. dropGroup.accept.add(dragGroup);
Dropzone Events
We can listen to
dragEnter,
dragOver,
dragLeave, and
drop events of a
DropzoneGroup.
DropzoneGroup dropGroup = new DropzoneGroup(); dropGroup.onDragEnter.listen((DropzoneEvent event) => print('drag entered')); dropGroup.onDragOver.listen((DropzoneEvent event) => print('dragging over')); dropGroup.onDragLeave.listen((DropzoneEvent event) => print('drag left')); dropGroup.onDrop.listen((DropzoneEvent event) => print('dropped inside'));
Sortables
For reordering of HTML elements we can use sortables.
// Creating a SortableGroup and installing some elements. SortableGroup sortGroup = new SortableGroup(); sortGroup.installAll(queryAll('.my-sortables'));
Note: All sortables are at the same time draggables and dropzones. This means
we can set all options of
DraggableGroup and
DropzoneGroup on sortables and
listen to all their events.
Sortable Options
In addition to the inherited
DraggableGroup and
DropzoneGroup options,
SortableGroup has the following options:
SortableGroup sortGroup = new SortableGroup(); // CSS class set to placeholder element. sortGroup.placeholderClass = 'dnd-placeholder'; // If true, forces the placeholder to have the computed size of the dragged element. sortGroup.forcePlaceholderSize = true; // Must be set to true for sortable grids. This ensures that different sized // items in grids are handled correctly. sortGroup.isGrid = false;
Sortable Events
Next to the inherited
DraggableGroup and
DropzoneGroup events
SortableGroup has one additional event:
SortableGroup sortGroup = new SortableGroup(); sortGroup.onSortUpdate.listen((SortableEvent event) => print('elements were sorted'));
Thanks and Contributions
I'd like to thank the people who kindly helped me with their answers or put some tutorial or code examples online. They've indirectly contributed to this project.
If you'd like to contribute, you're welcome to report issues or fork my repository on GitHub.
License
The MIT License (MIT) | https://www.dartdocs.org/documentation/html5_dnd/0.3.5/index.html | CC-MAIN-2017-13 | refinedweb | 985 | 51.85 |
Developer Start Guide¶
To get up to speed, you’ll need to
- Learn some non-basic Python to understand what’s going on in some of the trickier files (like tensor.py).
- Go through the NumPy documentation.
- Learn to write reStructuredText for epydoc and Sphinx.
Coding style¶
See the coding style guideline here.
We do not plan to change all existing code to follow this coding style, but as we modify the code, we update it accordingly.
Mailing list¶
See the Pylearn2 main page for the pylearn-dev, theano-buildbot and pylearn2-github mailing list. They are useful to Pylearn2 contributors.
Git config¶
git config --global user.email you@yourdomain.example.com git config --global user.name "Your Name Comes Here"
Typical development workflow¶
Clone your fork locally with
git clone git@github.com:your_github_login/pylearn2.git
and add a reference to the ‘central’ Pylearn2 repository with
git remote add central git://github.com/lisa-lab/pylearn2.git
When working on a new feature in your own fork, start from an up-to-date copy of the trunk:
git fetch central git checkout -b my_shiny_feature central/master
Once your code is ready for others to review, push your branch to your github fork:
git push -u origin my_shiny_feature
then go to your fork’s github page on the github website, select your feature branch and hit the “Pull Request” button in the top right corner. If you don’t get any feedback, bug us on the pylearn-dev mailing list.
When the your pull request have been merged, you can delete the branch from the github list of branch. That is usefull to don’t have too many that stay there!
git push origin :my_shiny_feature
You can keep you local repo up to date with central/master with those commands:
git checkout master git fetch central git merge central/master
If you want to fix a commit done in a pull request(i.e. fix small typo) to keep the history clean, you can do it like this:
git checkout branch git commit --amend git push -u origin my_shiny_feature:my_shiny_feature
Coding Style Auto Check¶
See the coding style guideline here. The principal thing to know is that we follow the pep8 coding style.
We use git hooks provided in the project pygithooks to validate that commits respect pep8. This happens when each user commits, not when we push/merge to the Pylearn2 repository. Github doesn’t allow us to have code executed when we push to the repository. So we ask all contributors to use those hooks.
For historic reason, we currently don’t have all files respecting pep8. We decided to fix everything incrementally. So not all files respect it now. So we strongly suggest that you use the “increment” pygithooks config option to have a good workflow. See the pygithooks main page for how to set it up for Pylearn2 and how to enable this option..
To checkout another user branch in his repo:
git remote add REPO_NAME HIS_REPO_PATH git checkout -b LOCAL_BRANCH_NAME REPO_NAME/REMOVE_BRANCH_NAME
You can find move information and tips in the numpy development page.
Details about
PYTHONPATH¶
$PYTHONPATH should contain a ”:”-separated list of paths, each of which
contains one or several Python packages, in the order in which you would like
Python to search for them. If a package has sub-packages of interest to you,
do not add them to
$PYTHONPATH: it is not portable, might shadow other
packages or short-circuit important things in its
__init__.
It is advisable to never import Pylearn2’s files from outside Pylearn2 itself
(this is good advice for Python packages in general). Use
from pylearn2 import
module instead of
import module.
$PYTHONPATH should only contain
paths to complete packages.
When you install a package, only the package name can be imported directly. If
you want a sub-package, you must import it from the main package. That’s how
it will work in 99.9% of installs because it is the default. Therefore, if you
stray from this practice, your code will not be portable. Also, some ways to
circumvent circular dependencies might make it so you have to import files in
a certain order, which is best handled by the package’s own
__init__.py.
More instructions¶
Once you have completed these steps, you should run the by running
nosetests from your checkout directory.
All tests should pass. If some test fails on your machine, you are encouraged to tell us what went wrong on the pylearn-dev mailing list.
To update your library to the latest revision, you should have a branch that tracks the main trunk. You can add one with:
git fetch central git branch trunk central/master
Once you have such a branch, in order to update it, do:
git checkout trunk git pull
Keep in mind that this branch should be “read-only”: if you want to patch Pylearn2, do it in another branch like described above.
Optional¶
You can instruct git to do color diff. For this, you need to add those lines in the file ~/.gitconfig
[color] branch = auto diff = auto interactive = auto status = auto
Nightly test¶
Each night we execute all the unit tests automatically. The result is sent by email to the theano-buildbot mailing list.
For more detail, see see.
To run all the tests with the same configuration as the buildbot, run this script:
pylearn2/misc/do_nightly_build
This function accepts arguments that it forward to nosetests. You can run only some tests or enable pdb by giving the equivalent nosetests parameters. | http://deeplearning.net/software/pylearn2/internal/dev_start_guide.html | CC-MAIN-2017-09 | refinedweb | 927 | 71.04 |
Using the Unity API from Other Threads
The!
There’s nothing we can do about Unity requiring us to access its API from only the main thread. Sure, there are exceptions to this rule such as using
Vector3 from any thread, but generally we can’t do anything useful like access the hierarchy outside the main thread.
So what we need is a way for a non-main (“worker”) thread to tell the main thread to access the Unity API. Then we need the main thread to communicate its results back to the worker thread.
Doing this is actually pretty simple. It allows us to set up an easily-extensible pattern that we can use to support any Unity API functionality we want to use from other threads. It’s type-safe, thread-safe, and easy-to-use. Let’s break it down into parts.
First, the commands. These are messages from worker threads to the main thread telling the main thread what the worker thread wants to do. A command is simply a function call that puts an instance of a
BaseCommand class into a
Queue<BaseCommand>. Later on the main thread will go through this queue of commands and execute them.
Now for the other side: results. This is how the main thread tells a worker thread that it’s done processing one of its commands. A
Result<T> simply wraps up the result value of type
T and uses an
AutoResetEvent to make the worker thread sleep until the result value is ready.
Of course we don’t want this system to raise the ire of the garbage collector, or we’ll be giving back some of the performance benefits of using threads in the first place. So it’s important to talk about how we’re not going to need to create and release these
BaseCommand and
Result objects.
BaseCommand objects are internally pooled by the
MainThreadQueue class that’s at the heart of this system. Worker threads don’t need to worry about it at all. In fact, the command class types are
private so they’re not even exposed outside the
MainThreadQueue class.
Result objects are passed in by the worker thread. They can be reused after the command finishes. Exactly how to reuse them is up to the worker thread because only it knows when its done using the result.
With this in mind, let’s look at a little script that uses this system. The script’s job is to create a bunch of game objects and set their positions. It does this with four threads that all share a
MainThreadQueue object. As you read, notice how there’s no manual thread synchronization with locks or mutexes. It’s not quite as direct as using the Unity API normally, but it’s pretty straightforward:
using System.Threading; using UnityEngine; class TestScript : MonoBehaviour { class ThreadInfo { public int ThreadId; public MainThreadQueue MainThreadQueue; public int CreateCount; } MainThreadQueue mainThreadQueue; void Start() { // Create the queue to do work on the main thread mainThreadQueue = new MainThreadQueue(); // Start some threads var threadStart = new ParameterizedThreadStart(OtherThread); for (int i = 0; i < 4; ++i) { var threadInfo = new ThreadInfo(); threadInfo.ThreadId = i; threadInfo.MainThreadQueue = mainThreadQueue; threadInfo.CreateCount = Random.Range(5, 20); var thread = new Thread(threadStart); thread.Start(threadInfo); } } void Update() { // Execute commands for up to 5 milliseconds mainThreadQueue.Execute(5); } static void OtherThread(object startParam) { // Create game objects and set their transforms' positions ThreadInfo threadInfo = (ThreadInfo)startParam; var mainThreadQueue = threadInfo.MainThreadQueue; var newGameObjectResult = new MainThreadQueue.Result<GameObject>(); var getTransformResult = new MainThreadQueue.Result<Transform>(); var setPositionResult = new MainThreadQueue.Result(); for (var i = 0; i < threadInfo.CreateCount; ++i) { // New game object var name = "From Thread " + threadInfo.ThreadId; mainThreadQueue.NewGameObject(name, newGameObjectResult); var go = newGameObjectResult.Value; // Get game object's transform mainThreadQueue.GetTransform(go, getTransformResult); var transform = getTransformResult.Value; // Set transform's position var pos = new Vector3(i, i, i); mainThreadQueue.SetPosition(transform, pos, setPositionResult); setPositionResult.Wait(); } } }
There are a couple more aspects of this to notice. First, the
Result objects are reused by the threads. There’s no need to pool them in this case and the worker thread didn’t even need to call
Reset because
MainThreadQueue takes care of that when you queue a command.
Also, when the script’s
Update runs
MainThreadQueue.Execute it can pass in a time limit. This allows us to put a cap on how much time we want to spend executing commands per frame. We can use this to easily spread out work across multiple frames because the remaining work simply sits in the queue to be executed on the next
Update of the script.
Now let’s look at the full source code for this system. It’s heavily-commented, but still under 500 lines in just one file. Feel free to drop it in your projects as it’s MIT-licensed.
using System.Collections.Generic; using System.Diagnostics; using System.Threading; using UnityEngine; using UnityEngine.Assertions; /// <summary> /// A queue of commands to execute on the main thread. Each command function (e.g. NewGameObject) /// takes a Result parameter that initially has no value but gets a value after the command /// executes. /// </summary> /// <author></author> /// <license>MIT</license> public class MainThreadQueue { /// <summary> /// Result of a queued command. Will have a Value when it IsReady. /// </summary> public class Result<T> { private T value; private bool hasValue; private AutoResetEvent readyEvent; public Result() { readyEvent = new AutoResetEvent(false); } /// <summary> /// Result value. Blocks until IsReady is true. /// </summary> public T Value { get { readyEvent.WaitOne(); return value; } } /// <summary> /// Check if the result value is ready. /// </summary> public bool IsReady { get { return hasValue; } } /// <summary> /// Set the result value and flag it as ready. /// This is meant to be called by MainThreadQueue only. /// </summary> /// <param name="value"> /// The result value /// </param> public void Ready(T value) { this.value = value; hasValue = true; readyEvent.Set(); } /// <summary> /// Reset the result so it can be used again. /// </summary> public void Reset() { value = default(T); hasValue = false; } } /// <summary> /// A result with no value (i.e. for a function returning "void") /// </summary> public class Result { private bool hasValue; private AutoResetEvent readyEvent; public Result() { readyEvent = new AutoResetEvent(false); } /// <summary> /// If the command has been executed /// </summary> public bool IsReady { get { return hasValue; } } /// <summary> /// Mark the result as ready to indicate that the command has been executed. /// </summary> public void Ready() { hasValue = true; readyEvent.Set(); } /// <summary> /// Blocks until IsReady is true /// </summary> public void Wait() { readyEvent.WaitOne(); } /// <summary> /// Reset the result so it can be used again. /// </summary> public void Reset() { hasValue = false; } } /// <summary> /// Types of commands /// </summary> private enum CommandType { /// <summary> /// Instantiate a new GameObject /// </summary> NewGameObject, /// <summary> /// Get a GameObject's transform /// </summary> GetTransform, /// <summary> /// Set a Transform's position /// </summary> SetPosition } /// <summary> /// Base class of all command types /// </summary> private abstract class BaseCommand { /// <summary> /// Type of the command /// </summary> public CommandType Type; } /// <summary> /// Command object for instantiating a GameObject /// </summary> private class NewGameObjectCommand : BaseCommand { /// <summary> /// Name of the GameObject /// </summary> public string Name; /// <summary> /// Result of the command: the newly-instantiated GameObject /// </summary> public Result<GameObject> Result; public NewGameObjectCommand() { Type = CommandType.NewGameObject; } } /// <summary> /// Command object for getting a GameObject's transform /// </summary> private class GetTransformCommand : BaseCommand { /// <summary> /// GameObject to get the Transform for /// </summary> public GameObject GameObject; /// <summary> /// Result of the command: the GameObject's transform. /// </summary> public Result<Transform> Result; public GetTransformCommand() { Type = CommandType.GetTransform; } } /// <summary> /// Set a Transform's position /// </summary> private class SetPositionCommand : BaseCommand { /// <summary> /// Transform to set the position of /// </summary> public Transform Transform; /// <summary> /// Position to set to the Transform /// </summary> public Vector3 Position; /// <summary> /// Result of the command: no value /// </summary> public Result Result; public SetPositionCommand() { Type = CommandType.SetPosition; } } // Pools of command objects used to avoid creating more than we need private Stack<NewGameObjectCommand> newGameObjectPool; private Stack<GetTransformCommand> getTransformPool; private Stack<SetPositionCommand> setPositionPool; // Queue of commands to execute private Queue<BaseCommand> commandQueue; // Stopwatch for limiting the time spent by Execute private Stopwatch executeLimitStopwatch; /// <summary> /// Create the queue. It initially has no commands. /// </summary> public MainThreadQueue() { newGameObjectPool = new Stack<NewGameObjectCommand>(); getTransformPool = new Stack<GetTransformCommand>(); setPositionPool = new Stack<SetPositionCommand>(); commandQueue = new Queue<BaseCommand>(); executeLimitStopwatch = new Stopwatch(); } /// <summary> /// Get an object from a pool or create a new one if none are available. /// This function is thread-safe. /// </summary> /// <returns> /// An object from the pool or a new instance /// </returns> /// <param name="pool"> /// Pool to get from /// </param> /// <typeparam name="T"> /// Type of pooled object /// </typeparam> private static T GetFromPool<T>(Stack<T> pool) where T : new() { lock (pool) { if (pool.Count > 0) { return pool.Pop(); } } return new T(); } /// <summary> /// Return an object to a pool. /// This function is thread-safe. /// </summary> /// <param name="pool"> /// Pool to return to /// </param> /// <param name="obj"> /// Object to return /// </param> /// <typeparam name="T"> /// Type of pooled object /// </typeparam> private static void ReturnToPool<T>(Stack<T> pool, T obj) { lock (pool) { pool.Push(obj); } } /// <summary> /// Queue a command. This function is thread-safe. /// </summary> /// <param name="cmd"> /// Command to queue /// </param> private void QueueCommand(BaseCommand cmd) { lock (commandQueue) { commandQueue.Enqueue(cmd); } } /// <summary> /// Queue a command to instantiate a GameObject /// </summary> /// <param name="name"> /// Name of the GameObject. Must not be null. /// </param> /// <param name="result"> /// Result to be filled in when the command executes. Must not be null. /// </param> public void NewGameObject( string name, Result<GameObject> result) { Assert.IsTrue(name != null); Assert.IsTrue(result != null); result.Reset(); NewGameObjectCommand cmd = GetFromPool(newGameObjectPool); cmd.Name = name; cmd.Result = result; QueueCommand(cmd); } /// <summary> /// Queue a command to get a GameObject's transform /// </summary> /// <param name="go"> /// GameObject to get the transform from. Must not be null. /// </param> /// <param name="result"> /// Result to be filled in when the command executes. Must not be null. /// </param> public void GetTransform( GameObject go, Result<Transform> result) { Assert.IsTrue(go != null); Assert.IsTrue(result != null); result.Reset(); GetTransformCommand cmd = GetFromPool(getTransformPool); cmd.GameObject = go; cmd.Result = result; QueueCommand(cmd); } /// <summary> /// Queue a command to set a Transform's position /// </summary> /// <param name="transform"> /// Transform to set the position of /// </param> /// <param name="position"> /// Position to set to the transform /// </param> /// <param name="result"> /// Result to be filled in when the command executes. Must not be null. /// </param> /// <param name="result"> /// Result to be filled in when the command executes. Must not be null. /// </param> public void SetPosition( Transform transform, Vector3 position, Result result) { Assert.IsTrue(transform != null); Assert.IsTrue(result != null); result.Reset(); SetPositionCommand cmd = GetFromPool(setPositionPool); cmd.Transform = transform; cmd.Position = position; cmd.Result = result; QueueCommand(cmd); } /// <summary> /// Execute commands until there are none left or a maximum time is used /// </summary> /// <param name="maxMilliseconds"> /// Maximum number of milliseconds to execute for. Must be positive. /// </param> public void Execute(int maxMilliseconds = int.MaxValue) { Assert.IsTrue(maxMilliseconds > 0); // Process commands until we run out of time executeLimitStopwatch.Reset(); executeLimitStopwatch.Start(); while (executeLimitStopwatch.ElapsedMilliseconds < maxMilliseconds) { // Get the next queued command, but stop if the queue is empty BaseCommand baseCmd; lock (commandQueue) { if (commandQueue.Count == 0) { break; } baseCmd = commandQueue.Dequeue(); } // Process the command. These steps are followed for each command: // 1. Extract the command's fields // 2. Reset the command's fields // 3. Do the work // 4. Return the command to its pool // 5. Make the result ready switch (baseCmd.Type) { case CommandType.NewGameObject: { // Extract the command's fields NewGameObjectCommand cmd = (NewGameObjectCommand)baseCmd; string name = cmd.Name; Result<GameObject> result = cmd.Result; // Reset the command's fields cmd.Name = null; cmd.Result = null; // Return the command to its pool ReturnToPool(newGameObjectPool, cmd); // Do the work GameObject go = new GameObject(name); // Make the result ready result.Ready(go); break; } case CommandType.GetTransform: { // Extract the command's fields GetTransformCommand cmd = (GetTransformCommand)baseCmd; GameObject go = cmd.GameObject; Result<Transform> result = cmd.Result; // Reset the command's fields cmd.GameObject = null; cmd.Result = null; // Return the command to its pool ReturnToPool(getTransformPool, cmd); // Do the work Transform transform = go.transform; // Make the result ready result.Ready(transform); break; } case CommandType.SetPosition: { // Extract the command's fields SetPositionCommand cmd = (SetPositionCommand)baseCmd; Transform transform = cmd.Transform; Vector3 position = cmd.Position; Result result = cmd.Result; // Reset the command's fields cmd.Transform = null; cmd.Position = Vector3.zero; cmd.Result = null; // Return the command to its pool ReturnToPool(setPositionPool, cmd); // Do the work transform.position = position; // Make the result ready result.Ready(); break; } } } } }
The above code is just a starting place for exposing the Unity API to worker threads. It only supports the three commands used in the test script: “new game object”, “get game object’s transform”, “set transform’s position”. To be useful in a real game, you’ll need to add many more commands. Thankfully, that’s easy!
First, add a new entry to the
CommandType enum for your new command. Then add a new class extending
BaseCommand. Use the other command classes as a pattern. Remember to make it
private and set
Type = CommandType.MyNewCommand in the constructor.
Next, add a new pool field and initialize it in the constructor. Just copy/paste from the others and change the command type.
Now add a new
MyCommand function to queue the command. You can copy/paste one of the others and change the specifics. It’s mostly just boilerplate code copying parameters into the fields of the command.
Finally, add a new
case CommandType.MyNewCommand to
Execute. Follow the five steps listed above the
switch or use the other cases as a pattern.
This is really just wrapper code, so it’s pretty mindless work to expose more Unity API functionality as commands and results. There’s no need to expose the whole Unity API, just the parts you need to use from other threads.
And that’s about all there is to the system. It should be easy to drop into projects and eliminate what seems to be the main reason that Unity programmers don’t use threads. So let’s go put those CPU cores to work!
#1 by pretender on July 3rd, 2017 · | Quote
Great article! Is this safe to be used on mobile devices?
#2 by jackson on July 3rd, 2017 · | Quote
I just put the sample from the article on an Android device to test. It seems to work just fine on mobile devices.
#3 by Theodoros Doukoulos on July 3rd, 2017 · | Quote
Hello there,
I am not an expert in threads but i am following you your webposts quite closely because i like the content.
So i shared your post on my facebook and i’ve got a comment from one of my friends as an enquiry of performance tests with this pattern.
Well the main context of his concern was the and if that is the case then the code would be slower if we would use them for each call on the main thread then the code would be slower than wiring it all up in the main thread instead.
Can you please provide an example with some performance tests on that matter? Thanks in advance.
#4 by jackson on July 3rd, 2017 · | Quote
Performance will depend a lot on your Unity API usage. If your code is 100% calls to the Unity API, then there’s no real point in using threads since you’re just adding overhead for the queue. But if your code is 1% calls to the Unity API then the other 99% will benefit from using more CPU cores. In that case the overhead of the queue will be dwarfed by the speed boost you get from the threads. Somewhere in between there’s an equilibrium point where it doesn’t matter.
Exactly how much of your code needs to not be using the Unity API in order for this to be an overall performance win will really depend on the specifics of your code. I can’t think of a good performance test that wouldn’t be terribly contrived, but let me know if you have any ideas for one. I’d venture to say that many games will benefit tremendously from multi-threading as usually there’s a lot of code to do game logic and then a little code to output the result via the Unity API.
#5 by Lauraaa on February 25th, 2018 · | Quote
Thank you for the detailed tutorial!
I used it to create new Texture2D in thread, works great! | https://jacksondunstan.com/articles/3930 | CC-MAIN-2019-09 | refinedweb | 2,694 | 58.48 |
On a character input in the first
scanf()
getchar()
your_choice
ch
#include <stdio.h>
void choice(int);
int main() {
char ch;
int random, your_choice;
do {
srand(time(NULL));
system("cls");
printf("** 0 is for Rock **\n");
printf("** 1 is for Scissors **\n");
printf("** 2 is for Lizard **\n");
printf("** 3 is for Paper **\n");
printf("** 4 is for Spock **\n");
printf("\nEnter your choice here:");
scanf("%d", &your_choice);
random = rand() % 5; //random number between 0 & 4
if ((your_choice >= 0) && (your_choice <= 4)) {
//choice printer omitted for this post
if ((random == ((your_choice + 1) % 5)) || (random == ((your_choice + 2) % 5)))
printf("\n\n... and you win!!!\n");
else if ((random == ((your_choice + 3) % 5)) || (random == ((your_choice + 4) % 5)))
printf("\n\n... and you lose!!!\n");
else if (random == your_choice)
printf("\n\nUnfortunately, it's a tie!\n");
} else
printf("\nWell, this is wrong! Try again with a number from 0 to 4!!\n");
printf("\nWould you like to play again? (Y/N)?: ");
scanf(" %c", &ch);
} while (ch == 'y' || ch == 'Y');
return 0;
}
If the user enters characters that cannot be converted to a number,
scanf("%d", &your_choice); returns 0 and
your_choice is left unmodified, so it is uninitialized. The behavior is undefined.
You should test for this and skip the offending input this way:
if (scanf("%d", &your_choice) != 1) { int c; /* read and ignore the rest of the line */ while ((c = getchar()) != EOF && c != '\n') continue; if (c == EOF) { /* premature end of file */ return 1; } your_choice = -1; }
Explanation:
scanf() returns the number of successful conversions. If the user types a number, it is converted and stored into
your_choice and
scanf() returns 1, if the user enters something that is not a number, such as
AA,
scanf() leaves the offending input in the standard input buffer and returns 0, finally if the end of file is reached (the user types ^Z enter in windows or ^D in unix),
scanf() returns
EOF.
if the input was not converted to a number, we enter the body of the
if statement: input is consumed one byte at a time with
getchar(), until either the end of file or a linefeed is read.
if
getchar() returned
EOF, we have read the entire input stream, no need to prompt the user for more input, you might want to output an error message before returning an error code.
otherwise, set
your_choice to
-1, an invalid value so the read of the code complains and prompts for further input.
Reading and discarding the offending input is necessary: if you do not do that, the next input statement
scanf(" %c", &ch); would read the first character of the offending input instead of waiting for user input in response to the
Would you like to play again? (Y/N)?: prompt. This is the explanation for the behavior you observe. | https://codedump.io/share/0WoSGtKMMrdZ/1/scanf-not-working-on-invalid-input | CC-MAIN-2018-09 | refinedweb | 464 | 66.07 |
This program shows a pacman like figure moving right across the screen using GArcs. I run my program and it doesnt even show the first PacMan garc that I have added. Any Help?
Heres my code:
/* * Name: Connor Moore * File: PacMan.java * Date: 3/24/13 * Purpose: The program displays a PacMan like figure moving * rightward across the canvas starting on the very left side. */ package mat2670; import acm.graphics.*; import acm.program.*; import java.awt.Color; public class PacMan extends GraphicsProgram { public static void main(String[] args) { (new PacMan()).start(); } public void run() { //Gets the window's width double windowWidth = getWidth(); //Initializes all the components to the GArc double r = 50; double x = 0; double y = getHeight() / 2; double width = 2 * r; double height = 2 * r; double start = 60; double sweep = 240; //displacement of x double dx = 120; //Creates and adds the first PacMan shape. GArc PacMan = new GArc(x, y, 2 * r, 2 * r, start, sweep); PacMan.setFilled(true); PacMan.setFillColor(Color.YELLOW); add(PacMan); int count = 0; while ((x + r) < windowWidth) { //takes 3 times for PacMans mouth to shut. //Starts open, shuts at 3, switch to opening again, etc if (count < 3 || count > 6) { start -= 20; sweep += 40; x += dx; } else if (count > 3 || count > 9) { start += 20; sweep -= 40; x += dx; } PacMan.move(x, y); pause(PAUSE_TIME); } } private static final int PAUSE_TIME = 5; } | http://www.javaprogrammingforums.com/whats-wrong-my-code/26254-garc-move-help.html | CC-MAIN-2013-48 | refinedweb | 227 | 69.31 |
Table of Contents
A name is either a simple name or a prefixed (or qualified) name. A simple name or an identifier will probably have the same syntax as an XML NCName - i.e. a XML name without a colon. A prefixed name has two parts: an optional prefix (which if not empty is a simple name), followed by a colon, followed by a local name (which is a simple name). (No whitespace or comments are allowed between the prefix and the colon, or between the colon and the local name.) A prefix is bound to a namespace using a (static) namespace declaration. A namespace in XML is just a globally unique string - a URI. However, we may associate more information with a namespace, such a default function definition. An expanded name is a local name and a namespace URI.
name ::= simple-name | qualified-name simple-name ::= NCName [[refer to XML standard]] qualified-name ::= namespace-prefix":"simple-name namespace-prefix ::= simple-name
All names must be statically bound, to detect typos. In some case there may be definitions for all the local names associated with a prefix.
There are no reserved names. Though there are predefined (builtin) names and prefixes, these can all be over-ridden. The lack of reserved names may complicate the language and/or parsing, however.
An application is the main syntactic building block Q2. A program is normally an application, which will contain nested applications. An application can be a function call, an object/element creation expression, or a special form.
application ::= name attribute* simple-expression*
expression := application | simple-expression simple-expression := literal | variable-reference | "(" expression ")"
A value is a sequence of zero or more items. Sequences are as in XQuery: They do not nest, and an item is in all respects equivalent to a sequence of that one item. (If you need nested sequences, use an element or an array.)
The concatenation operator takes two sequences and concatenates the result. XQuery uses the comma operator for this. An alternative to consider is to use juxtaposition for concatenation. (Snobol did this, as did APL for array literals.) This is appealing, but has various consequences.
Most programming languages distinguish between expressions and statements. Q2 does not: A statement is just an expression that returns a zero-item sequence. Even expression-oriented languages in the Lisp family use statements whose value is ignored. The use of sequences allows a powerful unification. A "block" of one or more statements that are evaluated sequentially becomes one or more expressions thata are combined by sequence concatenation. A loop that evaluates a statement until some condition is true becomes an expression that concatenates the results from each iteration of the loop. This is very powerful, as shown by the XQuery for form.
Sequences are normally accessed sequentially. For indexed access, convert the sequence to an array.
A sequence can be infinitely long. This corresponds to unbounded loops, or an input to a loop that gets terminated by some condition. The sequence of natural numbers is infinitely long and can be used as the loop control "while" loop.
An attribute consists of a name and a value. Attribute are normally written using an attribute-expression:
attribute-name: primary-expression
No space is allowed between the attribute-name and the colon, and there must either be whitespace between the colon and the expression, or the expression must start with a non-name character.
An attribute is a generalization of attributes in XML infosets, in that they can contain an arbitrary value (including possibly a sequence), while in XML the values can only be a string.
Attributes do not have identity, unlike in the XPath data-model. (We might go further and make attributes syntax only.)
An object consists of type, plus a sequence of zero or more public components, some of whome may be attributes. The non-attribute components of an object are the also called its children. This is similar to XML elements. In addition, an object may have private components (and maybe methods). Some objects are atomic; these have no public components. These include numbers and characters. (An object have have zero componets without neceaarily being atomic.)
There can be at most one attribute with a given (expanded) name. The order of attributes is not generally semantically significant, but the order is preserved, for use in printing and for use in operations that depend on "document order".
Object have identity. Unlike in XML, an object does not have a unique parent: An object can be the child of multiple other objects, and there is no link from a child to its parent. However, we will probably support "trails" or "paths" which are references to objects with the way we got to that object, and the parent of a trail is the trail with the final reference removed.
A constructor is a named function that creates an object. A constructor is also a type - the type or class of objects created by the constructor. Not all functions are constructors. However, merging the concepts of constructor and class is insufficient. A class, when used as a pattern, should be able to match any object that has maching attributes, children, and name/type/constructor.
struct name () attr-name:type ...
This declarares a type with name name, which must have the specified attributes. A value matches the type it it has all the named attributes, and the values of each attribute matches the specified attributes types. In addition, the value must has been contructed such that name is a member of the (magic) type attribute.
The object type may specify one or more ancestors:
struct name (base-name ...) attr-name:type ...
When viewed as a type, this additionally must match all the inherited types and their attributes. If an attr-name appears more than once in the inheritance hierarchy, that is equivalent to the intersection type. Question: should user be required to specify intersection type - i.e. as in co-variant attribute types?
Methods are parameterized attributes or attributes that are functions. First need to decide how functions are declared.
We will have some sub-typing mechanisms, where a constructor can inherit from or extend some other type.
struct name (super-type-names) attr-name:type[=default] ... let val = name attr-name: value ...
The syntax objects@/type evaluates objects, which must evaluate to a sequence of objects. Then concatenate all the children of each object that satisfy the give type.
For example:
((std:vector 2 3.4 5) (std:vector 0.5 1 1.5))*/std:integer ==> (2 5 1)
A function (method) definition has this syntax:
rule function-name parameter-list => body parameter-list ::= attribute-pattern* value-pattern*
At a call site the set of method definitions in the static scope are "merged", and the most specific matching function is applied.
A function takes as parameters a sequence of items. In XQuery in contrast, a parameter list is zero or more values, each of which can be a sequence. However, using a single sequence for all the parameters provides an element unification, and lets us use pattern matching to bind parameters.
A pattern can matched against a value. If it matches, a variable may be bound to the matched value. A simple pattern looks like:
name!type
If the value matched against this pattern has the specified type, then the pattern matches (succeeds), and the name gets bound to the value. The following example declares i to have the type Integer and value 10.
let i!Integer = 10
Both the name and the type are optional. The default for type is item which matches any single item. (I.e. not a sequence that is empty or has more than one value, nor an attribute. Nor a function??) The default for name is a unique compiler-generated name.
A sequence pattern can "take apart" a sequence:
pattern1 pattern2 ...
A tricky part is that a patterni might match zero or more than one item.
A "guarded pattern" adds a conditional Boolean expression:
pattern when condition
(We might use if or where or some other syntax instead of when.)
An optional pattern may have a default value:
pattern?=default-value
If pattern matches, then the ?=default-value is ignored. Otherwise, -'[]_ '[]
A "constructor pattern" can match against objects created using constructors.
complex re:!real im:!realSince a constructor "field" is an attribute, this is syntactic sugar for:
complex re:(re!real) im:(im!real)
A single * matches any item. (Should probably not match attributes.) A doubled ** matches any sequence.
A variable definition has the form:
let pattern = expresssion
This evaluates expresssion and matches it against the pattern. For example:
let i!integer = 2+3
A variable definition is a void expression - i.e. one that results in a sequence of zero values. Thus you can intermix definitions and other expressions:
let i!integer = 1 let j!integer = i+1 i+j
This is an expresson with two sub-expressions: The first two are the definitions, and the final is the i+j. The latter is a singleton integer, so the result of the combined expression is also a singleton integer.
The scope of the "variable" defined in a variable definition is undecided. Tentatively, assume it encompasses sub-sequent sub-expressions in the encloding compound expression.
An array is a mapping from a sequence of integers to a value. The number of integers used is the rank of the array. A vector is an array of rank one. A non-array item (such as a numbers) is viewed as an array of rank zero. An array is a single item.
There are primitives to convert a sequence to an array and vice versa. We might use [square brackets] to convert a sequence to a vector:
[ val1, val2, ... ]
The function size returns the dimensions of an array as a sequence of integers. The size of a scalar is the empty sequence. For example (size 3) => (), and (size [2 3 4]) => 3. In contrast length returns the number of iterms in a sequence.
As a general rules you use arrays for "random-access" indexing, while you use sequences for iteration and composition.
A string is a sequence of one or more characters. A string is not an array. A problem with this is that you cannot create a sequence of strings, since that would just be there concatenation. This may be a disadvantage, but one can always wrap sequences as arrays.
Because a string is a sequence, we can use the regular sequence-based patterns to match string. We do not need regexps. | http://www.gnu.org/software/kawa/q2/ | crawl-002 | refinedweb | 1,762 | 58.48 |
process_create - create a new process
#include <zircon/syscalls.h> zx_status_t zx_process_create(zx_handle_t job, const char* name, size_t name_size, uint32_t options, zx_handle_t* proc_handle, zx_handle_t* vmar_handle);
zx_process_create() creates a new process.
Upon success, handles for the new process and the root of its address space are returned. The thread will not start executing until
zx_process_start() is called.
name is silently truncated to a maximum of
ZX_MAX_NAME_LEN-1 characters.
When the last handle to a process is closed, the process is destroyed.
Process handles may be waited on and will assert the signal ZX_PROCESS_TERMINATED when the process exits.
job is the controlling job object for the new process, which will become a child of that job.
job must be of type ZX_OBJ_TYPE_JOB and have ZX_RIGHT_MANAGE_PROCESS.
On success,
zx_process_create() returns ZX_OK, a handle to the new process (via proc_handle), and a handle to the root of its address space (via vmar_handle). In the event of failure, a negative error value is returned.
ZX_ERR_BAD_HANDLE job is not a valid handle.
ZX_ERR_WRONG_TYPE job is not a job handle.
ZX_ERR_ACCESS_DENIED job does not have the ZX_RIGHT_WRITE right (only when not ZX_HANDLE_INVALID).
ZX_ERR_INVALID_ARGS name, proc_handle, or vmar_handle was an invalid pointer, or options was non-zero.
ZX_ERR_NO_MEMORY Failure due to lack of memory. There is no good way for userspace to handle this (unlikely) error. In a future build this error will no longer occur.
ZX_ERR_BAD_STATE The job object is in the dead state.
zx_handle_close()
zx_handle_duplicate()
zx_job_create()
zx_object_wait_async()
zx_object_wait_many()
zx_object_wait_one()
zx_process_start()
zx_task_kill()
zx_thread_create()
zx_thread_exit()
zx_thread_start() | https://fuchsia.googlesource.com/zircon/+/HEAD/docs/syscalls/process_create.md | CC-MAIN-2019-04 | refinedweb | 247 | 59.19 |
Physical units¶
Brian includes a system for physical units. The base units are defined by their
standard SI unit names:
amp/
ampere,
kilogram/
kilogramme,
second,
metre/
meter,
mole/
mol,
kelvin, and
candela.
In addition to these base units, Brian defines a set of derived units:
coulomb,
farad,
gram/
gramme,
hertz,
joule,
liter/
litre,
molar,
pascal,
ohm,
siemens,
volt,
watt,
together with prefixed versions (e.g.
msiemens = 0.001*siemens) using the
prefixes
p, n, u, m, k, M, G, T (two exceptions to this rule:
kilogram
is not defined with any additional prefixes, and
metre and
meter are
additionaly defined with the “centi” prefix, i.e.
cmetre/
cmeter).
For convenience, a couple of additional useful standard abbreviations such as
cm (instead of
cmetre/
cmeter),
nS (instead of
nsiemens),
ms (instead of
msecond),
Hz (instead of
hertz),
mM
(instead of
mmolar) are included. To avoid clashes with common variable
names, no one-letter abbreviations are provided (e.g. you can use
mV or
nS, but not
V or
S).
Using units¶
You can generate a physical quantity by multiplying a scalar or vector value with its physical unit:
>>> tau = 20*ms >>> print(tau) 20. ms >>> rates = [10, 20, 30]*Hz >>> print(rates) [ 10. 20. 30.] Hz
Brian will check the consistency of operations on units and raise an error for dimensionality mismatches:
>>> tau += 1 # ms? second? Traceback (most recent call last): ... DimensionMismatchError: Cannot calculate ... += 1, units do not match (units are second and 1). >>> 3*kgram + 3*amp Traceback (most recent call last): ... DimensionMismatchError: Cannot calculate 3. kg + 3. A, units do not match (units are kilogram and amp).
Most Brian functions will also complain about non-specified or incorrect units:
>>> G = NeuronGroup(10, 'dv/dt = -v/tau: volt', dt=0.5) Traceback (most recent call last): ... DimensionMismatchError: Function "__init__" expected a quantitity with unit second for argument "dt" but got 0.5 (unit is 1).
Numpy functions have been overwritten to correctly work with units (see the developer documentation for more details):
>>> print mean(rates) 20. Hz >>> print rates.repeat(2) [ 10. 10. 20. 20. 30. 30.] Hz
Removing units¶
There are various options to remove the units from a value (e.g. to use it with analysis functions that do not correctly work with units)
- Divide the value by its unit (most of the time the recommended option because it is clear about the scale)
- Transform it to a pure numpy array in the base unit by calling
asarray()(no copy) or
array(copy)
- Directly get the unitless value of a state variable by appending an underscore to the name
>>> tau/ms 20.0 >> asarray(rates) array([ 10., 20., 30.]) >>> G = NeuronGroup(5, 'dv/dt = -v/tau: volt') >>> print G.v_[:] [ 0., 0., 0., 0., 0.]
Temperatures¶
Brian only supports temperatures defined in °K, using the provided
kelvin
unit object. Other conventions such as °C, or °F are not compatible with Brian’s
unit system, because they cannot be expressed as a multiplicative scaling of the
SI base unit kelvin (their zero point is different). However, in biological
experiments and modeling, temperatures are typically reported in °C. How to use
such temperatures depends on whether they are used as temperature differences
or as absolute temperatures:
- temperature differences
- Their major use case is the correction of time constants for differences in temperatures based on the Q10 temperature coefficient. In this case, all temperatures can directly use
kelvineven though the temperatures are reported in Celsius, since temperature differences in Celsius and Kelvin are identical.
- absolute temperatures
Equations such as the Goldman–Hodgkin–Katz voltage equation have a factor that depends on the absolute temperature measured in Kelvin. To get this temperature from a temperature reported in °C, you can use the
zero_celsiusconstant from the
brian2.units.constantspackage (see below):
from brian2.units.constants import zero_celsius celsius_temp = 27 abs_temp = celsius_temp*kelvin + zero_celsius
Note
Earlier versions of Brian had a
celsius unit which was in fact
identical to
kelvin. While this gave the correct results for
temperature differences, it did not correctly work for absolute
temperatures. To avoid confusion and possible misinterpretation,
the
celsius unit has therefore been removed.
Constants¶
The
brian2.units.constants package provides a range of physical constants that
can be useful for detailed biological models. Brian provides the following
constants:
Note that these constants are not imported by default, you will have to
explicitly import them from
brian2.units.constants. During the import, you
can also give them shorter names using Python’s
from ... import ... as ...
syntax. For example, to calculate the \(\frac{RT}{F}\) factor that appears
in the Goldman–Hodgkin–Katz voltage equation
you can use:
from brian2 import * from brian2.units.constants import zero_celsius, gas_constant as R, faraday_constant as F celsius_temp = 27 T = celsius_temp*kelvin + zero_celsius factor = R*T/F
The following topics are not essential for beginners.
from brian2.units import * or import
everything
from brian2 import * which only imports the units mentioned in
the introductory paragraph (base units, derived units, and some standard
abbreviations).
In-place operations on quantities¶
In-place operations on quantity arrays change the underlying array, in the same way as for standard numpy arrays. This means, that any other variables referencing the same object will be affected as well:
>>> q = [1, 2] * mV >>> r = q >>> q += 1*mV >>> q array([ 2., 3.]) * mvolt >>> r array([ 2., 3.]) * mvolt
In contrast, scalar quantities will never change the underlying value but instead return a new value (in the same way as standard Python scalars):
>>> x = 1*mV >>> y = x >>> x *= 2 >>> x 2. * mvolt >>> y 1. * mvolt | http://brian2.readthedocs.io/en/2.0.2/user/units.html | CC-MAIN-2017-39 | refinedweb | 925 | 53.51 |
Partial function evaluator
ABANDONEDTurns out that if you just hammer on v8's bind() enough times it gives basically the same performance as manually inlining anyway :P So there is no point to using this library (ie the concept is flawed)
An optimizing implementation of partial function evaluation for JavaScript in JavaScript. Speed ups for free!
var specialize =//Create a test function{ifreturn a + 10return a - 10}//Specializevar specialized =//Prints out 11console//Prints out -11console
require("specialize")(func, arg1, arg2, ...)
Optimizes
func by binding static values from arguments to the scope of function.
funcis the function to bind
arg1, arg2, ...are the arguments for the function to specialize. Pass in
undefinedto skip specialization.
Returns A specialized version of
func
Because objects have properties that can change over time this code doesn't support inlining objects. If you do pass an object it gets bound dynamically, just like calling bind() and so there isn't much benefit to this library. It is also possible to inline functions which use variables from their outside scope. An exception is made for variables in the global namespace even though this is not technically correct (since they could be rebound in a shadowing closure). This is a conscientious choice, and if you want to overload globals then you should just use bind().
What it can do is inline constant values and some closures for well encapsulated functions, giving a substantial speed up with little loss of flexibility.
After experimenting a bit, I discovered that at least in v8 there is really no point to doing all this since the native implementation of bind already does partial evaulation for you. So there is really no need for this library :P.
(c) 2013 Mikola Lysenko. MIT License | https://www.npmjs.com/package/specialize | CC-MAIN-2017-09 | refinedweb | 290 | 53.61 |
#include <qstring.h>
QString uses implicit sharing, which makes it very efficient and easy to use.
In all of the QString methods that take
{const char *} parameters, the
{const char *} is interpreted as a classic C-style ''-terminated ASCII string. It is legal for the
{const char *} parameter to be 0. If the
{const char *} is not ''-terminated, the results are undefined. Functions that copy classic C strings into a QString will not copy the terminating '' character. The QChar array of the QString (as returned by unicode()) is generally not terminated by a ''. If you need to pass a QString to a function that requires a C ''-terminated string use latin1().
QString::null A QString that has not been assigned to anything is null, i.e. both the length and data pointer is 0. A QString that references the empty string ("", a single '' char) is empty. Both null and empty QStrings are legal parameters to the methods. Assigning
{(const char *) 0} to QString gives a null QString. For convenience,
QString::null is a null QString. When sorting, empty strings come first, followed by non-empty strings, followed by null strings. We recommend using
{if ( !str.isNull() )} to check for a non-null string rather than
{if ( !str )}; see operator!() for an explanation.
Note that if you find that you are mixing usage of QCString, QString, and QByteArray, this causes lots of unnecessary copying and might indicate that the true nature of the data you are dealing with is uncertain. If the data is ''-terminated 8-bit data, use QCString; if it is unterminated (i.e. contains ''s) 8-bit data, use QByteArray; if it is text, use QString.
Lists of strings are handled by the QStringList class. You can split a string into a list of strings using QStringList::split(), and join a list of strings into a single string with an optional separator using QStringList::join(). You can obtain a list of strings from a string list that contain a particular substring or that match a particular regex using QStringList::grep().
Note for C programmers
Due to C++'s type system and the fact that QString is implicitly shared, QStrings can be treated like ints or other simple base types. For example:
QString boolToString( bool b ) { QString result; if ( b ) result = "True"; else result = "False"; return result; }
The variable, result, is an auto variable allocated on the stack. When return is called, because we're returning by value, The copy constructor is called and a copy of the string is returned. (No actual copying takes place thanks to the implicit sharing, see below.)
Throughout Qt's source code you will encounter QString usages like this:
The 'copying' of input to output is almost as fast as copying a pointer because behind the scenes copying is achieved by incrementing a reference count. QString (like all Qt's implicitly shared classes) operates on a copy-on-write basis, only copying if an instance is actually changed.
If you wish to create a deep copy of a QString without losing any Unicode information then you should use QDeepCopy.
Definition at line 397 of file qstring.h. | http://qt-x11-free.sourcearchive.com/documentation/3.3.4/classQString.html | CC-MAIN-2018-13 | refinedweb | 524 | 72.46 |
To avoid importing loads of images into a Flash file and putting them on the timeline, i should like to simulate a movie clip by using an image-sequence, that is, by sequentially loading a series of still images, to& then, to "activate" - or view the sequence upon mouse-click, .
i figure this involves a loop and array, but i can't figure out the code.
Plz help.
thanks.
To sequentially load anything you need to use a functional loop (and the array and a counter variable). The basics of it are outlined below, but not in actual coding syntax, mainly descriptive...
var array:Array = new Array(your array of images); (if these are numerically named, then you may not need the array, just a count value so you know when to stop)
var count:int = 0;
function loadCurrentI mage(){
loader.load(array[count])
loader.contentLoaderInfo.addEventListener(COMPLETE,loadComplete)
}
function loadComplete(evt...){
// first process loaded image, then...
count++;
if(count < array.length){
loadCurrent(); // load the next image
} else {
// sequencial loading complete, carry on to next activity
}
}
hi Ned. Have built it up to proper syntax but getting an error:
This happens when the jpg images are in the same directory as the FLA file.
On the other hand, when the images are in a folder on their own, also in the same file as the FLA file, using the path in AS URL request (var req:URLRequest = new URLRequest("../twirl_test"+imageArray); i get a security error that i've experienced in the past as relating to improper path: SecurityError: Error #2000: No active security context.
I can't help with the 1067 error without seeing the code.
import flash.net.URLRequest;
import flash.display.Loader;
import flash.events.Event;
var count:int = 0;
var imageArray:Array=["image1.jpg","image2.jpg","image3.jpg","image4.jpg" ,"image5.jpg","image6.jpg","image7.jpg",
"image8.jpg","image9.jpg","image10.jpg"];
var url:String = "D:/flash cs5.5/image_sequence/twirl_test/"+imageArray;//TypeError: Error #2007: Parameter url must be non-null.
//var req:URLRequest = new URLRequest("../twirl_test/"+imageArray);
var request:URLRequest = new URLRequest(url);//SecurityError: Error #2000: No active security context.
var loader:Loader = new Loader();
//this is if u dont want the loader on the stage BETTER PRACTICE
function imageLoaded(event:Event):void
{
addChild(loader);
}
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, imageLoaded);
loader.load(request);
function loadCurrentImage(){
loader.load(imageArray[count])
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, loadComplete)
}
function loadComplete(event:Event){
// first process loaded image, then...
count++;
if(count < imageArray.length){
loadCurrentImage(); // load the next image
} else {
// sequencial loading complete, carry on to next activity
}
}
var url:String = "D:/flash cs5.5/image_sequence/twirl_test/"+imageArray;
That array being added to the end isn't going to do much for you. I am guess you really just wanted one element of the array, not the array itself.
one element added at a time.
any help, from anyone, would be appreciated.
Ned, i am wondering if this - adding images sequentially to have a moving-image effect may be done using indexing - by adding one image to one loader on one frame? if so, can u help with the code for index? i cant remember how it's done. thanks for your help.
If you only want one element, then only specify one element. Using "imageArray" like you show is attempting to assign an array to a string. One element involves specifying an index value... imageArray[index]
Ned, sorry, this is too cryptic for me to follow, i suppose i need to see examples of code to know what u mean.
thanks anyway.
best.
YOu've already provided the example, you just need to realize what you have written versus what you want to write.
var url:String = "D:/flash cs5.5/image_sequence/twirl_test/"+imageArray;
What do you imagine that "imageArray" is doing at the end of your line of code there?
i imagine that once the path is established (the string), imageArray is the variable specified for it. The array - imageArray, is, i am imagining, the still images inside the folder thats in the url string, and therefor, in adding imageArray, it's like referring to the group of images versus one specific image, as such "D:/flash cs5.5/image_sequence/twirl_test/image1.jpg
so instead of getting image1.jpg, i am trying to tell Flash to get the images in the array, in accordance with their numeric order.
that's what i imagine.
If that is what you think you want to do, then that is the problem. You only want to ask for one image at a time. By appending the entire array to the url, your attempting to add apples (an array) to oranges (a string)... they are two different classes of objects. You want to add strings to strings... and your array contains individual strings.
Ned. i simply do not know how to load a sequential array of images, so if you know how to and would like to share the code. Please do.
thanks. | http://forums.adobe.com/message/4497493 | CC-MAIN-2013-48 | refinedweb | 831 | 57.67 |
Puning... Back where I started with unions
We use unions to convert between int32_t and float. This is done for some canbus code. I had read there are issues with this and researched it. I found that despite it being recommended against most modern C++ compilers support this. So I am really confused. The recommended way of using reinterpret_cast ends up working with raw pointers when converting to byte arrays. The memcpy way seems to be a bit cleaner, but you have to copy the data. I then did an experiment and I get zero warnings from clang or the compiler (I am sure if I turned something on it might complain) with default Qt settings for gcc. It seems to me the cleanest way to do this is still unions.
{ // puning qInfo() << "Puning:"; using namespace std; union { int64_t i; int8_t b[8]; } Pun; qInfo() << "union Puning:"; Pun.i = 0x0102030405060708; string tmp; for_each(begin(Pun.b), end(Pun.b), [&](int8_t bv){ tmp += to_string(bv) + " "; }); qInfo() << tmp.data(); qInfo() << "reinterpret_cast puning:"; int64_t i = 0x0102030405060708; int64_t *pi = &i; int8_t* pb; pb = reinterpret_cast<int8_t*>(pi); tmp.clear(); for_each(pb, pb+8, [&](int8_t bv){ tmp += to_string(bv) + " "; }); qInfo() << tmp.data(); qInfo() << "memcpy puning:"; int8_t ab[8]; memcpy(ab, &i, sizeof(ab)); tmp.clear(); for_each(begin(ab), end(ab), [&](int8_t bv){ tmp += to_string(bv) + " "; }); qInfo() << tmp.data(); }
So I just am back where I started. As long as the compiler supports this gcc/mingw I am just not inclined to change. Yes, I read the lengthy discussions on SO and elsewhere. I just don't see why this cannot be part of standard C++. Are the compiler makers just rebelling against the standard? I have C++17 turned on BTW.
- kshegunov Moderators last edited by
@fcarney said in Puning... Back where I started with unions:
Yes, I read the lengthy discussions on SO and elsewhere. I just don't see why this cannot be part of standard C++.
It simply isn't, it's stated as undefined behaviour (in C++11); however all the compilers I've worked with just comply with the C99 standard on that particular topic (which states it's valid to read and write different fields of a union).
- aha_1980 Lifetime Qt Champion last edited by
@fcarney just to mention the obvious:
memcpyon
uint32_tand probably
uint64_tist most likely only load and store instructions, so the overhead is minimal. Esp. on I/O bound data transfer like CAN bus.
Btw., do you use QtCanBus for that?
Regards
I think we use some custom library for canbus. I am not sure where it is from. I was not involved in developing that piece.
- Kent-Dorfman last edited by
Old thread and I should know better than to chime in, but everyone is entitled to my opinion. Also, I had this very issue posed to me the other day in our embedded domain.
If you are dealing with existing CAN devices that expect network transport of IEEE floating point numbers then you have to go with what is expected and my comments are moot. However, if you have control over the endpoints in your networking of devices then you should NOT transport floating point numbers as such across a network. AUTOSAR, MISRA, and JPL coding standards all recognize why this is a bad idea.
Our engineer is designing a microcontroller based controller and he did the union/float thing to represent data being exchanged. I quickly got him on-board to use scaled integers instead, which are more the standard in the automotive CAN arena. There are some legacy devices that use fixed precision ASCII representations of floating point numbers, such as NMEA/GPS, so that is also an option where bandwidth is not a concern.
Some points to consider regarding transport of native floating point numbers:
- different or adhoc endian format of devices
- devices that require native types to start on processor word sized boundaries
- devices that emulated floating point operations (no FPU)
Anyway, just something to think about. | https://forum.qt.io/topic/109147/puning-back-where-i-started-with-unions | CC-MAIN-2022-33 | refinedweb | 666 | 64.41 |
!
I use CUDA C++ and I design customized math routines for DPD simulations using hand-coded PTX assembly. The template metaprogramming feature of CUDA C++ also turns out to be handy for writing concise and efficient codes.
Streams, zero-copy memory, texture objects, PTX (parallel thread execution) assembly, warp-level vote/shuffle, template programming.
C++11. The benefit is two-fold:
Figures 2 and 3 were from a system of vesicles spontaneously assembled from amphiphilic polymers in aqueous solution. The result, together with the USERMESO code and algorithm, is published in: Tang, Yu-Hang, and George Em Karniadakis. “Accelerating dissipative particle dynamics simulations on GPUs: Algorithms, Numerics and Applications.” Computer Physics Communications 185.11 (2014): 2809-2822.
In USERMESO I invented a warp-synchronous neighbor list building algorithm that allows the neighbor list for a particle to be constructed deterministically in parallel by all the threads within a warp without using any atomic operations. This actually makes neighbor searching much faster than evaluating pairwise forces, while traditionally the searching takes longer time. For details and visualization of the algorithms you can check out my slides from the 2014 GPU Technology Conference.
And, yes our code USERMESO is open source; you can find it on my wiki page.
At the system level I look forward to GPUs that are more tightly coupled to other system parts like CPUs, RAM, interconnects and drives. In terms of the CUDA architecture I think configurable warp size and a faster and bigger non-coherent cache would benefit applications in both my area of interest and many other algorithms.
Download CUDA 7 today! If you have tried it, please comment below and let us know your thoughts.]]>.
As a sponsor of the LPIRC, NVIDIA is offering a free Jetson TK1 developer kit to participating teams, and each winner will receive an NVIDIA GPU. If your team would like to use a Jetson TK1 for the LPIRC, fill out this application form. NVIDIA will review proposals and provide TK1 DevKits to selected applicants..]]>
Six.
Two mentors from Cray Inc.: Aaron Vose and John Levesque. Aaron and John provided strong technical support to boost the performance of a GPU-enabled NekCEM version.).
Want to learn more? Check out these related sessions from GTC 2015..]]>
The cuDNN library team is excited to announce the second version of cuDNN, NVIDIA’s library of GPU-accelerated primitives for deep neural networks (DNNs). We are proud that the cuDNN library has seen broad adoption by the deep learning research community and is now integrated into major deep learning toolkits such as CAFFE, Theano and Torch. While cuDNN was conceived with developers of deep learning toolkits and systems in mind, this release is all about features and performance for the deep learning practitioner. Before we get into those details though, let’s provide some context.
Data science and machine learning have been growing rapidly in importance in recent years, along with the volume of “big data”. Machine learning provides techniques for developing systems that can automatically recognize, categorize, locate or filter the torrent of big data that flows endlessly into corporate servers (and our email inboxes). Deep neural networks (DNNs) have become an especially successful and popular technique, because DNNs are relatively straightforward to implement and scale well—the more data you throw at them the better they perform. Most importantly, DNNs are now established as the most accurate technique across a range of problems, including image classification, object detection, and text and speech recognition. In fact, research teams from Microsoft, Google and Baidu have recently shown DNNs that perform better on an image recognition task than a trained human observer!
Deep learning and machine learning have been popular topics on Parallel Forall recently, so here are some pointers to excellent recent posts for more information. The original cuDNN announcement post provides an introduction to machine learning, deep learning and cuDNN. There are excellent posts on using cuDNN with Caffe for computer vision, with Torch for natural language understanding, on how Baidu uses cuDNN for speech recognition, and on embedded deep learning on Jetson TK1. There is also a recent post about BIDMach, an accelerated framework for machine learning techniques that are not neural network-based (SVMs, K-means, linear regression and so on).
The primary goal of cuDNN v2 is to improve performance and provide the fastest possible routines for training (and deploying) deep neural networks for practitioners. This release significantly improves the performance of many routines, especially convolutions. In Figure 1, you can see that cuDNN v2 is nearly 20 times faster than a modern CPU at training large deep neural networks! Figure 1 compares speedup (relative to Caffe running on a 16-core Intel Haswell CPU) on three well-known neural network architectures: Alexnet, Caffenet and GoogLeNet. The grey bar shows the speedup of the native (legacy) Caffe GPU implementation, and the green bar shows the speedup obtained with cuDNN v2. Note that the speedup obtained with cuDNN v2 is now 80% higher than with the legacy Caffe GPU implementation.
cuDNN v2 now allows precise control over the balance between performance and memory footprint. Specifically, cuDNN allows an application to explicitly select one of four algorithms for forward convolution, or to specify a strategy by which the library should automatically select the best algorithm. Available strategies include “prefer fastest” and “use no additional working space”. The four forward convolution algorithms are
IMPLICIT_GEMM,
IMPLICIT_PRECOMP_GEMM,
GEMM and
DIRECT.
IMPLICIT_GEMM is the algorithm used in cuDNN v1. It is an in-place computation, and the only algorithm that supports all input sizes and configurations while using no extra working space. If your goal is to fit the largest possible neural network model into the memory of your GPU this is the recommended option.
The
IMPLICIT_PRECOMP_GEMM algorithm is a modification of the
IMPLICIT_GEMM approach, which uses a small amount of working space (see the Release Notes for details on how much) to achieve significantly higher performance than the original
IMPLICIT_GEMM for many use cases.
The GEMM algorithm is an “
im2col” approach, which explicitly expands the input data in memory and then uses a pure matrix multiplication. This algorithm requires significant working space, but in some cases it is the fastest approach. If you tell cuDNN to “prefer fastest”, it will sometimes choose this approach. You can use the
SPECIFY_WORKSPACE_LIMIT instead of
PREFER_FASTEST to ensure that the algorithm cuDNN chooses will not require more than a given amount of working space.
The
DIRECT option is currently not implemented, so it is really just a placeholder. In a future version of cuDNN this will specify the usage of a direct convolution implementation. We will have guidelines on how this approach compares to the others when it is made available.
Besides performance, there are other new features and capabilities in cuDNN v2 aimed at helping deep learning practitioners get the most out of their systems as easily as possible.
The cuDNN interface has been generalized to support data sets with other than two spatial dimensions (for example, 1D and 3D data). In fact, cuDNN now allows arbitrary N-dimensional tensors. This is a forward-looking change; most routines remain limited to two spatial dimensions. As a beta feature in this release, there is now support for 3D datasets (see the Release Notes for details). The cuDNN team is looking for community feedback on the importance of higher dimensional support.
Other new features include OS X support, zero-padding of borders in pooling routines (similar to what was already provided for convolutions), parameter scaling and improved support for arbitrary strides. A number of issues identified in cuDNN v1 have been resolved. cuDNN v2 will support the forthcoming Tegra X1 processor via PTX JIT compilation as well. Please see the cuDNN Release Notes for full details on all of these important developments!
Several of the improvements described above required changes to the cuDNN API. Therefore, cuDNN v2 is not a drop-in version upgrade. Applications previously using cuDNN v1 are likely to need minor changes for API compatibility with cuDNN v2. Note that the
Im2Col function is exposed as a public function in cuDNN v2, but it is intended for internal use only, and it will likely be removed from the public API in the next version.
cuDNN is still less than one year old. We expect cuDNN to mature rapidly, making API changes rare in the future. The cuDNN library team genuinely appreciates all feedback from the deep learning community, and carefully considers any API change.
cuDNN is free for anyone to use for any purpose: academic, research or commercial. Just sign up for a registered CUDA developer account. Once your account is activated, log in and you will see a link to the cuDNN download page. You will likely want to start by reading the included User Guide. Get started with cuDNN today!]]>
CUDA 7 adds C++11 feature support to nvcc, the CUDA C++ compiler. This means that you can use C++11 features not only in your host code compiled with
nvcc, but also in device code. In my post “The Power of C++11 in CUDA 7” I covered some of the major new features of C++11, such as lambda functions, range-based for loops, and automatic type deduction (
auto). In this post, I’ll cover variadic templates.
There are times when you need to write functions that take a variable number of arguments: variadic functions. To do this in a typesafe manner for polymorphic functions, you really need to take a variable number of types in a template. Before C++11, the only way to write variadic functions was with the ellipsis (
...) syntax and the
va_* facilities. These facilities did not enable type safety and can be difficult to use.
As an example, let’s say we want to abstract the launching of GPU kernels. In my case, I want to provide simpler launch semantics in the Hemi library. There are many cases where you don’t care to specify the number and size of thread blocks—you just want to run a kernel with “enough” threads to fully utilize the GPU, or to cover your data size. In that case we can let the library decide how to launch the kernel, simplifying our code. But to launch arbitrary kernels, we have to support arbitrary type signatures. Well, we can do that like this:
template
void cudaLaunch(const ExecutionPolicy &p, void(*f)(Arguments...), Arguments... args);
Here,
Arguments... is a “type template parameter pack”. We can use it to refer to the type signature of our kernel function pointer
f, and to the arguments of
cudaLaunch. To do the same thing before C++11 (and CUDA 7) required providing multiple implementations of
cudaLaunch, one for each number of arguments we wanted to support. That meant you had to limit the maximum number of arguments allowed, as well as the amount of code you had to maintain. In my experience this was prone to bugs. Here’s the implementation of
cudaLaunch.
// Generic simplified kernel launcher // configureGrid uses the CUDA Occupancy API to choose grid/block dimensions template
void cudaLaunch(const ExecutionPolicy &policy, void (*f)(Arguments...), Arguments... args) { ExecutionPolicy p = policy; checkCuda(configureGrid(p, f)); f<< >>(args...); } // and a wrapper for default policy -- i.e. automatic execution configuration template void cudaLaunch(void(*f)(Arguments... args), Arguments... args) { cudaLaunch(ExecutionPolicy(), f, args...); }
Here you can see how we access the types of the arguments (
Arguments...) in the definition our variadic template function, in order to specify the type signature of the kernel function pointer
*f. Inside the function, we unpack the parameters using
args... and pass them to our kernel function when we launch it. C++11 also lets you query the number of parameters in a pack using
sizeof...().
Using
hemi::cudaLaunch, I can launch any
__global__ kernel, regardless of how many parameters it has, like this (here I’m launching my
xyzw_frequency kernel from my post The Power of C++11 in CUDA 7.
hemi::cudaLaunch(xyzw_frequency, count, text, int n);
Here we leave the launch configuration up to the runtime, and if we write our kernel in a portable way, this code can be made fully portable. This simplified launch code is currently available in a development branch of Hemi, which you can find on Github.
Of course, you can also define kernel functions and
__device__ functions with variadic arguments. I’ll finish up with a little program that demonstrates a few things. The
__global__ function
Kernel is a variadic template function which just forwards its parameter pack to the function
adder, which is where the really interesting use of variadic templates happens. (I borrowed the
adder example from an excellent post on variadic templates by Eli Bendersky.)
adder demonstrates how a variadic parameter pack can be unpacked recursively to operate on each parameter in turn. Note that to terminate the recursion we define the “base case” function
template , so that when the parameter pack is just a single parameter it just returns its value. The second
adder function unpacks one argument at a time because it is defined to take one parameter and then a parameter pack. Clever trick, and since all the recursion happens at compile time, the resulting code is very efficient.
We define a utility template function
print_it with various specializations that print the type of an argument and its value. We launch the kernel with four different lists of arguments. Each time, we vary the type of the first argument to demonstrate how our variadic adder can handle multiple types, and the output has a different type each time. Note another C++11 feature is used here:
static_assert and type traits. Our
adder only works with integral and floating point types, so we check the types at compile time using
static_assert to check if an arithmetic type is used. This allows us to print a custom error message at compile time when the function is misused.
#include
#include template __host__ __device__ T adder(T v) { return v; } template __host__ __device__ T adder(T first, Args... args) { static_assert(std::is_arithmetic ::value, "Only arithmetic types supported"); return first + adder(args...); } template __host__ __device__ void print_it(T x) { printf("Unsupported type\n"); } template<> __host__ __device__ void print_it(int x) { printf("int %d\n", x); } template<> __host__ __device__ void print_it(long int x) { printf("long int %ld\n", x); } template<> __host__ __device__ void print_it(float x) { printf("float %f\n", x); } template<> __host__ __device__ void print_it(double x) { printf("double %lf\n", x); } template __global__ void Kernel(Arguments... args) { auto sum = adder(args...); print_it(sum); } struct { int x; } s; int main(void) { Kernel<<<1, 1>>>(1, 2.0f, 3.0, 4, 5.0); // "int 15" Kernel<<<1, 1>>>(1l, 2.0f, 3.0, 4, 5.0); // "long int 15" Kernel<<<1, 1>>>(1.0f, 2.0f, 3.0, 4, 5.0); // "float 15.000000" Kernel<<<1, 1>>>(1.0, 2.0f, 3.0, 4, 5.0); // "double 15.000000" // Kernel<<<1, 1>>>("1.0", 2.0f, 3.0, 4, 5.0); // static assert! cudaDeviceReset(); // to ensure device print happens before exit return 0; }
You can compile this code with
nvcc --std=c++11 variadic.cu -o variadic.
Note that in CUDA 7, A variadic
__global__ function template has the following (documented) restrictions:
In practice I don’t find these limitations too constraining.
The CUDA Toolkit version 7 is available now, so download it today and try out the C++11 support and other new features.]]>
Today I’m excited to announce the official release of CUDA 7, the latest release of the popular CUDA Toolkit. Download the CUDA Toolkit version 7 now from CUDA Zone!
CUDA 7 has a huge number of improvements and new features, including C++11 support, the new cuSOLVER library, and support for Runtime Compilation. In a previous post I told you about the features of CUDA 7, so I won’t repeat myself here. Instead, I wanted to take a deeper look at C++11 support in device code.
CUDA 7 adds C++11 feature support to nvcc, the CUDA C++ compiler. This means that you can use C++11 features not only in your host code compiled with
nvcc, but also in device code. New C++ language features include
auto, lambda functions, variadic templates,
static_assert, rvalue references, range-based for loops, and more. To enable C++11 support, pass the flag
--std=c++11 to
nvcc (this option is not required for Microsoft Visual Studio).
In my earlier CUDA 7 feature overview post, I presented a small example to show some C++11 features. Let’s dive into a somewhat expanded example to show the power of C++11 for CUDA programmers. This example will proceed top-down, covering a couple of layers of abstraction that allow us to write concise, reusable C++ code for the GPU, all enabled by C++11. The complete example is available on Github.
Let’s say we have a very specific (albeit contrived) goal: count the number of characters from a certain set within a text. (In parallel, of course!) Here’s a simple CUDA C++11 kernel that abstracts the mechanics of this a bit.
__global__ void xyzw_frequency(int *count, char *text, int n) { const char letters[] { 'x','y','z','w' }; count_if(count, text, n, [&](char c) { for (const auto x : letters) if (c == x) return true; return false; }); }
This code puts most of the algorithmic implementation inside of the
count_if function. Let’s dig deeper.
xyzw_frequency() uses an initializer list to initialize the
letters array to four characters
x,
y,
z, and
w. It then calls a function,
count_if, which is a generic algorithm which increments a counter for each element in its input for which the specified predicate evaluates to
true. Here’s the interface of
count_if; we’ll look at the implementation shortly.
template
__device__ void count_if(int *count, T *data, int n, Predicate p);
The last argument is a predicate function object that
count_if calls for each element of the
data input array. You can see in
xyzw_frequency that we use a special syntax for this function object:
[&](char c) { ... }. That
[](){} syntax indicates a lambda function definition. This definition constructs a “closure”: an unnamed function object capable of capturing variables in scope.
C++11 lambdas are really handy for cases where you have a simple computation that you want to use as an operator in a generic algorithm, like our
count_if. As Herb Sutter says,
Lambdas are a game-changer and will frequently change the way you write code to make it more elegant and faster.
By the way, I recommend you check out Sutter’s brief “Elements of Modern C++ Style” for a brief guide to using new C++ features.
Let’s look at that Lambda again:
[&](char c) { for (const auto x : letters) if (c == x) return true; return false; }
This defines an anonymous function object that has an
operator() method that takes one argument,
char c. But you can see that it also accesses
letters. It does so by “capturing” variables in the enclosing scope, in this case by reference, as the
[&] capture list specifies. This gives us a way to define functions inline with the code that calls them, and access local variables without having to pass every variable to the function. That’s powerful.
Inside our lambda function we use two more C++11 features. The
auto specifier, and a range-based for loop.
auto specifies that the compiler should deduce the type of the declared variable from its initializer.
auto lets you avoid specifying type names that the compiler already knows, but more importantly, sometimes it lets you declare variables of unknown or “unutterable” types (like the type of many lambdas). You’ll find yourself using
auto all the time.
A range-based for loop simply executes a loop over a range. In our case, the loop
for (const auto x : letters) is equivalent to this loop:
for (auto x = std::begin(letters); x != std::end(letters); x++)
Range-based for loops can be used to iterate over arrays of known size, as in our example (
char letters[] { 'x', 'y', 'z', w'};), or over any object that defines
begin() and
end() member functions. I think the range-based for loop is much clearer in this case, and I’ll come back to how we can use it for something more specific to CUDA C++.
Now we need to define our
count_if function. This function needs to count the elements of an input array for which a predicate returns true. And of course, we want to do this in parallel! Here’s a possible implementation:
template
__device__ void count_if(int *count, T *data, int n, Predicate p) { for (int i = blockDim.x * blockIdx.x + threadIdx.x; i < n; i += gridDim.x * blockDim.x) { if (p(data[i])) atomicAdd(count, 1); } }
If you are an avid CUDA C++ programmer, or an avid reader of this blog, you probably recognize the idiom used in this loop. We call it a “grid-stride loop”, and I’ve written about it before on Parallel Forall. The thing to know about grid-stride loops is that they let you decouple the size of your CUDA grid from the data size it is processing, resulting in less coupling between your host and device code. It also has portability and debugging benefits.
But the downside of this CUDA C++ idiom is that it is verbose, ugly, and bug prone (it’s easy to type
blockIdx when you meant
blockDim). Maybe we can use C++11 range-based for loops to make it better. Wouldn’t it be nice to implement
count_if like this instead?
template
__device__ void count_if(int *count, T *data, int n, Predicate p) { for (auto i : grid_stride_range(0, n)) { if (p(data[i])) atomicAdd(count, 1); } }
To do this we just need to define a helper function
grid_stride_range() that takes start and end index, and returns an object with the appropriate interface for a C++ range, that steps through the range with a grid stride. Writing a range class is fairly straightforward; there are various examples on StackOverflow or Github. I chose one that I liked on Github,
range.hpp by Github user klmr. To make this useful in CUDA, I forked the repository and annotated all functions with
__host__ __device__ so the classes can be used on either the CPU or GPU (and I wrapped this in a macro so you can still compile it with any C++11 compiler). You can find the updated utilities on Github here.
These utilities let us define a range as simply as
range(0, n), and we can make it a strided range with
range(0, n).step(2). This creates an object representing the range from 0 to
n with a stride of 2. So I can create our CUDA
grid_stride_range() utility function like this:
#include "range.hpp" using namespace util::lang; // type alias to simplify typing... template
using step_range = typename range_proxy ::step_range_proxy; template __device__ step_range grid_stride_range(T begin, T end) { begin += blockDim.x * blockIdx.x + threadIdx.x; return range(begin, end).step(gridDim.x * blockDim.x); }
Now we can type
for(auto i : grid_stride_range(0, n)) when we want to implement a kernel that covers (in parallel) the entire range of indices from 0 to
n. This makes our grid-stride loop idiom clean, safe, and fast (in my limited testing performance was more or less the same as the explicit grid-stride loop).
It’s worth noting one other C++11 feature in use here: type aliases (sometimes called “template typedefs”). The code
template using step_range = ... gives us a way to define generic type aliases that act as synonyms for the original type. Here I just used it to simplify the long typename in klmr’s range classes. The C++14 standard defines a new feature that lets us use
auto as the return type of the function. In this case it would cause the compiler to deduce the return type from the call to
range().step(). Alas, C++14 features are not yet supported by
nvcc in CUDA 7; but we plan to support them in a future release.
With this, our example is complete. Here’s the code for a complete example. In it, we load the complete text of Tolstoy’s “War and Peace”, and run our
xyzw_frequency kernel on it. Here’s the output.
Read 3288846 byte corpus from warandpeace.txt counted 107310 instances of 'x', 'y', 'z', or 'w' in "warandpeace.txt"
As I mentioned in my post about CUDA 7 features, CUDA 7 also includes a major update to Thrust. Thrust is a powerful, open source C++ parallel algorithms library, and the new features of C++11 make Thrust more expressive than ever.
The most obvious way that Thrust benefits from C++11 is through the
auto keyword. You’ll find that
auto saves you from typing (knowing, even) the names of complex Thrust types. Here’s a rather extreme example where we need to store an instance of a complex iterator type in a variable:
typedef typename device_vector
::iterator FloatIterator; typedef typename tuple FloatIteratorTuple; typedef typename zip_iterator Float3Iterator; Float3Iterator first = make_zip_iterator(make_tuple(A0.begin(), A1.begin(), A2.begin()));
With
auto, the above code is drastically simplified:
auto first = make_zip_iterator(make_tuple(A0.begin(), A1.begin(), A2.begin()));
Thrust is designed to resemble the C++ Standard Template Library (STL), and just like the STL, C++11 lambda makes a powerful combination with Thrust algorithms. We can easily use lambda as an operator when we apply Thrust algorithms to
host_vector containers. Here’s a version of our
xyzw_frequency() function that executes on the host. Instead of our custom
count_if, it uses
thrust::count_if and a host-side lambda function.
void xyzw_frequency_thrust_host(int *count, char *text, int n) { const char letters[] { 'x','y','z','w' }; *count = thrust::count_if(thrust::host, text, text+n, [&](char c) { for (const auto x : letters) if (c == x) return true; return false; }); }
Another major new feature of Thrust is the ability to call Thrust algorithms from CUDA device code. This means that you can launch a CUDA kernel and then call Thrust algorithms such as
for_each and
transform_reduce from device threads. It also means that you can nest Thrust algorithm calls; in other words you could call
transform_reduce inside
for_each.
To make sure this can be done as efficiently and flexibly as possible, Thrust now provides execution policies to control how algorithms are executed. If you call a Thrust algorithm from a device thread (or inside another Thrust algorithm), then you can use the
thrust::seq execution policy to run the “inner” algorithm sequentially within a single CUDA thread. Alternatively, you can use
thrust::device to have the algorithm launch a child kernel (using Dynamic Parallelism). On devices that don’t support Dynamic Parallelism Thrust algorithms will be run on a single thread on the device.
Here’s another CUDA kernel version of
xyzw_frequency() that uses
thrust::count_if() just like before, except now it operates on device memory using a device-side lambda. Other than the
__global__ specifier and the
thrust::device execution policy, this code is identical to the host version!
__global__ void xyzw_frequency_thrust_device(int *count, char *text, int n) { const char letters[] { 'x','y','z','w' }; *count = thrust::count_if(thrust::device, text, text+n, [=](char c) { for (const auto x : letters) if (c == x) return true; return false; }); }
Note that
count_if() here is called with the
thrust::device execution policy, so each calling thread will perform the count using a dynamic parallel kernel launch on devices that support it (
sm_35 and higher). Therefore we launch the kernel using a single thread, and let Thrust generate more parallelism on the device. Using
thrust_seq here would probably perform much worse, since the device thread would process the entire text sequentially. (See the our series of posts on Dynamic Parallelism (1, 2, 3) to get a good understanding.)
It’s worth noting one other very minor change in this function versus the host version. Notice the capture list of the lambda. In the host version, it was
[&]. In the device version it must be
[=]. The reason is that when using Dynamic Parallelism, the child kernel cannot access the local memory of the parent kernel. So we must capture the letters array by value in the lambda, or the child kernel that executes the lambda will perform an invalid memory access.
In addition to device-side algorithms and C++11 support, Thrust now provides much higher performance for a number of algorithms, and supports execution of algorithms on specified CUDA streams. See the Thrust 1.8 changelog for more information.
Download the CUDA Toolkit version 7 now from CUDA Zone!
I’ll be covering this material and more in person at the 2015 GPU Technology Conference this week. If you missed by talk “CUDA 7 and Beyond” on Tuesday, March 17, I’ll be presenting it again as a featured talk on Friday, March 20. If you are unable to attend, the talk will also be available from GTC on demand.
The complete example from this post is available on Github. Note: CUDA 7 does have some documented limitations for C++11 feature support.]]>
The hottest area in machine learning today is Deep Learning, which uses Deep Neural Networks (DNNs) to teach computers to detect recognizable concepts in data. Researchers and industry practitioners are using DNNs in image and video classification, computer vision, speech recognition, natural language processing, and audio recognition, among other applications.
The success of DNNs has been greatly accelerated by using GPUs, which have become the platform of choice for training these large, complex DNNs, reducing training time from months to only a few days. The major deep learning software frameworks have incorporated GPU acceleration, including Caffe, Torch7, Theano, and CUDA-Convnet2. Because of the increasing importance of DNNs in both industry and academia and the key role of GPUs, last year NVIDIA introduced cuDNN, a library of primitives for deep neural networks.
Today at the GPU Technology Conference, NVIDIA CEO and co-founder Jen-Hsun Huang introduced DIGITS, the first interactive Deep Learning GPU Training System., so developers can extend or customize it or contribute to the project.
Deep Learning is an approach to training and employing multi-layered artificial neural networks to assist in or complete a task without human intervention. DNNs for image classification typically use a combination of convolutional neural network (CNN) layers and fully connected layers made up of artificial neurons tiled so that they respond to overlapping regions of the visual field.
Figure 2 shows a generic representation of a network with two hidden layers showing the interactions between layers. Feature processing occurs in each layer, represented as a series of neurons. Links between the layers communicate responses, akin to synapses. The general approach of processing data through multiple layers, performing feature abstraction at each layer, is analogous to how the brain processes information. The number of layers and their parameters can vary depending on the data and categories. Some deep neural networks are comprised of more than ten layers with more than a billion parameters [1][2].
DIGITS provides a user-friendly interface for training and classification that can be used to train DNNs with a few clicks. It runs as a web application accessed through a web browser. Figure 1 shows the typical user work flow in DIGITS. The first screen is the main console window, from which you can create databases from images and prepare them for training. Once you have a database, you can configure your network model and begin training.
The DIGITS interface provides tools for DNN optimization. The main console lists existing databases and previously trained network models available on the machine, as well as the training activities in progress. You can track adjustments you have made to network configuration and maximize accuracy by varying parameters such as bias, neural activation functions, pooling windows, and layers.
DIGITS makes it easy to visualize networks and quickly compare their accuracies. When you select a model, DIGITS shows the status of the training exercise and its accuracy, and provides the option to load and classify images while the network is training or after training completes.
Because DIGITS runs a web server, it is easy for a team of users to share datasets and network configurations, and to test and share results. Within an organization several people may use the same data set for training networks with different configurations.
DIGITS integrates the popular Caffe deep learning framework from the Berkeley Learning and Vision Center, and supports GPU acceleration using cuDNN to massively reduce training time.
Installing and using DIGITS is easy. Visit the digits home page, register and download the installer. Or, if you prefer, get the (Python-based) source code from Github.
Once everything is installed, launch DIGITS from its install directory using this command line:
python digits-devserver
Then, if DIGITS is installed on your local machine, load the DIGITS web interface in your web browser by entering the URL. If it is installed on a server you can replace
localhost with the server IP address or hostname.
When you first open the DIGITS main console it will not have any databases, as shown in Figure 2. Creating a database is easy. Select “Images” under “New Dataset” in the left pane. You have two options for creating a database from images: either add the path to the “Training Image” text box and let DIGITS create the training and validation sets, or insert paths to both sets using the “Upload Text Files” tab. Once the database paths have been defined use the “Create” button to generate the database.
After creating a database you should define the network parameters for training. Go back to the main console and select any previously created dataset under “New Model”. In Figure 4 we have selected “Database1″ from the two available datasets. Many of the features and functions available in the Caffe framework are exposed in the “Solver Options” pane on the left side. All network functions are available as well.
You have three options for defining a network: selecting a preconfigured (“standard”) network, a previous network, or a custom network, as shown in Figure 4 (middle). LeNet by Yann LeCunn and AlexNet from Alex Krizhevsky are the two preconfigured networks currently available. You can also modify these networks by selecting the customize link next to the network. This lets you modify any of the network parameters, add layers, change the bias, or modify the pooling windows.
When you customize the network you can visualize it by selecting the “Visualize” button in the upper-right corner of the network editing box. This is a handy network configuration checking tool that helps you visualize your network layout and quickly tells you if you have the wrong inputs into certain layers or forgot to put in a pooling function.
After the configuration is complete you can start training! Figure 5 shows training results for a two-class image set using the the example caffenet network configuration. The top of the training window has links to the network configuration files, information on the data set used, and training status. If an error occurs during training it is posted in this area. You can download the network configuration files from this window to quickly check parameters during training.
During training DIGITS plots the accuracy and loss values in the top chart. This is handy because it provides real-time visualization into how well or poorly the network is learning. If the accuracy is not increasing or is not as expected you can abort training and/or delete it using the buttons in the upper-right corner. The learning rate as a function of the training epoch is plotted in the lower plot.
You can classify images with the network using the interface below the plots. Like Caffe, DIGITS takes snapshots during training; you can use the most recent (or any previous) snapshot to classify images. You can select a snapshot with the “Select Model” drop-down menu and choosing your desired Epoch. Then simply input the URL of an online image or upload one from your local computer, and click “Test One Image”. You can also classify multiple images at once by uploading a text file with a list of URLs or images located on the host machine.
Let’s demonstrate DIGITS with a look at a test network and how I used it to put it through its paces. I chose a relatively simple task: identifying images of ships. I started with two categories, “ship” and “no ship”. I obtained approximately 34,000 images from ImageNet via the URL lists provided on the site and by manually searching on USGS. ImageNet images are pre-tagged, which made it easy to categorize the images I found there. I manually tiled and tagged all of the USGS images. My ship category comprises a variety of different marine vehicles including cruise, cargo, weather, passenger and container ships; oil tankers, destroyers, and small boats. My non-ship category includes images of beaches, open water, buildings, sharks, whales, and other non-ship objects.
I was a Caffe user, and DIGITS was immediately useful thanks to the user-friendly interface and the access it provides to Caffe features.
I have multiple datasets built from the same image data. One set comprises my original images and the others are modified versions. In the modified versions I mirrored all of the training images to inflate the dataset and try to account for variation in object orientations. I like that DIGITS displays all of my previously created datasets in the Main Console, and also in the network window, making it easy to select the one I want to use in each training exercise. It’s also easy to modify my networks, either by downloading a network file from a previous training run and pasting the modified version into the custom network box, or by loading one of my previous networks and customizing it.
DIGITS is great for sharing data and results. I live in LA and work with a team in Texas and the Washington DC area. We run DIGITS on an internal server that everyone can access. This allows us to quickly check and track the iterations on our network configurations and see how changes affect network performance. Anyone with access can also configure their own network in DIGITS and perform CNN training on this host. Their activities display in the main console too. To demonstrate, Figure 6 shows an image of my current console. Three datasets I currently have stored as well as complete and current models are posted.
DIGITS makes it easy for me to visualize my network when I classify an image. When classifying a single image it displays the activations at each layer as well as the kernels. I find it hard to mentally visualize a network’s response, but this feature helps by concisely showing all of the layer and activation information. Figure 7 shows an example of my test network correctly classifying an old photo of a military ship with 100% confidence, and Figure 8 shows the results of classifying a picture of me. It shows that my two-class ship/no-ship neural network is 100% sure that I am not a ship!
[1] Krizhevsky, A., Sutskever, I. and Hinton, G. E., ImageNet Classification with Deep Convolutional Neural Networks. NIPS 2012: Neural Information Processing Systems, Lake Tahoe, Nevada.
[2] Szegedy, C. et al., Going Deeper with Convolutions. September 12, 2014.
Hist.
H.
Numba.).
If you would like to learn more about the Densest k-Subgraph application and CUDA Python programming, come see my talk "Implementing Graph Analytics with Python and Numba" on Tuesday, March 17 2015 at 3:30pm at Room 210C of the San Jose Convention Center.
Readers of Parallel Forall can use the discount code GM15PFAB to get 20% off any conference pass! Don’t miss out and register now!
.]]> | http://devblogs.nvidia.com/parallelforall/feed | CC-MAIN-2015-18 | refinedweb | 6,678 | 53.81 |
PAPAGEIPAPAGEI
Papagei is a module that proposes an implementation for verbose logging. Python has options to do verbose logging, error warning and error handling and so on. However, multiple packages are often implied and implementing the desired messages might require multiple lines of code. Papagei is an attempt to make a module that allows to do verbose logging in a simple way without importing multiple packages imports and with a minimal number of lines of code for each call. Despite being fairly simple, papagei has the downside of being more rigid. It is good for simple cases and debug. For more complex error handling or message formatting you might want to get back to python built in functions and packages.
Using papageiUsing papagei
There are three major components in papagei:
- VerboseLevel(Enum) (class)
- VERBOSE (object of type VerboseLevel)
- The display functions
VerboseLevel and verbose:VerboseLevel and verbose:
In this implementation papagei has 6 verbose levels:
- SILENT: Nothing will be displayed no errors will be raised no warnings will be returned.
- ERROR: Only mock_errors() are displayed. errors are raised as usual.
- WARNINGS: Errors behave as usual, and warnings and mock_warnings as well.
- INFO: All messages from the previous levels plus the info messages.
- DEBUG: All messages from the previous levels plus the debug messages.
- FRIVOLOUS: All messages from the previous levels plus the frivolity messages.
The verbose level can be set using the VERBOSE variable and the VerboseLevel enum. For example:
VERBOSE = VerboseLevel.INFO
NOTE: Due to its simple implementation the verbose level in papagei only works on the functions form the papagei packages. In other words putting papagei.VERBOSE to silent will not silence errors raised outside of the papagei package, won't implement any warning filter to cancel out warnings from outside of the papapgei module and won't obliterate any print() done outside of the papagei module.
FunctionsFunctions
All functions are link to a specific debug level. Two functions are available for the ERROR level and the WARNING level. One uses the actual python warnings and error the other one (preceded by "mock_") only print a message in the console without interrupting the run of the program.
- error(*args): (Level: ERROR) Formats the args into a string and uses it to raise an error.
- mock_error(*args): (Level: ERROR) Formats the args into a string and prints them in an error-like format.
- warning(*args, **kwargs): (Level: WARNING) Formats the args into a string and uses it to generate a warning. The warning type can be changed by passing a Waring class through the key-word 'type'. The warning is displayed and the warning object is returned by the function.
- mock_warning(*args): (Level: WARNING) Formats the args into a string and displays it into a warning-like format.
- info(*args): (Level: INFO) Formats the args into a string and displays it into a specific info-format.
- debug(*args): (Level: DEBUG) Formats the args into a string and displays it into a specific debug-format.
- frivolity(*args): (Level: FRIVOLOUS) Formats the args into a string and displays it into a specific frivolity-format.
ExampleExample
from papagei import papagei as ppg ppg.VERBOSE = ppg.VerboseLevel.DEBUG ppg.debug('This is example', 1) # This message will show ppg.frivolity('This is example', 2) # This won't show
NOTE: The import statement has a slight redundancy in it. This should be fixed later.
Modifying the source codeModifying the source code
Even if it is not possible to add classes from outside of the package, the source code was made in a way that should make the adding, removing, moving or reformatting class easy.
Reformatting a classReformatting a class
The formatting of a class is done through the text_format dictionary. The value of the dictionary is added before each string of the corresponding level to format it. Chang the value in this dictionary to change the formats. Same goes for the text_header dictionary which displays a header at the beginning of a message.
Adding, removing or moving a classAdding, removing or moving a class
To move a class in the hierarchy all that has to be done it to change its position in the VerboseLevel(Enum) enum. This enum is auto-numbered so moving it will adapt the value of the item and the checks in every functions will be adapted. To add an item the corresponding VerboseLevel should be added in the enum. Then the text_format and text_header dictionaries should be updated. Finally a dedicated function for the new level can be written on the model debug, info or frivolity, using _format_string_from_tuple(string_tuple) to format *args into a single string. Same process can be followed in reverse to remove a class. | https://libraries.io/pypi/papagei-niederha | CC-MAIN-2021-49 | refinedweb | 778 | 54.93 |
This project is archived and is in readonly mode.
Error on processing placeholders with negative integer
Reported by Psycopg website | May 29th, 2011 @ 10:44 PM
Submitted by: nk.0@ya.ru (Konstantin Nikitin)
Here is my error log
import psycopg2 connection = psycopg2.connect("user=konstantin dbname=konstantin") cursor = connection.cursor() cursor.execute("select 1-%s from table", (2, )) cursor.fetchall() [(-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,)] cursor.execute("select 1-%s from table", (-1, )) cursor.fetchall() [(1,)] cursor.execute("select 1 - %s from table", (-1, )) cursor.fetchall() [(2,), (2,), (2,), (2,), (2,), (2,), (2,), (2,)]
xni May 30th, 2011 @ 10:20 PM
This issue is fixed in 2.4.2...
Daniele Varrazzo June 3rd, 2011 @ 10:25 AM
- State changed from new to resolved. | https://psycopg.lighthouseapp.com/projects/62710/tickets/57 | CC-MAIN-2015-18 | refinedweb | 127 | 53.47 |
XUL-Enhanced Web Apps
February 6, 2007.
If possible, you will want to open this page in Firefox. The side-by-side examples below will not make much sense otherwise.
Side-by-Side Tabbed Panel Example
- On the left we have a (very) basic DHTML implementation of tabbed panels.
- On the right, provided you are using Firefox, you will see the same panel rendered with XUL.
- In any other browser, the second panel degrades to the DHTML implementation and therefore looks identical to the one on the left.
What is XUL?
From The Joy of XUL:
XUL (pronounced "zool") is Mozilla's XML-based user interface language that lets you build feature-rich cross-platform applications that can run connected to or disconnected from the Internet.
The user interfaces of Firefox, Thunderbird, and other Mozilla applications are written in XUL. As you might expect, XUL contains elements for the most common UI widgets, such as menus, toolbars, buttons, lists, and so on (Firefox-only link).
The rendering engine for XUL is called Gecko. If that sounds familiar, it's because Gecko is the same engine that renders HTML web pages in Mozilla's browsers. This means two things:
- You can run XUL-based applications in Firefox. They're known as "remote xul" applications.
- You can mix XUL and HTML markup in Firefox, and this approach is known as writing "XUL-in-HTML" applications.
XUL-in-HTML is that "little-known" technique that we're interested in here.
Cross-Browser Compatibility
The most obvious drawback to XUL is that it is not supported by most browsers (Internet Explorer, Safari, Opera...). Any web application relying on XUL needs to fall back on DHTML widgets for cross-browser compatibility. So the question is not about choosing XUL or DHTML, but if XUL and DHTML is a worthy development approach.
The last thing a developer wants is to maintain two different code-bases to support different browsers. And isn't that why JavaScript libraries and frameworks were created in the first place? There are so many differences among browsers that any web-based application these days relies on a JavaScript framework of some sort to abstract those annoying browser quirks.
So if you are already using a JavaScript library, could that library handle XUL for you? The response is yes. Would that lead to a bloated and slow library? Well, no, but allow me to demonstrate.
XUL vs. DHTML widgets
There are plenty of DHMTL-based widget libraries (you may also call them JavaScript or Ajax widgets). Yahoo's YUI Library, Dojo's widgets, Adobe's Spry to name just a few, so why would you want to bother with XUL?
XUL widgets are faster, more accessible, and come with more built-in behaviors than their DHTML counterparts. Simply put, with XUL the user experience feels much more like a desktop application than a web-based one.
The difference in performance is more striking with complex widgets, so let's compare a DHTML tree with a XUL tree.
Side-by-Side Tree Example
Again, the XUL tree on the right is only visible in Firefox. In any other browser it degrades to DHTML and looks identical to the one on the left.
Rendering Speed Comparison
The time measured here is the time necessary to render the tree once all the required resources have been loaded. If you have the Firebug extension for Firefox, you can easily run a profiling test on your own using the "render again" links.
Here are the results for different sizes of tree. You can see that the XUL widget renders two to six times faster than the DHTML widget.
File Size Comparison
Now, what about the overhead induced by adding the XUL implementation on top of the DHTML code? Here's the comparison.
First, the XUL implementation adds only 5Kb of code and 1Kb of css styling. Secondly, XUL actually loads fewer images than the DHTML version. This is interesting because the XUL widget looks much nicer and uses more graphics. This is possible because XUL widgets inherit the browser's theme. The images, stylesheets, and JavaScript code used by XUL are therefore already present in the browser and don't need to be downloaded from the website.
Feature Comparison
If you play a bit with the XUL tree, you can see that:
- you can open and collapse branches,
- you can resize the columns,
- the alternated row colors are maintained regardless of the state of the tree.
These are all built-in behaviors; no additional code was required.
In comparison, in DHTML, I implemented the expand/collapse functionality and left out the other behaviors to keep the code lean and fast. Of course, it is possible to create a DHTML tree that implements all the features offered by XUL and more (see, for example, Jack Slocum's improved YUI Tree widget).
The point of this comparison is to suggest that you could take some of XUL's built-in features as an enhancement reserved for users with a XUL-compatible browser (Firefox, that is) and keep an efficient, trimmed-down, DHTML version for your other users.
Accessibility Comparison
XUL comes with built-in accessibility features. You can select the XUL tree with the Tab key. You can navigate up and down the tree by using the up and down arrow keys. You can expand and collapse tree branches with the left and right arrow keys.
These are the kinds of features that are often overlooked by developers, so it's nice to get them for free when using XUL.
The Tabbed Panels Example Deconstructed
Let's dive into the code now to see how the UI library can manage both XUL and DHTML with very little overhead.
Here is the markup we are working with:
<div id="myTabBox"> <div id="tabPanelA" class="tabPanel" title="First Tab">First tab content</div> <div id="tabPanelB" class="tabPanel" title="Second Tab">Second tab content</div> <div id="tabPanelC" class="tabPanel" title="Third Tab">Third tab content</div> </div>
Each div with the class "tabPanel" represents a different panel. The title attribute will be used as the label of the tab. Note that if JavaScript is disabled, the widget will not be rendered but the content will still be accessible.
With a simple script, we can parse that HTML and find our tabs.
var tabbox = document.getElementById("myTabBox"); var tabcontent = tabbox.childNodes; var tabcount = tabcontent.length;
By the way, this was edited for clarity; the code used in the UI library is a more
complex
and more flexible.
Select the Right Implementation
Now we need to figure out if the browser supports XUL. You can do this by checking the userAgent string. We'll look for the "Gecko/" string, which uniquely identifies Mozilla's Gecko-based browsers. Note that "Gecko" alone is not enough since Apple's Safari browser includes the word "Gecko" in its user-agent string ("...KHTML, like Gecko...").
function hasSupportForXUL() { if(navigator.userAgent && navigator.userAgent.indexOf("Gecko") != -1) return true; else return false; }
DHTML Implementation
If the browser doesn't support XUL, we'll use JavaScript to create some additional markups and implement the tab-switching behavior.
if (!hasSupportForXUL()) { var tabdiv = document.createElement("div"); tabbox.insertBefore(tabdiv, tabbox.firstChild); for(var i=0;i<tabcount;i++) { var label = tabcontent[i].getAttribute("title"); var tab = document.createElement("a"); tab.onclick = switchtab; tab.className = "tab"; tab.appendChild(document.createTextNode(label)); tabdiv.appendChild(tab); } }
This code, simplified for clarity, creates a DIV element around the three panels. Then it loops through each panel and adds the tab (a CSS-styled link).
Here is the tab-switching function: :
function switchtab() { var tabdiv = this.parentNode; for(var i=0;i < tabdiv.childNodes.length; i++) { if (tabdiv.childNodes[i]!=this) tabcontent[i].style.display = "none"; else tabcontent[i].style.display = "block"; } }
XUL implementation
If the browser is XUL compatible, we need to replace the HTML markup with XUL markup. The result should look like this:
<xul:tabbox> <xul:tabs> <xul:tab <xul:tab <xul:tab </xul:tabs> <xul:tabpanels> <xul:tabpanel><div>First tab content</div></xul:tabpanel> <xul:tabpanel><div>Second tab content</div></xul:tabpanel> <xul:tabpanel><div>Third tab content</div></xul:tabpanel> </xul:tabpanels> </xul:tabbox>
Here's the JavaScript that will take care of this (abbreviated for clarity; see the source for complete code).
if (hasSupportForXUL()) { var tabBox = document.createElementNS("","tabbox"); var tabs = document.createElementNS("","tabs"); tabBox.appendChild(tabs); var tabpanels = document.createElementNS("","tabpanels"); tabBox.appendChild(tabpanels); for(var i=0;i<tabcount;i++) { var tab = document.createElementNS("","tab"); tabs.appendChild(tabs); tab.appendChild(tabcontent[i]); var tabpanel = document.createElementNS("","tabpanel"); tabpanels.appendChild(tabpanel); } // remove unused HTML markup document.getElmentById("myTabBox").innerHTML = ""; // insert XUL markup in page document.getElmentById("myTabBox").appendChild(tabBox);
The implementation logic is about the same, but you'll notice two big differences.
First,
we use
createElementNS, and we specify the XUL namespace. This lets the browser
know that it needs to handle this markup as XUL and not HTML.
The second difference is that we don't have a switchtab function. We don't need it
because tab switching is a built-in behavior of the XUL
tabbox element.
Just a Few More Things to Consider
Theme Inheritance
XUL widgets inherit by default the look and feel of your browser. On one hand, this is nice because it requires less work, less styling, and fewer resources to download. I could also argue that it improves usability since we are adopting the browser UI standards.
On the other hand, the application designer may not agree with surrendering all control over the look and feel of the application. Well, it doesn't have to be this way. It is possible to fully customize the XUL widgets with CSS. Unfortunately, this is one situation where you will end up with two different versions of the same stylesheet, making maintenance more difficult.
Security Restrictions
XUL is generally intended to run with high security privileges, as the browser user interface itself or as a browser extension. When writing XUL-in-HTML, we are running with the most restricted security setting and some XUL widgets may not work as expected in this context. This requires jumping through a few hoops to work around the restrictions. The tree widget, for instance, comes with a very nice drag-and-drop functionality. Unfortunately, this feature simply won't work when used in a web page. It must be reimplemented without using the functions that require higher security privileges.
Conclusion
This article uses a proof-of-concept library, named hXUL (for lack of a better idea). This library implements the tabbed panel and tree widgets in both XUL and DHTML (including the drag and drop for the tree). You can download it here. There are few more widgets that would be good candidates for inclusion--the combo-box, for instance. But maybe a better approach would be to build upon an existing library. Dojo's multiple renderer approach could make it a good fit, but it is certainly not the only one. With such a library, any developer could deliver a XUL-enhanced application with a fast and accessible user interface for Firefox users without sacrificing cross-browser compatibility. | http://www.xml.com/pub/a/2007/01/31/xul-enhanced-web-apps.html | CC-MAIN-2017-17 | refinedweb | 1,872 | 56.66 |
Namespace:Namespace
From Uncyclopedia, the content-free encyclopedia
Is it necessary to have a namespace for discussing the necessity of namespaces? Discuss here:
- Yes, of course it is! You don't expect us to discuss such matters in the, heaven forbid, forum, do you?!? --⇔ Sir Mon€¥$ignSTFU F@H|CUNT|+S 23:54, 5 March 2006 (UTC)
- Zombiebaron, that was just a joke...
- This is horrible. How bout I spruce up the place a bit?
- Ah, there we are. Looks better already!:21, 6 March 2006 (UTC)
The talk page is Talk:Namespace:Namespace, meaning this isn't a real namespace. Yet. Where's an admin when you need one? --[[User:Nintendorulez|Nintendorulez | talk]] 11:52, 6 March 2006 (UTC)
- "Zombiebaron, that was just a joke..." I don't get the joke! WHERE IS THE JOKE! and someone please set the logo to my faboulus logo. --Brigadier General Sir Zombiebaron 01:11, 7 March 2006 (UTC)
- guys we cant discuss Namespace:Namespace here, this is where we discuss new namespaces, not namespace:namespace itself! - 03:40, 7 March 2006 (UTC)
- Yes we can, Namespace:Namespace is for discussing the Namespace Namspace, Namespace:Game is for the Game Namespace, as is for Namespace:Talk, Namespace:Main, Namespace:Game talk, Namespace:Namespace talk etc to their appropriate namepaces. ~ 17:57, 2 April 2006 (UTC)
- Maybe UnMeta should be moved and created as a new namespace? --Carlb 13:35, 23 April 2006 (UTC)
- No, it's too big. ~ 13:36, 23 April 2006 (UTC)
- Actually, it'd better to move this to UnMeta to prevent UnMeta from dying see Forum:UnMeta. ~ 10:22, 21 April 2006 (UTC)
Also, we need a Main Page
At Namespace:Main Page. Whaddya think? ~ 13:45, 15 August 2006 (UTC) | http://uncyclopedia.wikia.com/wiki/Namespace:Namespace | CC-MAIN-2015-27 | refinedweb | 292 | 76.01 |
We have successfully created the
IsPalindrome() extension method in the previous section. It's quite easy to call the extension method since it's defined inside the same namespace as the caller method. In other words, the
IsPalindrome() extension method and the
Main() method are in the same namespace. We don't need to add a reference to any module since the method is there along with the caller. However, in common practice, we can create extension methods in the other assemblies, which we usually call class library. The use of the library will ease the use of the extension method since it can be reused, so we can use the extension method in many projects.
We are going to create ...
No credit card required | https://www.oreilly.com/library/view/functional-c/9781785282225/ch04s02.html | CC-MAIN-2019-26 | refinedweb | 125 | 72.87 |
Minimizing maximum lateness : Greedy algorithm
Since we have chosen the greed, let continue with it for one more post at least. Today’s problem is to minimize maximum lateness of a task. Let me clarify the problem: given a processor which processes one process at a time and as always given a list of processes to be scheduled on that processor, with the intention that maximum late process should be minimized. Contrary to previous problems, this time, we are not provided with start time and end time, but we are given length of time ti process will run and deadline it has to meet di, fi is actual finish time of process completion.
Lateness of a process is defined as
li = max{0, fi − di}, i.e. the length of time past its deadline that it finishes.
Goal here to schedule all tasks to minimize maximum lateness L = max li For example:
/>
Minimizing maximum lateness : algorithm
Let’s decide our optimization strategy. There is some order in which jobs can be decided: shortest job first, earliest deadline first, least slack time first.
Let’s see if any of the above strategies work for the optimal solution. For shortest processing time first, consider example P1 = (1,100) P2 = (10, 10). If we schedule the shortest job first as in order (P1, P2), lateness will be 91, but if we take them as (P2, P1), lateness will be 0. So, clearly taking the shortest process first does not give us an optimal solution.
Check for the smallest slack time approach. See if you can come up with some counterexample that it does not work.
That leaves us with only one option, take the process which has the most pressing deadline, that is the one with the smallest deadline and yet not scheduled. If you have noticed, the example given for the problem statement is solved using this method. So, we know it works.
- Sort all job in ascending order of deadlines
- Start with time t = 0
- For each job in the list
- Schedule the job at time t
- Finish time = t + processing time of job
- t = finish time
- Return (start time, finish time) for each job
Minimizing maximum lateness : implementation
from operator import itemgetter jobs = [(1, 3, 6), (2, 2, 9), (3, 1, 8), (4, 4, 9), (5, 3, 14), (6, 2, 15)] def get_minimum_lateness(): schedule =[]; max_lateness = 0 t = 0; sorted_jobs = sorted(jobs,key=itemgetter(2)) for job in sorted_jobs: job_start_time = t job_finish_time = t + job[1] t = job_finish_time if(job_finish_time > job[2]): max_lateness = max (max_lateness, (job_finish_time - job[2])) schedule.append((job_start_time, job_finish_time)) return max_lateness, schedule max_lateness, sc = get_minimum_lateness(); print "Maximum lateness will be :" + str(max_lateness) for t in sc: print t[0], t[1]
The complexity of implementation is dominated by sort function, which is
O(nlogn), rest of processing takes
O(n).
Please share your suggestions or if you find something is wrong in comments. We would love to hear what you have to say. If you find this post interesting, please feel free to share or like. | https://algorithmsandme.com/tag/greedy-algorithm/ | CC-MAIN-2020-40 | refinedweb | 507 | 65.96 |
Hi Stefano,
On Fri, 30 Nov 2001 19:38:31 +0100
Stefano Mazzocchi <stefano@apache.org> wrote:
> I've just reread the XUpdate spec. Here are my quick comments:
>
> 1) no namespace support. I mean, there is no namespace concept taken
> into consideration in the spec.
>
> This would, *alone*, make it virtually useless for any serious XML
> storage.
What do you mean with 'namespace support' and 'namespace concept'?
XUpdate is namespace aware, like the XPath implementation you are using. And so there is no
need for a special 'namespace concept' inside of XUpdate.
Or I've overlooked something?
We are working with XUpdate in some projects. There we deal intensive with namespaces, without
any problems resulting from the XUpdate or lexus specification.
Regards
Steffen...
--
______________________________________________________________________
Steffen Stundzig mailto:steffen@smb-tec.com
SMB GmbH
---------------------------------------------------------------------
To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org
For additional commands, email: cocoon-dev-help@xml.apache.org | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200112.mbox/%3C20011203151006.1f4ee248.Steffen.Stundzig@smb-tec.com%3E | CC-MAIN-2015-32 | refinedweb | 154 | 61.83 |
Investors in Ciena Corp (Symbol: CIEN) saw new options begin trading today, for the September 27th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the CIEN options chain for the new September 27th contracts and identified one put and one call contract of particular interest.
The put contract at the $43.00 strike price has a current bid of $2.53. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $43.00, but will also collect the premium, putting the cost basis of the shares at $40.47 (before broker commissions). To an investor already interested in purchasing shares of CIEN, that could represent an attractive alternative to paying $43.23 46.69% annualized — at Stock Options Channel we call this the YieldBoost.
Below is a chart showing the trailing twelve month trading history for Ciena Corp, and highlighting in green where the $43.00 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $43.50 strike price has a current bid of $2.55. If an investor was to purchase shares of CIEN stock at the current price level of $43.23/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $43.50. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 6.52% if the stock gets called away at the September 27th $43.50 strike highlighted in red:
Considering the fact that the $43 48%..90% boost of extra return to the investor, or 46.80% annualized, which we refer to as the YieldBoost.
The implied volatility in the put contract example is 47%, while the implied volatility in the call contract example is 46%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 250 trading day closing values as well as today's price of $43.23) to be 41%.. | https://www.nasdaq.com/articles/interesting-cien-put-and-call-options-for-september-27th-2019-08-12 | CC-MAIN-2022-27 | refinedweb | 347 | 64.91 |
HashiCorp Cloud Platform (HCP) Vault enables you to quickly deploy a Vault Enterprise cluster in AWS. As a fully managed service, it allows you to leverage Vault as a central secret management service while offloading the operational burden to the Site Reliability Engineering (SRE) experts at HashiCorp.
In this tutorial, you will deploy a Vault Enterprise cluster guided by the HCP portal quickstart.
»Prerequisites
You will need an HCP account.
Previous experience with Vault and Vault Enterprise are not required to deploy a Vault server in HCP.
»Create a Vault cluster
Launch the HCP Portal and login.
HashiCorp Cloud Platform (HCP) provides your account with an organization. Your account may invite others to join your organization or you may be invited to join other organizations.
Choose your organization.
From the Overview page, click Deploy Vault.
At the Create a HashiCorp Virtual Network page, you can accept or modify the default Network name.
Select the desired AWS region from the Region selection drop-down list.
Accept or modify the default CIDR block.
Click Create network. This takes a few minutes.
Once the network is created, click +Create cluster and select Vault.
Accept or edit the default Cluster ID (
vault-cluster).
Under Choose a tier, select Development for this tutorial.
For a development cluster, Extra Small is the only available cluster size.
Shift the toggle button for the Allow public connections from outside our selected network option.
Click Create cluster.
»Vault cluster overview
The Vault page displays the created Vault cluster. Within that view, the Overview tab displays the Vault configuration. These details enable you to administer the Vault server through the Web UI or command-line interface (CLI).
NOTE: The cluster is created with a top-level Namespace called
admin. Namespaces
enable you to create isolated Vault environments.
»Access the Vault cluster
Under Vault configuration, click the Public Cluster URL.
In a new browser window, enter the copied address.
The login page is displayed. By default Vault enables the token authentication method.
Return to the Vault configuration and click +Generate token.
When a confirmation dialog appears, click Generate admin token to proceed. An Admin Token pop-up dialog displays the token.
Copy the Admin Token.
Return to the Vault UI, enter the token in the Token field.
Click Sign In.
Notice that your current namespace is
admin/.
Login did not require you to specify the
admin namespace because it is
embedded in the token. For example, the token
s.jcB5UmbkSYut4HBMY8GDPC8Q.jwJJp
defines its type (
s), the token (
jcB5UmbkSYut4HBMY8GDPC8Q), and the
namespace (
jwJJp).
»Next steps
You created a Vault cluster and logged into the cluster at its
admin
namespace. In Vault Enterprise, each namespace can be treated as its own
isolated Vault environment. Learn more about namespaces in the Multi-tenancy
with Namespaces tutorial. | https://learn.hashicorp.com/tutorials/cloud/get-started-vault | CC-MAIN-2021-39 | refinedweb | 462 | 60.72 |
The Microsoft Office Project 2007 programming references include the Project Server Interface (PSI) Reference and an overview of changes in the Microsoft Office Project Standard 2007 and Project Professional 2007 Visual Basic for Applications (VBA) object model. The complete Project VBA reference is available in Project Help and in the MSDN Library.
The Reporting Database Schema Reference is in a separate help file (pj12ReportingDB.chm) in the Project 2007 SDK download.
Tables of VBA Object Model Changes Lists all of the new and changed objects, methods, properties, events, and enumerations.
PSI Reference Overview Describes the assemblies and namespaces in PSI Reference for Microsoft Office Project Server 2007.
Prerequisites for Reference Code Samples Learn how to use the PSI Reference code samples in your test applications.
Project Server Error Codes Lists the codes and descriptions of Project Server errors by functional area.
XML Schema References for Project Describes the XML schemas for Project Server. | http://msdn.microsoft.com/en-us/ms481966.aspx | crawl-002 | refinedweb | 153 | 54.32 |
Popups may have owner elements that control their position and visibility. The showTrigger and hideTrigger properties of the Popup control determine whether the Popup should be shown or hidden when the owner element is clicked or when the popup loses the focus.
In Bootstrap CSS, Popups with owner elements are called "Popovers". The most common type of Popover is the one with showTrigger set to 'Click' and hideTrigger set to 'Blur'.
Example: The owner element is a button. Clicking the button will show the Popover. Clicking anywhere outside the Popover to take the focus away and it will hide the popup:
<button id="btnClickBlur" class="btn btn-primary"> Show the Popover </button> <div id="popClickBlur" class="popover"> <h3 class="popover-title"> Title </h3> <div class="popover-content"> Hello Popup<br/> This is a multi-line message! This is a long line in my popover, which uses Bootstrap's 'popover-content' style. </div> </div>
import * as input from '@grapecity/wijmo.input'; function init() { let popClickBlur = new input.Popup('#popClickBlur', { owner: document.getElementById('btnClickBlur'), showTrigger: 'Click', hideTrigger: 'Blur' }); }
Submit and view feedback for | https://www.grapecity.com/wijmo/docs/master/Topics/Input/PopUp/Popup-OwnerElement | CC-MAIN-2022-05 | refinedweb | 181 | 57.77 |
StringBuffer represents grow able and write able character sequences. We can modify the StringBuffer object, i.e. we can append more characters or may insert a substring in middle. StringBuffer will automatically grow to make room for such additions and is very flexible to the modifications.
StringBuffer Constructors
StringBuffer defines these three constructors:
StringBuffer()
It reserves room for 16 characters without reallocation.
StringBuffer(int size)
It accepts an integer argument that explicitly sets the size of the buffer.
StringBuffer(String str)
It accepts a String argument that sets the initial contents of the StringBuffer object and reserves room for 16 more characters without reallocation.
length( ) and capacity( )
The current length of a StringBuffer can be found by the length() method, while the total allocated capacity can be found through the capacity() method.
Syntax:
int length()
int capacity()
ensureCapacity( )
If we want to preallocate room for a certain number of characters after a StringBuffer has been constructed, we can use ensureCapacity() to set the size of the buffer.
Syntax:
void ensureCapacity(int capacity)
Here, capacity specifies the size of the buffer.
setLength()
It is used to set the length of the buffer within a StringBuffer object.
Syntax:
void setLength(int len)
Here, len specifies the length of the buffer. This value must be nonnegative. When we increase the size of the buffer, null characters are added to the end of the existing buffer. If we call setLength() with a value less than the current value returned by length(), then the characters stored beyond the new length will be lost.
charAt() and setCharAt( )
A single character can be obtained from a StringBuffer by the charAt() method. We can even set the value of a character within a StringBuffer using setCharAt().
Syntax:
char charAt(int loe}
void setCharAt(int loc, char ch)
loc specifies the location of the character being obtained. For setCharAt(), loc specifies the location of the character being set, and ch specifies the new value of that character. For both methods, loc must be nonnegative and must not specify a location beyond the end of the buffer.
getChars()
To copy a substring of a StringBuffer into an array, use the getChars() method.
Syntax:
void getChars(int si, int e, char t[ ], int ti)
Here, si specifies the index of the beginning of the substring, and e specifies an index that is one past the end of the desired substring. This means that the substring contains the characters from si through e-l. The array that will receive the characters is specified by t. The index within t at which the substring will be copied is passed in ti.
append( )
The append() method concatenates the string representation of any other type of data to the end of the invoking StringBuffer object..
Syntax:
StringBuffer append(String str)
StringBuffer append(int num)
StringBuffer append(Object obf)
String.valueOf() is called for each parameter to obtain its string representation. The result is appended to the current StringBuffer object. The buffer itself is returned by each version of append(). back into a constant String
import java.io.*;
public class ReversedString
{
public static String reverseIt(String s)
{
int i, n;
n = s.length();
StringBuffer d = new StringBuffer(n);
for (i = n - 1; i >= 0; i--)
d.append(s.charAt(i));
return d.toString();
}
public static void main (String args[]) throws IOException
{
BufferedReader k=new BufferedReader(new InputStreamReader(System.in));
String p,q;
System.out.println("Enter a string");
p=k.readLine();
q=reverseIt(p);
System.out.println("Original String is "+p);
System.out.println("Reversed String is "+q);
}
}
Insert()
The insert() method inserts one string into another. It is overloaded to accept values of all the simple types, plus Strings and Objects. Like append(), it calls String.valueOf () to obtain the string representation of the value it is called with. This string is then inserted into the invoking StringBuffer object.
Syntax:
StringBuffer insert(int loc, String str)
StringBuffer insert(int loc, char ch)
StringBuffer insert(int loc, Object obj)
Here, loc specifies the location at which point the string will be inserted into the invoking StringBuffer object.
public class StringInsert
{
public static void main(String args[])
{
StringBuffer sb = new StringBuffer(" in ");
sb.append("God");
sb.append('!');
sb.insert(0,"Beleive");
sb.append('\n');
sb.append("God is Great");
sb.setCharAt(20,'I');
String s = sb.toString();
System.out.println(s);
}
}
reverse()
We can reverse the characters within a StringBuffer object using reverse()·
import java.io.*;
class StringReverse
{
public static void main(String args[])
{
StringBuffer k=new StringBuffer("Hello Java");
System.out.println(k);
k.reverse();
System.out.println(k);
}
}
delete( ) and deleteCharAt( )
StringBuffer delete(int s, int e)
StringBuffer deleteCharAt(int loc)
The delete() method deletes a sequence of characters from the invoking object. Here, s specifies the starting location of the first character to remove, and e specifies location one past the last character to remove. Thus, the substring deleted runs from s to e-l. The resulting StringBuffer object is returned.
The deleteCharAt() method deletes the character at the location specified by loco It returns the resulting StringBuffer object.
Replace()
It replaces one set of characters with another set inside a StringBuffer object.
Syntax:
StringBuffer replace(int s, int e, String str)
The substring being replaced is specified by the location sand e. Thus, the substring at s through e-l is replaced by str string.
substring( )
It returns a portion of a StringBuffer.
Syntax:
String substring(int s)
String substring(int s, int e)
The first form returns the substring that starts at s and runs to the end of the invoking StringBuffer object. The second form returns the substring that starts at s and runs through e-l.
public class StringBuffer2
{
public static void main (String args[])
{
StringBuffer p= new StringBuffer("God is Great");
char k[] = new char[20];
char q;
p.getChars(7,11,k,0);
System.out.println("Original string is "+p);
p.delete(4,6);
System.out.println("string after deleting a word "+p);
p.insert(4,"is");
System.out.println("string after inserting a word "+p);
p.deleteCharAt(4);
System.out.println("string after deleting a character"+p);
p.replace(4,5,"always");
System.out.println("string after replacing a word is "+p);
q=p.charAt(0);
System.out.println("the first character of string is " | http://ecomputernotes.com/java/array/stringbuffer-class | CC-MAIN-2019-39 | refinedweb | 1,048 | 57.67 |
I/O Completion Ports (IOCP) supported on Microsoft Windows platforms has two facets. It first allows I/O handles like file handles, socket handles, etc., to be associated with a completion port. Any async I/O completion event related to the I/O handle associated with the IOCP will get queued onto this completion port. This allows threads to wait on the IOCP for any completion events. The second facet is that we can create a I/O completion port that is not associated with any I/O handle. In this case, the IOCP is purely used as a mechanism for efficiently providing a thread-safe waitable queue technique. This technique is interesting and efficient. Using this technique, a pool of a few threads can achieve good scalability and performance for an application. Here is a small example. For instance, if you are implementing a HTTP server application, then you need to do the following mundane tasks apart from the protocol implementation:
You can implement it by creating one dedicated thread per client connection that can continuously communicate with the client to and fro. But this technique quickly becomes a tremendous overhead on the system, and will reduce the performance of the system as the number of simultaneous active client connections increase. This is because, threads are costly resources, and thread switching is the major performance bottle neck especially when there are more number of threads.
The best way to solve this is to use an IOCP with a pool of threads that can work with multiple client connections simultaneously. This can be achieved using some simple steps...
This technique will allow a small pool of threads to efficiently handle communication with hundreds of client connections simultaneously. Moreover, this is a proven technique for developing scalable server side applications on Windows platforms.
The above is a simplified description of using IOCP in multithreaded systems. There are some good in-depth articles on this topic in CodeProject and the Internet. Do a bit of Googling on words like IO Completion Ports, IOCP, etc., and you will be able to find good articles.
Managed IOCP is a small .NET class library that provides the second facet of Native Win32 IOCP. This class library can be used both by C# and VB.NET applications. I chose the name Managed IOCP to keep the readers more close to the techniques they are used to with native Win32 IOCP. As the name highlights, Managed IOCP is implemented using pure .NET managed classes and pure .NET synchronization primitives. At its core, it provides a thread-safe object queuing and waitable object receive mechanism. Apart from that, it provides a lot more features. Here is what it does:
System.Objecttypes to a threadsafe queue maintained by each Managed IOCP instance.
Managed IOCP can be used in other scenarios apart from the sample that I mentioned in the introduction to native Win32 IOCP. It can be used in process oriented server side business applications. For instance, if you have a business process ( _not_ a Win32 process) with a sequence of tasks that will be executed by several clients, you will have to execute several instances of the business process, one for each client in parallel. As mentioned in my introduction to native Win32 IOCP, you can achieve this by spawning one dedicated thread per business process instance. But the system will quickly run out of resources, and the system/application performance will come down as more instances are created. Using Managed IOCP, you can achieve the same sequential execution of multiple business process instances, but with fewer threads. This can be done by dispatching each task in a business process instance as an object to Managed IOCP. It will be picked up by one of the waiting threads and will be executed. After completing the execution, the thread will dispatch the next task in the business process instance to the same Managed IOCP, which will be picked up by another waiting thread. This is a continuous cycle. The advantage is that you will be able to achieve the sequential execution goal of a business process, as only one waiting thread can receive a dispatched object, and at the same time keep the system resource utilization to required levels. Also, the system and business process execution performance will increase as there are few threads executing multiple parallel business processes.
Multithreaded systems are complex in the context that most problems will show up in real time production scenarios. To limit the possibility of such surprises while using Managed IOCP, I created a test application using which several aspects of the Managed IOCP library can be tested. Nevertheless, I look forward for any suggestions/corrections/inputs to improve this library and its demo application.
Before getting into the demo application, below is the sequence of steps that an application would typically perform while using the Managed IOCP library:
ManagedIOCPclass:
using Sonic.Net; ManagedIOCP mIOCP = new ManagedIOCP();
The
ManagedIOCP constructor takes one argument,
concurrentThreads. This is an integer that specifies how many maximum concurrent active threads are allowed to process objects queued onto this instance of
ManagedIOCP. I used a no argument constructor, which defaults to a maximum of one concurrent active thread.
ManagedIOCPinstance, call the
Register()method on the
ManagedIOCPinstance. This will return an instance of the
IOCPHandleclass. This is like native Win32 IOCP handle, using which the registered thread can wait on the arrival of objects onto the
ManagedIOCPinstance. This thread can use the
Wait()method on the
IOCPHandleobject. The
Wait()will indefinitely wait until it grabs an object queued onto the
ManagedIOCPinstance to which the calling thread is registered. It either comes out with an object, or an exception in case the
ManagedIOCPinstance is stopped (we will cover this later).
IOCPHandle hIOCP = mIOCP.Register(); while(true) { try { object obj = hIOCP.Wait(); // Process the object } catch(ManagedIOCPException e) { break; } catch(Exception e) { break; } }
ManagedIOCPinstance and any non-registered thread) that has access to the
ManagedIOCPinstance can dispatch (
Enqueue) objects to it. These objects are picked up by waiting threads that are registered with the
ManagedIOCPinstance onto which objects are being dispatched.
string str = "Test string"; mIOCP.Dispatch(str);
ManagedIOCPinstance.
mIOCP.UnRegister();
ManagedIOCP, it should call the
Close()method on it. This will release any threads waiting on this instance of
ManagedIOCP, clears internal resources, and resets the internal data members, thus providing a controlled and safe closure of a
ManagedIOCPinstance.
mIOCP.Close();
There are certain useful statistics that are exposed as properties in the
ManagedIOCP class. You can use them for fine tuning the application during runtime.
// Current number of threads that are // concurrently processing the objects queued // onto this instance of Managed IOCP // (This is readonly property) int activeThreads = mIOCP.ActiveThreads;
// Max number of concurrent threads // allowed to process objects queued onto this // instance of Managed IOCP (This is a read/write property) int concurThreads = mIOCP.ConcurrentThreads;
// Current count of objects queued onto this Managed IOCP instance. // NOTE: This value may change very quickly // as multiple concurrent threads might // be processing objects from this instance of Managed IOCP queue. // So _do not_ depend on this value // for logical operations. Use this only for // monitoring purpose (Status reporting, etc.) // and during cleanup processes // (like not exiting main thread untill the queued object becomes 0, // i.e. no more objects to be processed, etc) // (This is readonly property) int qCount = mIOCP.QueuedObjectCount;
// Number of threads that are registered with this instance of Managed IOCP // (This is readonly property) int regThreadCount = mIOCP.RegisteredThreads;
Following are the advanced features of Managed IOCP that need to be used carefully.
Managed IOCP execution can be paused at runtime. When a Managed IOCP instance is paused, all the threads registered with this instance of Managed IOCP will stop processing the queued objects. Also, if the '
EnqueueOnPause' property of the
ManagedIOCP instance is
false (by default, it is
false), then no thread will be able to dispatch new objects onto the Managed IOCP instance queue. Calling
Dispatch on the
ManagedIOCP instance will throw an exception in the
Pause state. If the '
EnqueueOnPause' property is set to
true, then threads can dispatch objects onto the queue, but you need to be careful while setting this property to
true, as this will increase the number of pending objects in the queue, thus occupying more memory. Also, when the Managed IOCP instance is re-started, all the registered threads will suddenly start processing a huge number of objects thus creating greater hikes in the system resource utilization.
mIOCP.Pause();
Once paused, the
ManagedIOCP instance can be re-started using the
Run method.
mIOCP.Run();
The running status of the Managed IOCP instance can be obtained using the
IsRunning property:
bool bIsRunning = mIOCP.IsRunning;
You can retrieve the
System.Threading.Thread object of the thread associated with the
IOCPHandle instance, from its property named '
OwningThread'.
I provided two demo applications with similar logic. The first is implemented using Managed IOCP, the other using native Win32 IOCP. These two demo applications perform the following steps:
ManagedIOCPinstance or native Win32 IOCP.
ManagedIOCPinstance or native Win32 IOCP until the specified number of objects are completed.
The Sonic.Net (
ManagedIOCP) demo application additionally demonstrates the following features of Managed IOCP that are unavailable in the Win32 IOCP:
Below is the image showing both the demo applications after their first cycle of object processing:
Demo application results
As you can see in the above figure, Managed IOCP gives the same speed (slightly even better) as native Win32 IOCP. The goal of these two demo applications is _not_ to compare the speed or features of Win32 IOCP with that of Managed IOCP, but rather to highlight that Managed IOCP provides all the advantages of native Win32 IOCP (with additional features) but in a purely managed environment.
I tested these two demo applications on a single processor CPU and a dual processor CPU. The results are almost similar, in the sense the Managed IOCP is performing as good as (sometimes performing better than) native Win32 IOCP.
Below are the details of the files included in the article's Zip file:
Sonic.Net. All the classes that I described in this article are defined within this namespace. The folder hierarchy is described below:
Sonic.Net | --> Assemblies | --> Solution Files | --> Sonic.Net | --> Sonic.Net Console Demo | --> Sonic.Net Demo Application
The Assemblies folder contains the Sonic.Net.dll (contains the shows the usage of the Managed IOCP ThreadPool, which is explained in my Managed I/O Completion Ports - Part 2 article. This demo uses a file that will be read by the ThreadPool threads. Please change the file path to a valid one on your system. The code below shows the portion in.
This section discusses the how and why part of the core logic that is used to implement Managed IOCP.
Managed IOCP provides a thread safe object dispatch and retrieval mechanism. This could have been achieved by a simple synchronized queue. But with synchronized queue, when a thread (thread-A) dispatches (enqueues) an object onto the queue, for another thread (thread-B) to retrieve that object, it has to continuously monitor the queue. This technique is inefficient as thread-B will be continuously monitoring the queue for arrival of objects, irrespective of whether the objects are present in the queue. This leads to heavy CPU utilization and thread switching in the application when multiple threads are monitoring the same queue, thus degrading the performance of the system.
Managed IOCP deals with this situation by attaching an auto reset event to each thread that wants to monitor the queue for objects and retrieve them. This is why any thread that wants to wait on a Managed IOCP queue and retrieve objects from it has to register with the Managed IOCP instance using its '
Register' method. The registered threads wait for the object arrival and retrieve them using the '
Wait' method of the
IOCPHandle instance. The
IOCPHandle instance contains an
AutResetEvent that will be set by the Managed IOCP instance when any thread dispatches an object onto its queue. There is an interesting problem in this technique. Let us say that there are three threads, thread-A dispatching the objects, and thread-B and thread-C waiting on object arrival and retrieving them. Now, say if thread-A dispatches 10 objects in its slice of CPU time. Managed IOCP will set the
AutoResetEvent of thread-B and thread-C, thus informing them of the new object arrival. Since it is an event, it does not have an indication of how many times it has been set. So if thread-B and thread-C just wake up on the event set and retrieve one object each from the queue and again waits on the event, there would be 8 more objects left over in the queue unattended. Also, this mechanism would waste the CPU slice given to thread-B and thread-C as they are trying to go into waiting mode after processing a single object from the Managed IOCP queue.
So in Managed IOCP, when thread-B and thread-C call the '
Wait' method on their respective
IOCPHandle instances, the method first tries to retrieve an object from the Managed IOCP instance queue before waiting on its event. If it was able to successfully retrieve the object, it does not go into wait mode, rather it returns from the
Wait object. This is efficient because there is no point for threads to wait on their event until there are objects to process in the queue. The beauty of this technique is that when there are no objects in the queue, the
IOCPHandle instance
Wait method will suspend the calling thread by waiting on its internal
AutoResetEvent, which will be set again by the Managed IOCP instance '
Dispatch' method when thread-A dispatches more objects.
CAS is a very familiar term in the software community, dealing with multi-threaded applications. It allows you to compare two values, and update one of them with a new value, all in a single atomic thread-safe operation. In Managed IOCP, when a thread successfully grabs an object from the IOCP queue, it is considered to be active. Before grabbing an available object from the queue, Managed IOCP checks if the number of currently active threads is less than the allowed maximum concurrent threads. In case the number of current active threads is equal to the maximum allowed concurrent threads, then Managed IOCP will block the thread, trying to receive the object from the IOCP queue. To do this, Managed IOCP has to follow the logical steps as mentioned below:
In the above logic, step-3 consists of two operations, comparison and assignment. If we perform these two operations separately in Managed IOCP, then for instance, thread-A and thread-B might both reach the conditional expression with the same would-be value for active threads. If this value is less than or equal to the maximum number of allowed concurrent threads, then the condition will pass for both the threads, and both of them will assign the same would-be value for the active threads. Though the active thread count may not increase in this scenario, the actual number of physically active threads will be more than the desired maximum number of concurrent threads, as in the above scenario both the threads think that they can be active.
So Managed IOCP performs this operation as shown below:
CAS(ref activethreads variable, would-be value of active threads, current value of active threads stored in a local variable in step 1). Come out of the method if the would-be value is greater than the maximum number of allowed concurrent threads.
CASreturns
falsethen go to step 1.
In the above logic, the CAS operation supported by the .NET framework (
Interlocked.CompareExchange) is used to assign the new would-be value to active threads only if the original value of active threads has not been changed since the time we observed (stored in the local variable) it before proceeding to our compare and decide step. This way, though two threads might pass the decision in step-4, one of them will fail in the CAS operation thus not going into active mode. Below is the active threads increment method extracted from the
ManagedIOCP class implementation:
internal bool IncrementActiveThreads() { bool incremented = true; do { int curActThreads = _activeThreads; int newActThreads = curActThreads + 1; if (newActThreads <= _concurrentThreads) { // Break if we had successfully incremented // the active threads if (Interlocked.CompareExchange(ref _activeThreads, newActThreads,curActThreads) == curActThreads) break; } else { incremented = false; break; } } while(true); return incremented; }
I could have used a lock mechanism like
Monitor for the entire duration of the active threads increment operation. But since this is a very frequent operation in Managed IOCP, it would lead to heavy lock contention, and will decrease the performance of the system/application in multi-CPU environments. This technique that I used in Managed IOCP is generally called lock-free technique, and is used heavily to build lock-free data structures in performance critical applications.
Concurrency is one area that native Win32 IOCP excels in. It provides a mechanism where the maximum number of allowed concurrent threads can be set during its creation. It guarantees that at any given point of time, only the maximum allowed concurrent threads are running, and more importantly, it sees to it that _atleast_ the maximum allowed concurrent threads are _always_ notified/awakened to process completion events, if the number of threads using its IOCP handle is more than the maximum number of allowed concurrent threads.
Managed IOCP also provides the above two guarantees with more features like ability to modify the maximum number of allowed concurrent threads at runtime, which native Win32 IOCP does not provide. Managed IOCP provides this guarantee using the Compare-And-Swap (CAS) technique in its
Wait mode, as described in the previous section (4.2). When a thread waits on its
IOCPHandle instance to grab a Managed IOCP queue object, it first tries to become active by incrementing the active thread count using the CAS technique as mentioned in the previous section (4.2). It it fails to increment the number of active threads, it means that the number of current active threads is equal to the maximum number of allowed concurrent threads and the calling thread will go into
Wait mode. You can see this in the code implementation of the
IOCPHandle::Wait() method in ManagedIOCP.cs, in the attached source code ZIP file.
I could have used Win32 Semaphores to limit the maximum number of allowed concurrent threads. But it will defeat the whole purpose of Managed IOCP, being completely managed, as .NET 1.1 does not provide a Semaphore type. Also, I wanted this library to be as compatible as possible with the Mono .NET runtime. These are the reasons I did not explore the usage of semaphore for this feature. Maybe, I'll take a serious look at it if .NET 2.0 has a Semaphore object.
The second feature of IOCP as described in the beginning of this section is described in more detail in the next section (dispatching objects in Managed IOCP).
Managed IOCP maintains a queue of
IOCPHandle objects that are waiting on it to receive objects. When an object is dispatched to it by any thread, it pops out the next item (a
IOCPHandle object) in the queue. It then sets the
AutoResetEvent of the
IOCPHandle object that is popped out. Before doing that, Managed IOCP tries to evaluate whether the thread associated with the popped out
IOCPHandle can be used to process the object. It does it by checking whether the thread is in waiting mode using its
IOCPHandle instance's
Wait method, or the thread is running. If so, it sets its
AutoRestEvent so that the thread wakes up and processes the object if it is waiting on IOCP, or if it is running (which means it is not suspended for some reason).
If the thread is not waiting on
IOCPHandle and is also not in the running state, Managed IOCP assumes that the thread is waiting on some external resources other than its
IOCPHandle. It then simply decrements the active thread count, so that any other thread waiting on Managed IOCP or in running state could process objects dispatched to the Managed IOCP queue.
Below is the method that is used to choose a thread when an object is dispatched onto Managed IOCP:
private void WakeupNextThread() { bool empty = false; #if (DYNAMIC_IOCP) // First check if we should service this request from suspended // IOCPHandle queue // if ((_activeThreads < _concurrentThreads) && (_qIOCPHandle.Count >= _concurrentThreads)) { IOCPHandle hSuspendedIOCP = _qSuspendedIOCPHandle.Dequeue(ref empty) as IOCPHandle; if ((empty == false) && (hSuspendedIOCP != null)) { hSuspendedIOCP.SetEvent(); return; } } empty = false; #endif while (true) { #if (LOCK_FREE_QUEUE) IOCPHandle hIOCP = _qIOCPHandle.Dequeue(ref empty) as IOCPHandle; #else IOCPHandle hIOCP = null; try { if (_qIOCPHandle.Count > 0) hIOCP = _qIOCPHandle.Dequeue() as IOCPHandle; } catch (Exception) { } #endif // Note: // Checking for (hIOCP != null) is actually not required. // But we are getting a null IOCPHandle object from Lock-Free Queue. // I need to investigate this. // if ((empty == false) && (hIOCP != null)) { if (hIOCP.WaitingOnIOCP == true) { hIOCP.SetEvent(); break; } else { if (hIOCP.OwningThread.ThreadState != ThreadState.Running) { // Set the active flag to 2 and decrement the active threads // so that other waiting threads can process requests // int activeTemp = hIOCP._active; int newActiveState = 2; if (Interlocked.CompareExchange(ref hIOCP._active, newActiveState, activeTemp) == activeTemp) { DecrementActiveThreads(); } } else { // This is required because, Thread associated with hIOCP // may have got null out of ManagedIOCP queue, but still // not yet reached the QueuIOCPHandle and Wait state. // Now we had a dispatch and we enqueued the object and // trying to wake up any waiting threads. If we ignore this // running thread, this may be the only thread for us and we // will never be able to service this dispatch untill another // dispatch comes in. // hIOCP.SetEvent(); break; } } } else { // Do we need to throw this exception ??? // // throw new Exception("No threads avialable to handle the dispatch"); break; } } }
This technique provides the second aspect of native Win32 IOCP's concurrency management that guarantees _atleast_ the maximum number of allowed concurrent threads are _always_ notified/awakened to process queued objects, if the number of threads using the Managed IOCP instance is more than the maximum number of allowed concurrent threads.
I published part two of this article "Managed I/O Completion Ports - Part 2" that covers Managed IOCP with Lock-Free Queue and Lock-Free ObjectPool, ManagedIOCP based ThreadPool, and a generic Task Framework to be used by Managed IOCP ThreadPool. Here is the link for the article: Managed I/O Completion Ports - Part 2.
Managed IOCP (Sonic.Net assembly, but _not_ demo applications) conforms to core .NET specifications, and can be compiled and used on the Mono .NET runtime. I tested this with Mono 1.1.13.x, and it is working fine on both Windows and Linux platforms (Red Hat Enterprise Linux, RHEL 3).
Fixed an issue related to.
Sonic.Net v1.0 (class library hosting
ManagedIOCP and
IOCPHandle class implementations with a .NET synchronized
Queue for holding data objects in the Managed IOCP).
This software is provided "as is" with no expressed or implied warranty. I accept no liability for any type of damage or loss that this software may cause.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/cs/managediocp.aspx | crawl-002 | refinedweb | 3,882 | 61.26 |
Multiple Raspberry PI 3D Scanner 🙁.
🙂
Step 1: Setting up the hardware
So I first needed a rig to hold the Raspberry Pies. I initially did some testing with a big round circle I made out of wood, but this was really impractical to work with and hard to walk in and out of. So after some testing, I went with an “individual pole” design. Most programs that turn images into a 3D model need the images to be shoot from different angles. So I settled for each pole to hold 3 Raspberry Pies cameras. 🙂 🙁 th
e local IP address of each raspberry (the last 3 digits) for a prefix of the filename.
Here the python listening script I am using:
#!/usr/bin/python
import socket
import struct
import fcntl
import subprocess
import sys
MCAST_GRP = ‘224(‘256s’, ifname[:15])
)[20:24])
id = get_ip_address(‘eth0’)
ip1, ip2, ip3, ip4 = id.split(‘.’)
print ‘ID: ‘ + ip4
#create an options file, this file should containt the parameters for the raspistill image cmd
optionfile = open(‘/server/options.cfg’,’r’)
options = optionfile.readline()
optionfile.close()
print “optons: ” + options
while True:
data = sock.recv(10240)
data = data.strip()
if data == “reboot”:
print “rebooting…”
cmd = ‘sudo reboot’
pid = subprocess.call(cmd, shell=True)
else:
print “shooting ” + data
cmd = ‘raspistill -o /tmp/photo.jpg ‘ + options
pid = subprocess.call(cmd, shell=True)
print “creating directory”
cmd = ‘mkdir /server/3dscan/’ + data
pid = subprocess.call(cmd, shell=True)
print “copy image”
cmd = ‘cp /tmp/photo.jpg /server/3dscan/’ + data + “/” + data + “_” + ip4 + ‘photo name:’
n = sys.stdin.readline()
n = n.strip(‘\n’)
MCAST_GRP = ‘224.
For more detail: Multiple Raspberry PI 3D Scanner | http://projects-raspberry.com/multiple-raspberry-pi-3d-scanner/ | CC-MAIN-2018-05 | refinedweb | 269 | 60.72 |
As a lead maintainer of the Appium project, I was very excited to learn about TestProject building a test building and running tool on top of Appium and Selenium. Recently, I was even more excited to see that this tool was going free for public use! I’ve had a chance to play around with it quite a bit now, including putting together a webinar a while back on how to distribute your Appium tests using different TestProject agents.
But really, what I’m most interested in in this whole project is the fact that TestProject is trying to build a platform, not just a tool. The set of community addons is a great way to encourage lots of people to develop new functionality that plugs into TestProject, and to share their work with others. I recently built my first TestProject addon, and this post is all about the steps I took to do this!
The Idea
The basic idea I had for the addon was to implement a cross-platform method for triggering deep links across both iOS and Android apps. (If you’re not quite sure what deep links are, or how they are valuable for testing, check out this article on speeding up your tests with deep links from my newsletter and blog, Appium Pro). One of the downsides of the approach I used in that article was that it didn’t work on real physical iOS devices–only on simulators. Later on, Wim Selles contributed an article on a cross-platform method for opening deep links on all devices. It is this approach that I wanted to use in creating a TestProject addon.
(Want to skip all the words and check out the addon already? Just head over to the Deep Link Toolkit addon page.)
The addon looks like this:
Basically, all you do to use it is input the deep link URL you want to be activated, and you’re off to the races!
Developing the Addon
TestProject has a public SDK that you can use to develop addons. I chose to develop my addon in Java. As part of the TestProject Java SDK documentation, there is a section on addon development. This was an indispensable reference in developing the addon, as was the portion before on running TestProject tests from Java, which is necessary as part of the test process. I also found this very helpful tutorial by Petri Kainulainen, which filled in some of the gaps in my understanding based on the docs alone.
The first thing I did, of course, was to create an open source Git repo to host this addon (and potentially more addons I develop down the road), which you can find here.
The way creating addons work is by implementing a special interface for Web, iOS, or Android. This interface allows us to set parameters for the addon (that will show up in the TestProject recorder UI), and to define the behavior that will occur when the addon is run by the user, inside a method called
execute, which gives us access to the Appium driver and potentially other things we might need. For me, implementing the
execute function was pretty simple, especially for Android, since triggering a deep link on Android requires only two short Appium commands! Here’s the full Android class for the Android side of the deep link addon:
@Action(name = "Open Deep Link") public class OpenDeepLinkAndroid implements AndroidAction { @Parameter(description = "The deep link URL") public String url = ""; @Parameter(description = "The package of the app for the URL (optional, will default to the currently-running app)") public String pkg = ""; @Override public ExecutionResult execute(AndroidAddonHelper helper) throws FailureException { Helpers.validateURL(url); AndroidDriver driver = helper.getDriver(); if (pkg.equals("")) { pkg = driver.getCurrentPackage(); } driver.terminateApp(pkg); driver.executeScript("mobile: deepLink", ImmutableMap.of( "url", url, "package", pkg )); return ExecutionResult.PASSED; } }
In the snippet above, you can see how we give the addon a name, and set its parameters. Then, we use those parameters as well as the Appium driver attached to the device to terminate the currently-running app and then call the
mobile: deepLink command. That’s it! That’s all the code that powers this addon. You can, of course, check out the Android code on GitHub, as well as the iOS code, which is a bit more complex since we have to actually automate a bit of the mobile Safari UI.
Testing the Addon
Developing the addon is one thing, but I also needed to test it. For that, I used the test running portion of the TestProject SDK. Basically, I have access to a
Runner class which allows me to start test sessions, as long as I have my TestProject developer key handy. This class handles all the Appium session start for me, so I don’t have to worry about setting capabilities or anything like that. I just get back an Appium driver, which I can use in a test method. Here’s the test method I wrote for the Android version of the addon:
@Test void testOpenDeepLinkAction() throws Exception { // Create Action OpenDeepLinkAndroid action = new OpenDeepLinkAndroid(); action.url = "theapp://login/alice/mypassword"; // Run action runner.run(action); // Verify WebDriverWait wait = new WebDriverWait(driver, 10); wait.until(ExpectedConditions.presenceOfElementLocated(By.xpath("//*[contains(@text, \"You are logged in as alice\")]"))); }
I can directly instantiate my Android deep link class, and then I trigger it using
runner.run(). Assuming the action completes as expected, I’ve tested that my addon works! (You can see the full test class for the Android version of the test as well as the iOS version on GitHub).
Submitting the Addon
Once I was ready to upload my addon to TestProject, I had to “Create” the addon in the web UI. This opens up a wizard which prompts for various information and any permissions my addon might need beyond the Appium driver (for example, filesystem access, etc…). The most important part of this process was saving the manifest file which was generated by the wizard. I had to download this and put it in the
src/main/java/resources directory in my project. TestProject uses this file to get all the important metadata about the addon.
Finally, I had to actually compile, bundle, and upload the addon jarfile to TestProject. Luckily, the TestProject docs suggested a nice Gradle script I could drop into my project and use to zip the addon up into a distributable jarfile. To see the details, check out my project’s build.gradle file. Once I uploaded it to TestProject, I was able to immediately use it in my own projects (which I did, of course, to test out its integration with the TestProject UI). At that point, I also submitted the addon for review, so TestProject’s team could take a look at it and make sure I wasn’t some nefarious hacker! After that, it was available in the Addon directory.
All told, this was a pretty fun and easy way to encapsulate an interesting Appium technique, and make it available to others who can now use it without needing to understand any programming. I’ll definitely be dreaming up some more addons to contribute to the community. Happy testing! 😉 | https://blog.testproject.io/2019/12/03/how-i-made-a-new-testproject-addon-for-appium/ | CC-MAIN-2020-05 | refinedweb | 1,205 | 59.43 |
Simplify Kubernetes App Deployments With Cloud Native Buildpacks and kapp
Tim Downey
・9 min read
In recent years the Kubernetes wave has taken the software world by storm. And for good reason. Kubernetes makes it easy for developers to build robust distributed systems. It provides powerful building blocks for deploying and managing containerized workloads. This makes it an enticing platform for the sprawling microservice "apps" of today.
Unfortunately, all this power and flexibility carries with it enormous complexity. Kubernetes is not a PaaS (Platform as a Service) like Heroku or Cloud Foundry. It does not build your app from source or abstract away all the gritty details. It does, however, provide many of the necessary primitives for building a PaaS.
Over the past three years, I've worked as a full-time contributor to Cloud Foundry. During that time I've come to appreciate the simplicity of the
cf push experience.
I've grown fond of pushing raw source code and using buildpacks. I enjoy the ease of creating and mapping routes. I like that my app logs are a mere
cf logs away. That is if you're deploying a stateless 12 Factor app. If you're not -- or even if you just need to go a bit off the rails of your PaaS -- the platform can hinder more than it helps. It's these use cases where Kubernetes shines.
You don't have to completely forgo the PaaS experience you're used to, however. In this post we'll take a look at two tools:
pack and
kapp that help bring some of that that PaaS goodness to Kubernetes.
Prerequisites
If you want to follow along, you'll need the following:
- Access to a Kubernetes cluster
- Install
kubectland authenticate with your cluster (follow these docs)
- Install
docker(install the Community Edition)
- Install
pack(installation instructions)
- Install
kapp(installation instructions)
sinatra-k8s-sample
Cluster Configuration
I used an inexpensive single-node managed Kubernetes cluster from Digital Ocean. It uses a Digital Ocean load balancer and, all in all, I expect it to cost about $20 a month1. I have DNS configured to direct traffic on my domain
*.k8s.downey.dev to the load balancer's IP and the LB itself points to an NGINX server in the cluster.
I am using ingress-nginx as my Ingress Controller and installed it using the GKE ("generic") installation steps here.
Since I'm using a
.dev domain I need to have valid TLS certs since
.dev domains are on the HSTS preload list for mainstream browsers. To automate this I used cert-manager with the following
ClusterIssuer.
apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: letsencrypt-prod namespace: cert-manager spec: acme: email: email@example.com # replace with your own http01: {} privateKeySecretRef: name: letsencrypt-prod server: solvers: - http01: ingress: class: nginx
I loosely followed this blog post for the Contour Ingress Controller to set that up.
1$20 for a k8s cluster is still nothing to sneeze at, so if you want to help me pay for it sign up with this referral link 🤑
Our App The Belafonte
Throughout this post we'll be deploying a simple, stateless Ruby app called
belafonte. It is named after The Belafonte, the esteemed research vessel helmed by oceanographer Steve Zissou and it will carry us safely on our Kubernetes journey. If you want to follow along, simply clone the app.
git clone
To make things a bit more interesting,
belafonte relies on a microservice to feed it UUIDs for display. This is a bit contrived, but that's ok. We'll be deploying a small Python app called httpbin to serve this purpose.
At the end of the day, navigating to the app will return a simple webpage containing some information about the Kubernetes pod that it is deployed on as well as our artisanally crafted UUID.
Creating Container Images Using Cloud Native Buildpacks
Kubernetes is a platform for running containers. That means to run our app, we first must create a container image for it.
Traditionally this would mean creating a
Dockerfile to install ruby, download all of our dependencies, and more. For simple applications this can wind up causing a lot of overhead and maintenance. Fortunately, there's another option.
As I mentioned earlier, buildpacks are one of my favorite features of PaaSes. Just push up your code and let the package handle making it runnable. Lucky for us, developers from Heroku and Cloud Foundry have been working on a Cloud Native Computing Foundry project called Cloud Native Buildpacks that lets anyone have this power.
We can use the
pack CLI to run our code against Heroku's Ruby Cloud Native buildpack with the following command (you may need to
docker login first to publish).
pack build downey/sinatra-k8s-sample --builder heroku/buildpacks --buildpack heroku/ruby --publish
This will produce an OCI container image and publish it to DockerHub (or a container registry of your choosing). Wow!
One thing to note is that, at least for the Ruby buildpack I used, I did not get a default command for the image that worked out of the box. To get it working on Kubernetes I had to first invoke the Cloud Native Buildpack launcher (
/cnb/lifecycle/launcher) to load up the necessary environment (adding
rackup,
bundler, etc. to the
$PATH). The command I ended up using on Kubernetes to run the image looked like this:
command: ["/cnb/lifecycle/launcher", "rackup -p 8080"]
The YAML Configuration
Workloads are typically deployed to Kubernetes via YAML configuration files. Within the
deploy directory of
sinatra-k8s-samples you'll find the necessary files for deploying the
belafonte app, the
httpbin "microservice" it depends on, and a file declaring the
belafonte namespace for them all to live under.
Specifically within
deploy/belafonte you'll find:
deployment.yaml
service.yaml
ingress.yaml
deployment.yaml
The
deployment.yaml defines the Deployment for our app. This is where we'll tell Kubernetes how to run our app. It contains properties that declare how many instances of our app we want running, how updates should be carried out (e.g. rolling updates), and where to download the image for our container.
... containers: - name: belafonte image: docker.io/downey/sinatra-k8s-sample:latest env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: UUID_SERVICE_NAME value: httpbin ports: - containerPort: 8080 name: http command: ["/cnb/lifecycle/launcher", "rackup -p 8080"] ...
The snippet above shows that we'll be using the image we just built with
pack and that we're setting some environment variables on it.
service.yaml
The
service.yaml file contains configuration for setting up a Kubernetes Service. It will tell Kubernetes to allocate us an internal Cluster IP and Port that can be used to hit our app.
You can view services using
kubectl get services. Once we deploy our app, we'll see the following services in the
belafonte namespace.
kubectl -n belafonte get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE belafonte ClusterIP 10.245.30.60 <none> 8080/TCP 3d3h httpbin ClusterIP 10.245.157.166 <none> 8080/TCP 3d3h
ingress.yaml
Since our cluster has an Ingress Controller installed (
ingress-nginx), we can define an Ingress via
ingress.yaml.
apiVersion: extensions/v1beta1 kind: Ingress metadata: name: belafonte annotations: kubernetes.io/tls-acme: "true" cert-manager.io/cluster-issuer: "letsencrypt-prod" namespace: belafonte spec: tls: - secretName: belafonte hosts: - belafonte.k8s.downey.dev rules: - host: belafonte.k8s.downey.dev http: paths: - backend: serviceName: belafonte servicePort: 8080
At its most basic, this configuration instructs the ingress NGINX to direct traffic destined for
belafonte.k8s.downey.dev to the
belafonte service defined by
service.yaml. Since we're using
cert-manager, the annotations on it will instruct cert-manager and the
letsencrypt-prod ClusterIssuer to issue and serve LetsEncrypt TLS certs for this domain. This part is only required if you want to support
https, but it's simple enough so I'd recommend it.
Similar YAML config exists for
httpbin in the
deploy/httpbin directory (minus the Ingress since we do not want it externally reachable).
Installing the App with kapp
All of that YAML declaritively represents the desired state we want our applications to be in. If you just
kubectl apply -f <file> every file in that
deploy directory you'll get a running
belafonte app and an
httpbin microservice to back it. Unfortunately,
kubectl apply can be a pretty blunt tool. It's hard to tell what it is going to do without hand inspecting each YAML file. Options like
--dry-run and commands like
kubectl diff exist to help improve things, but those used to doing
git push heroku or
cf push may still desire a nicer UX.
This is where the Kubernetes Application Management Tool, or
kapp, comes in. I like
kapp because it provides something a bit closer to that PaaS experience and you don't have to install anything special on the k8s cluster.
Deploying
With
kapp we can
kapp deploy our entire deploy directory and deploy the app all in one go. If you're following along, go ahead and run
kapp deploy -a belafonte -f deploy and check it out!
$ kapp deploy -a belafonte -f deploy Changes Namespace Name Kind Conds. Age Op Wait to Rs Ri (cluster) belafonte Namespace - - create reconcile - - belafonte belafonte Deployment - - create reconcile - - ^ belafonte Ingress - - create reconcile - - ^ belafonte Service - - create reconcile - - ^ httpbin Deployment - - create reconcile - - ^ httpbin Service - - create reconcile - - Op: 6 create, 0 delete, 0 update, 0 noop Wait to: 6 reconcile, 0 delete, 0 noop Continue? [yN]:
It will show what changes it expects to do and prompt first before applying. It then applies the changes, tracks all the resources that were created as part of this "app" (as specified per the
-a flag), and will wait until everything is running (by checking the
status of the resources) before exiting. It is even aware of some of the typical ordering requirements of config. For example, it is smart enough to create
namespaces and CRDs before applying config that might depend on them. It stores this logical "app" definition within a
ConfigMap so the definition of the app is persisted on the Kubernetes api. You can switch computers or come back days later and
kapp will still recognize your app.
Fetching Logs
To be fair, getting logs with
kubectl isn't too tough. Just
kubectl -n belafonte logs -l app=belafonte -f and we can stream them out for pods with the
app=belafonte label. As the number of apps you want to stream logs for grows, however, that label selector can become cumbersome. Streaming logs is a bit friendlier with
kapp. Just run
kapp logs -a belafonte -f and you'll stream logs from every pod that
kapp deployed. In our case that's both
httpbin and
belafonte.
$ kapp logs -a belafonte -f # starting tailing 'httpbin-57c4c9f6c6-662rh > httpbin' logs # starting tailing 'belafonte-ccc57688b-pqbkj > belafonte' logs httpbin-57c4c9f6c6-662rh > httpbin | [2019-11-07 05:22:13 +0000] [1] [INFO] Starting gunicorn 19.9.0 httpbin-57c4c9f6c6-662rh > httpbin | [2019-11-07 05:22:13 +0000] [1] [INFO] Listening at: (1) httpbin-57c4c9f6c6-662rh > httpbin | [2019-11-07 05:22:13 +0000] [1] [INFO] Using worker: sync httpbin-57c4c9f6c6-662rh > httpbin | [2019-11-07 05:22:13 +0000] [8] [INFO] Booting worker with pid: 8 # ending tailing 'httpbin-57c4c9f6c6-662rh > httpbin' logs belafonte-ccc57688b-pqbkj > belafonte | [2019-11-07 05:22:15] INFO WEBrick 1.4.2 belafonte-ccc57688b-pqbkj > belafonte | [2019-11-07 05:22:15] INFO ruby 2.5.5 (2019-03-15) [x86_64-linux] belafonte-ccc57688b-pqbkj > belafonte | [2019-11-07 05:22:15] INFO WEBrick::HTTPServer#start: pid=1 port=8080
Deleting the Apps
When you're done experimenting,
kapp makes cleaning up convenient as well. Simply run the following to delete everything that was deployed.
$ kapp delete -a belafonte Changes Namespace Name Kind Conds. Age Op Wait to Rs Ri belafonte belafonte Deployment 2/2 t 22s delete delete ok - ^ httpbin Deployment 2/2 t 22s delete delete ok - Op: 0 create, 2 delete, 0 update, 0 noop Wait to: 0 reconcile, 2 delete, 0 noop Continue? [yN]:
One gotcha, though, in the case of the LetsEncrypt certs that
cert-manager provisioned for us. LetsEncrypt rate limits certificate requests for a particular domain to 50 per month. If you plan on repeatedly churning these certs (like I did while writing this post) you'll quickly hit those limits. Luckily
kapp supports filters so you could do something like
kapp delete -a belafonte --filter-kind=Deployment to only delete the deployments and leave the
Ingress definitions (and associated certs) around.
Wrapping Up
So if you enjoy the app developer experience of a Platform, but require the power and flexibility of Kubernetes these tools are definitely worth a look. If you're interested in learning more, I recommend checking out the following resources:
- Heroku post on building Docker images with Cloud Native Buildpacks
- TGI Kubernetes 079 - ytt and kapp
Happy sailing! ⛵️
How to Practice for a Lit 🔥 Presentation
How to practice to deliver a powerful technical presentation.
| https://dev.to/downey/simplify-kubernetes-app-deployments-with-cloud-native-buildpacks-and-kapp-339b | CC-MAIN-2019-51 | refinedweb | 2,191 | 54.32 |
asp net write to example is a asp net write to document that shows the process of designing asp net write to format. A well designed asp net write to example can help design asp net write to example with unified style and layout.
asp net write to example basics
When designing asp net write to document, it is important to use style settings and tools. Microsoft Office provide a powerful style tool to help you manage your asp net write asp net write to styles may help you quickly set asp net write to titles, asp net write to subheadings, asp net write to section headings apart from one another by giving them unique fonts, font characteristics, and sizes. By grouping these characteristics into styles, you can create asp net write to documents that have a consistent look without having to manually format each section header. Instead you set the style and you can control every heading set as that style from central location. you also need to consider different variations: asp net export excel, asp net export excel word, asp net import excel, asp net import excel word, asp net export to excel, asp net export to excel word, vb net write to excel, vb net write to excel word
Microsoft Office also has many predefined styles you can use. you can apply Microsoft Word styles to any text in the asp net write asp net write to documents, You can also make the styles your own by changing how they look in Microsoft Word. During the process of asp net write to style design, it is important to consider different variations, for example, asp net write to excel c, asp net write to excel c word, asp net ajax export excel, asp net ajax export excel word, asp net write to excel file, asp net write to excel file word.
asp net write to example
export data to excel in asp net here, in this article i am going to show you how to export data to an excel file using c and vb in asp.net. however, using microsoft s interop namespace, we can export data to any version of excel. in addition, we will show you how to autofomat an excel sheet dynamically in asp how to export data to excel from an asp net application avoid this is a common task for asp.net developers. you have a web application where you expose data from a database, web service, this is how to export data from asp net database into excel moderators, if this should go into another forum that is more appropriate, please feel free to place it there. ive seen a lot of examples of how to a better way to export gridviews to excel this article shows how to export the content of an asp.net gridview to excel without showing the user a warning prompt when they try to open c i use the epplus package, which you can install via nuget. it allows you to load data onto an excel worksheet directly from your datatable, and it gridview export to excel how to export asp.net gridview to microsoft excel file. export gridview to excel in asp net with formatting using c and here mudassar ahmed khan has explained with an example and attached sample code, how to export gridview to excel file in asp.net with write data to excel file xls and xlsx in asp net here mudassar ahmed khan has explained how to write insert data to excel file sheet .xls and .xlsx in asp.net using c and vb.net. how to export data in a datagrid on an asp net webform to use this step by step guide to populate a datagrid web server control on an asp .net webform and then export the contents of the datagrid to microsoft excel. export gridview data to excel in asp net in this tip, im trying to explain how to create or export gridview to an ms excel file in asp.net using c . i have also placed a button on the | http://www.slipbay.com/asp-net-write-to-excel/ | CC-MAIN-2017-17 | refinedweb | 686 | 61.9 |
iofunc_utime()
Update time stamps
Synopsis:
#include <sys/iofunc.h> int iofunc_utime( resmgr_context_t* ctp, io_utime_ut:
- type
- _IO_UTIME.
- combine_len
- If the message is a combine message, _IO_COMBINE_FLAG is set in this member.
- cur_flag
- If set, iofunc_utime() ignores the times member, and set the appropriate file times to the current time.
- times
- A utimbuf structure that specifies the time to use when setting the file times. For more information about this structure, see utime().
Returns:
- EACCES
- The client doesn't have permissions to do the operation.
- EFAULT
- A fault occurred when the kernel tried to access the info buffer.
- EINVAL
- The client process is no longer valid.
- ENOSYS
- NULL was passed in info.
- EOK
- Successful completion.
- | https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/i/iofunc_utime.html | CC-MAIN-2020-10 | refinedweb | 113 | 69.79 |
How do you count lines in a file? And how do you read to the different lines? I know this question has been asked before but I never found a straight answer.
This is a discussion on Counting Lines within the C++ Programming forums, part of the General Programming Boards category; How do you count lines in a file? And how do you read to the different lines? I know this ...
How do you count lines in a file? And how do you read to the different lines? I know this question has been asked before but I never found a straight answer.
hope this helps:
Code:#include <iostream> #include <string> #include <fstream> using namespace std; int main() { ifstream read; // create our stream string content; // create a string to hold the content of the file read.open("file.ext"); // open the file (make sure it exists) while(read >> content) // create a while loop to read the file { cout << content; // print the file's contents } read.close(); // close the file return 0; }
I am against the teaching of evolution in schools. I am also against widespread
literacy and the refrigeration of food.
There are several solutions. You can read in every line and keep count. Another solution is to read in raw data and find "\n."
Kuphryn
your code doesn't quite make since, abrege, i think read and content are the same things... but i'm not sure.
Code:ifstream infile; infile.open("SomeTextfile.txt"); int lines=0; string strLine; if(infile.good()) { while(!infile.eof() && infile.good()) { getline(infile,strLine); lines++; } } infile.close();
It's like what kuphryn said, loop until you reach '\n' and that's a line, make a counter to keep track of the number of '\n's, and that's it...
does this answer your question.
Hi, I think this should help:
I just learnt this stuff from Bruce Eckels Thinking in C++I just learnt this stuff from Bruce Eckels Thinking in C++Code://using vectors to open file and count lines #include<iostream> #include<fstream> #include<string> #include<vector> int main() { vector<string> v; string line; ifstream in("something"); int i; int count=0; while(in.getline(in, line)) v.push_back(line); for(int i=0; i<v.size(); i++){ cout << v[i] << endl; count += i; } cout << "Total lines: " << count; return 0; }
I hope it helped.
"Cut the foreplay and just ask, man." | http://cboard.cprogramming.com/cplusplus-programming/28831-counting-lines.html | CC-MAIN-2014-41 | refinedweb | 400 | 84.07 |
Main Menu
Topics
Industries
Trending Now
25 Must-Read Nonprofit IT Blogs 2016
You know and love our Must-Read IT Blogs lists, but now, say hello to the nonprofit side.
Small businesses typically share information and resources among desktops, or perhaps designate a particular desktop as a server. This approach may save money at first, the costs in performance, security and management quickly add up as the business grows. The idea of an entry-level server can make everyone in a small business nervous: The owner worries about the cost, the users worry about the complexity and the technology manager worries about inadequate performance and rapid obsolescence.
HP offers an answer to all these concerns by packaging its latest entry-level server, the ProLiant ML115, with Microsoft Windows Small Business Server (SBS) 2003 R2 Standard Edition. Called “Smart Buy,” this combination brings true server performance to file and print services, and adds applications and management tools that bring immediate productivity gains for users and technology managers.
Based on an AMD Opteron dual-core processor, the ProLiant ML115 is designed to run out of the box and grow with your business. The Smart Buy version of the ML115 server is configured with a single Seagate Barracuda 160 gigabytes SATA hard drive and 1GB of ECC RAM. The cleanly designed, quiet and well-ventilated case can be configured with up to 8GB of RAM and four non hot-swap Serial ATA drives (up to 2 terabytes), which can be configured in RAID 0, 1, 5 or 0+1 array through the onboard nVidia nForce 590 platform. Six external USB ports provide plenty of connectivity, with two internal USB ports for optional tape backup and floppy drives. The onboard display and gigabit network adapters leave the 2 PCI-32 and 2 PCI-Express expansion slots open.
HP’s pre-installation of Windows SBS 2003 R2 made this one of the easiest and by far the fastest server setup I’ve ever experienced. Once I determined the system partition size, the SBS setup wizard led me through a series of surprisingly brief steps that included the company name, domain name and some network-related questions. Within 20 minutes, the server was up and running with the Active Directory domain configured, DNS, DHCP and Exchange running, and a “To Do” list, part of the larger Server Management console, on the screen and ready for more detailed configuration.
The HP ML115 is a very capable file and print server, handling GBs of data transfers and open files with ease. Having a server that’s fast and available at all times for file access and sharing makes things less complex for end users. Simply, this server as configured could handle more than 100 users. Windows SBS limits the number of users working on the server, but adds many tools that simplify their work and boost their productivity.
Exchange 2003 SP2 is ready to go after the server setup. You need only to include a mailbox with each user’s account and they can work with the greatly improved Outlook Web Access or, if you prefer, the Outlook 2003 client application, which is included with Windows SBS. Users may also send and receive faxes through the server, which is as simple to set up as a network printer.
Windows SBS configures a SharePoint internal Web site and, once you’ve personalized it for your company, this provides users with an easy way to share information. I’ve never looked at SharePoint until now, and the default configuration allowed my test clients to get right to work sharing files, posting announcements and scheduling projects. SharePoint also makes it easy to find and install approved applications stored on the server.
The tools that may have the most noticeable impact for end users are related to remote access. Authorized end users can be configured for secured remote access to the network, whether through OWA or through a virtual private network to other network resources, including their own computers. ActiveSync is included for support of Windows mobile devices.
With all these end-user advantages, it may sound as if Windows SBS 2003 R2 adds to the technology manager’s workload. In fact, it simply takes all the things your company is probably doing already — file sharing, e-mail, scheduling projects, printer and fax sharing — and centralizes them. This allows easier and more effective management, greater security and reliability, long-term lowered costs and more efficient use of the information technology team’s time.
When you have gone through as many server installs as I have, you can’t help but be a little leery of a 15-minute server setup that asks few questions yet includes Active Directory, Exchange, DNS and DHCP. Yet, I was pleased (and somewhat surprised) to see that the Windows SBS 2003 setup created a well-secured network. For example, based on the company name I supplied, “TESTGROUP,” the Active Directory namespace was set as TESTGROUP.LOCAL, which helped secure the internal network.
Once you have gone through the initial setup, you will have to run the included update to bring Windows SBS 2003 up to R2. Then like any Windows server, you will have to run additional updates to get the operating system and applications current, though this is somewhat easier with SBS Update Services, which I’ll describe shortly.
The Server Management console in Windows SBS 2003 R2 provides a series of wizards that simplify most of the usual network management tasks. Back to the “To Do” list that I mentioned earlier, everything you would first want to do once the server fires up is right on the list:
Each item on the list opens a wizard. These wizards, all part of the integrated Server Management console, guide the technology manager through the steps necessary to complete each task, and provide additional information about the steps.
The Server Management console is divided into two main groups — Standard Management and Advanced Management. Standard Management includes the simplified tools of SBS that monitor and manage common IT tasks. Two of these tools, Backup and Update Services, really stand out.
The Backup application works with Volume Shadow Copy, which allows you to retrieve previous versions of files and to back up open files. This allows reliable online backups of Exchange and Web site data. The Backup application works with tape drives, folders or external disks. The Update Services is a simplified version of Windows Server Update Services, which allows you to control and monitor the updates to all the computers on your network. The Monitoring and Reporting wizard indicates which updates need to be reviewed and installed, and reports any important events that require your attention. Green checkmarks quickly let you know that things are good, or tell you what needs to be done to get those checkmarks. I found that the Standard Management tools are easy — and, dare I say, fun — to use.
Of course, ease of use is not necessarily the best criterion for server management. The Advanced Management section includes additional console snap-ins that are more like the traditional Microsoft Management Consoles for more detailed control of server applications and services.
In fact, if you prefer you can bring up any of the MMC’s that you may already be familiar with in Windows Server 2003, including the Exchange System Manager. I expected that SBS would provide less functionality than Windows Server 2003, or that certainly Exchange or SharePoint would be significantly restricted. Instead, SBS retains all the power of the operating system and these applications, and adds simplified tools to ease network management tasks. The only apparent restriction is on the number of users it supports, though at 75 users that’s still a healthy number for a small business.
OK, so the Windows SBS 2003 server sets up quickly, the pre-configuration of key settings are well thought out and its management tools are simple and intuitive. Now, I wanted to see what the HP ML115 could handle as configured. I used the Exchange Server Load Simulator (LoadSim2003), which validates Exchange and server configurations and simulates user activity. I created 50 Outlook users with 100-plus megabyte mailboxes, which took less than 30 minutes to complete, and then set all 50 to work in LoadSim2003’s default “Heavy User” mode.
Based on Exchange best practices, this is a much bigger load than the server hardware should be expected to handle but, except for some spikes in latency, the server ran through the simulation with no notable problems. Increasing the Exchange load and adding 5GB of data transfers did create bottlenecks at the hard drive and network card, as expected, yet still resulted in only one error out of thousands of transactions. With additional RAM and hard drives, the ML115 could certainly manage 75 SBS users.
There’s no question that if you’re going to run Exchange, then you will want to take advantage of the hardware configuration options of the HP ML115 Smart Buy server. To more fully utilize the power of the Opteron processor, increase the system RAM to 2GB. Add three hard drives to create a RAID 0+1 or RAID 5 array for greater performance and redundancy.
The server ran through continued but more reasonable Exchange load simulations for a week. During that time, it also provided daily reports on the server’s condition and updates on the status of the network computers, and ran various maintenance tasks. Even under load, the server was so quiet that it would be barely noticeable on an office desk. A few minutes a day were all that was needed to check on things.
Although setting up the domain and Exchange through Windows SBS 2003 was remarkably easy, if you have not performed either of these functions before, then you will probably want someone with experience to review and modify the installation. I made several changes within Windows Server and Exchange that improved performance over the default. Again, these are not scaled-down versions of Windows Server or Exchange 2003, so it’s a good idea to be familiar with these technologies. You will also want to ensure that network file and application permissions are set to match your security model.
There are a couple of notable restrictions within Windows SBS beyond the 75 end-user limit. The Windows SBS server must be the root of an Active Directory forest, and domain trusts are not allowed. Further, the Exchange application cannot be part of a larger Exchange organization. Therefore, these two limitations make Windows SBS server unsuitable to run a remote office within a larger existing domain.
However, within the SBS domain, you can add another domain controller, so you do have the option for redundancy or a remote office. You can install another server to run Exchange – you would have to buy a separate Exchange server license, but the client licenses are valid. If your business grows beyond 75 end users, Microsoft offers a Transition Pack that removes the limitations of SBS, though it also removes those nifty SBS management tools. For a small business, these seem to be reasonable limitations that simply reduce the complexities of a full server environment, while allowing for the transition to such an environment.
The HP ProLiant ML115 Smart Buy makes the establishment and maintenance of a small network domain about as easy as I could have imagined, and any technology manager of a small business workgroup will immediately appreciate the improved performance and centralized management of this package. Users will enjoy better performance and less time taken away for maintenance. Small business owners will appreciate the low price point of the ML115, even if you add many of the additional options, as well as the increased security and reliability of business information. All told, the HP ProLiant ML115 Smart Buy is a big win for small business. | http://www.biztechmagazine.com/article/2007/05/more-file-and-print | CC-MAIN-2016-36 | refinedweb | 1,983 | 56.79 |
Python may not be required for performing computer vision with or without OpenCV, but it does make exploration easier. There are unfortunately limits to the magic of Python, contrary to glowing reviews humorous or serious. An active area of research that is still very challenging is extracting world geometry from an image, something very important for robots that wish to understand their surroundings for navigation.
My understanding of computer vision says the image segmentation is very close to an answer here, and while it is useful for robotic navigation applications such as autonomous vehicles, it is not quite the whole picture. In the example image, pixels are assigned to a nearby car, but such assignment doesn’t tell us how big that car is or how far away it is. For a robot to successfully navigate that situation, it doesn’t even really need to know if a certain blob of pixels correspond to a car. It just needs to know there’s an object, and it needs to know the movement of that object to avoid colliding with it.
For that information, most of today’s robots use an active sensor of some sort. Expensive LIDAR for self driving cars capable of highway speeds, repurposed gaming peripherals for indoor hobby robot projects. But those active sensors each have their own limitations. For the Kinect sensor I had experimented with, the limitation were that it had a very limited range and it only worked indoors. Ideally I would want something using passive sensors like stereoscopic cameras to extract world geometry much as humans do with our eyes.
I did a bit of research to figure out where I might get started to learn about the foundations of this field, following citations. One hit that came up frequently is the text Multiple View Geometry in Computer Vision (*) I found the web page for this book, where I was able to download a few sample chapters. These sample chapters were enough for me to decide I do not (yet) meet the prerequisites for this class. Having a robot make sense of the world via multiple cameras and computer vision is going to take a lot more work than telling Python to
import vision.
Given the prerequisites, it looks pretty unlikely I will do this kind of work myself. (Or more accurately, I’m not willing to dedicate the amount of study I’d need to do so.) But that doesn’t mean it’s out of reach, it just means I have to find some related previous work to leverage. “Understand the environment seen by a camera” is a desire that applies to more than just robotics.
(*) Disclosure: As an Amazon Associate I earn from qualifying purchases. | https://newscrewdriver.com/2020/07/12/i-do-not-yet-meet-the-prerequisites-for-multiple-view-geometry-in-computer-vision/ | CC-MAIN-2021-17 | refinedweb | 455 | 58.32 |
Tips for Developers
Questions
Edu Software
- How do I develop an educational Application?
- How can I figure out wether my application name is a trademark?
- What are prefered code formatting rules?
KDE/Qt Framework
- How do I manage bug reporting for my KDE program?
- What is the use of
#include "myfile.moc"?
- How do I implement a Highscore table?
- How do I use pictures?
Internationalization (i18n) Of Applications
- What KDE reference do you suggest for i18n?
- How do I add text to pictures properly?
- Where shall I store language sensitive data?
Answers
Edu Software
How do I develop an educational Application?
- Choose the age category you want to design software for
- Chooose the subject
- Ask for a subdirectory to be created on the web server so that the project's web site can be created.
- Please ensure the software stays within the common look and feel of KDE Educational software
How can I figure out wether my application name is a trademark?
The name you choose for your application must not be a trademark. You can first
run a Google search on it.
Then, to search for European trademarks, you can use:
For German trademarks, you can use (only on Mo-Fr between 7:30am-6pm MET):
What are prefered code formatting rules?
Here are some suggested guidelines:
- Use either spaces or tab to indent your code but be consistent by using the same all along
- Insert a space after a comma, after a begin parenthesis and before an end parenthesis
- { and } should be in the same column
- Put the pointer * and reference & signs adjacent to the variable they belong to
Eva pointed out a useful application called astyle that is a reindenter and reformatter of C++, C and Java source code.
KDE/Qt Framework
How do I manage bug reporting for my KDE program?
Each program must use the KDE bugs database properly. When importing an app, please remember to:
- Change the last parameter in the KAboutData constructor to "submit@bugs.kde.org" (rather than somebody's personal e-mail address).
- Add an entry for the app in bugs/Maintainers.xml (preferably with the same description as in the .desktop file).
What is the use of
#include "myfile.moc"?
Laurent Montel sent me this tip to reduce compiling time (this is very
important, even for small projects) and this tip also allows the compilation on
multi-processors machines.
In each .cpp file that generates a moc, add the following line (let's say that the cpp file is named myfile.cpp):
#include "myfile.moc"
After including this line, do a
make clean
touch Makefile.am
to regenerate the makefile in the directory.
How do I implement a Highscore table?
Have a look at the KHighscore class (you can find it in kdegames/libkdegames/khighscore.cpp and khighscore.h). This is well documented.
How do I use pictures?
If you use KDevelop, you add your picture in the project then you right click on it and you select Properties.
In Installation, check Install and in the lineedit, type:
$(kde_datadir)/project_name
This will install your picture in $KDEDIR/share/apps/project_name. Then, in your code, you have to include the following header:
kstandarddirs.h
and the path to your picture is
locate("data","project_name/my_pic.png")
Internationalization (i18n) Of Applications
What KDE reference do you suggest for i18n?
Please read the KDE i18n tutorial on KDE TechBase, and the tutorial about i18n mistakes.
How do I add text to pictures properly?
Kevin Krammer has contributed some code to replace strings in a picture by the use of a painter that draws a message on a background picture.
QPixmap* winnerPic = new QPixmap(locate( "appdata", "win.png" ) );
QPainter painter;
painter.begin(winnerPic);
// set up font and stuff
// ...
painter.drawText(i18n("You win"));
painter.end();
Where shall I store language sensitive data?
For the applications that uses several languages, it is suggested that you
put your sounds and data in l10n/<lang>/data/kdeedu/<your_app_name>.
They will then install along with the l10n package.
Have a language dialog in your app and if the user wants another language, he should install the corresponding l10n package.
Author: Anne-Marie Mahfouf and Matthias Meßmer
Last update: 2014-07-29 | https://edu.kde.org/development/tips.php | CC-MAIN-2017-43 | refinedweb | 703 | 66.94 |
Understanding Struts Controller
Understanding Struts Controller
In this section I will describe you the Controller part
of the Struts Framework. I will show you how to configure the struts
java fresher - Java Beginners
java fresher i am working on php.can i learn JAVA.If yes,then how much time it will take Hi Friend,
Yes, you can learn java.
Please visit the following link:
Thanks
Can i insert image into struts text field
Can i insert image into struts text field please tell me can i insert image into text;
MCA Fresher with JAVA,J2EE
MCA Fresher with JAVA,J2EE how to import already developed web application into my eclipse.
i mean the workspace are different for the existing project
struts
struts <p>hi here is my code in struts i want to validate my form fields but it couldn't work can you fix what mistakes i have done</p>... ActionForward execute(ActionMapping am,ActionForm af,HttpServletRequest req
Application Programmer with Fresher
time. if session i expire. Then this all process will start newly.
... till client sending request to server with in that session time. if session i... request to server with in that session time. if session i expire. Then this all
fresher
:( I am not getting Problem (RMI)
I am not getting Problem (RMI) When i am excuting RMI EXAMPLE 3,2
I am getting error daying nested exception and Connect Exception
first example - Struts
Struts first example Hi!
I have field price.
I want to check... the version of struts is used struts1/struts 2.
Thanks
Hi!
I am using struts 2 for work.
Thanks. Hi friend,
Please visit
struts
technologies like servlets, jsp,and struts.
i am doing one struts application where i...struts hi
Before asking question, i would like to thank you... into the database could you please give me one example on this where i i have
you get a password field in struts
you get a password field in struts How do you get a password field in struts...://
turorials for struts - Struts
turorials for struts hi
till now i dont about STRUTS. so want beginers struts tutorials notes. pls do
Struts ui tags example What is UI Tags in Strus? I am looking for a struts ui tags example. Thanks... validation and one of custom validation program, may be i can understand.Plz
javascript time with am pm
javascript time with am pm How can I display current time with AM... in am and pm</title>
<script type="text/javascript">
var todayDate=new... seconds=todayDate.getSeconds();
var format ="AM";
if(hours>11)
{format="PM
I have to retrieve these data from the field table
I have to retrieve these data from the field table Hi. I have a field in database named stages. its datatype is varchar(60). It contains values... the field table. Actually they are separated by comma. I want to take the values
Validation - Struts
Validation How can i use validation framework i don't understand am...==""){
alert ("This field is required. Please enter phone number without dashes...;
}
for(var i=0; i < phoneno.length; i++) {
temp
text field
text field How to retrieve data from text field
Hi......
Your code is perfect & working but now if in that textfield i supplied...; retrieve the result, how do i do it?
String value1=text1.getText();
Is there any way
Unique field
Unique field I have created a form where I have a textbox and a button. When I write something in the textbox and click on submit,it should go... in the textbox is not unique. How should I proceed with it? Plz help
Lotus notes - Hibernate
Lotus notes In Lotus notes to csv file conversion--> can we use two delimiters?
like
struts validation
struts validation I want to apply validation on my program.But i am failure to do that.I have followed all the rules for validation still I am unable to solve the problem.
please kindly help me..
I describe my program below
Fresher-Trainee
Fresher-Trainee a java program to calculate the area of different shapes using Multilevel Inheritance
Struts 2 internationalisation - Struts
struts internationalisation hi friends
i am doing struts iinternationalistaion in the site
i followed the procedure as mentioned but i am not getting
Fetching database field from servlet to jsp page ?
Fetching database field from servlet to jsp page ? Hello Java Developers.
I am facing problem please help me. I am new in this web development field.
I wanted to pass some of the database field from servlet to jsp...
(i
fetch values from database into text field
the example for fetching values from database into text field of table
as if i am...=\"ADDRESS\" value=\"rs.getString(4)\"></td>");
the i am getting "rs.getString" in the text field also.. import java.awt.*;
import
fetch values from database into text field
;/td>");
the i am getting "rs.getString" in the text field also.. ... the example for fetching values from database into text field of table
wth edit and delete option on each row
as if i am trying following
String query = - login problem - Struts
struts- login problem Hi all, I am a java developer, I am facing problems with the login application. The application's login page contains fields like username, password and a login button. With this functionality
i am inserting an image into database but it is showing relative path not absolute path
i am inserting an image into database but it is showing relative path not absolute path hi my first page.........
<html>
<head>...)
{
System.out.println(e);
}
%>
</body>
</html>
when i compiled it i | http://www.roseindia.net/tutorialhelp/comment/2918 | CC-MAIN-2014-42 | refinedweb | 939 | 66.23 |
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
New features:
Bugs fixed:
Bugs fixed:
This release focuses heavily on bug fixes and performance improvements.
Varnish Shared Memory improvements
Other bugs:
Included vtree.h in the distribution for vmods and renamed the red/black tree macros from VRB_* to VRBT_* to disambiguate from the acronym for Varnish Request Body.
Added req.is_hitmiss and req.is_hitpass (2743)
Fix assinging <bool> == <bool> (2809)
Add error handling for STV_NewObject() (2831)
Fix VRT_fail for 'if'/'elseif' conditional expressions (2840)
Add VSL rate limiting (2837)
This adds rate limiting to varnishncsa and varnishlog.
For varnishtest -L, also keep VCL C source files.
Make it possible to change varnishncsa update rate. (2741)
Tolerate null IP addresses for ACL matches.
Many cache lookup optimizations.
Display the VCL syntax during a panic.
Update to the VCL diagrams to include hit-for-miss.
Fix a gzip data race
Akamai-connector can assume session timeouts
Kvstore locks
vmod-rewrite .add_rules() method can take a type (including "any")
New rule type "glob" in vmod-rewrite
Fix the startup waterlevel purge code in MSE3
Add dynamic VSC counters to KVStore
Add vmod-http body functions
vmod-http can copy req headers from CSV
Add function to create URL from a backend/director in vmod-http
VMOD_cookieplus: various fixes
New VMOD; synthbackend
Massive Storage Engine for Varnish Cache Plus 6.0
This is the first release of Varnish Cache Plus 6.0 that includes the Massive Storage Engine (MSE). See the varnish-mse manpage for configuration and usage details.
Please note the following:
The counters
now break down the detailed reason for session accept failures, the sum of which continues to be counted in sess_fail.
This version is up-to-date with Varnish Cache 6.0.0
Varnish Cache Plus is an enhanced version of Varnish Cache. The 6.0 series was forked from Varnish Cache 6.0.0, and then features were ported from Varnish Cache Plus 4.1.
There are also several new features that are only available in 6.0.
All Plus only features are described on our docs web site.
Note that this version (6.0.0r0) was released in Limited Availability. It does not contain all the changes in Varnish Cache 6.0.0, but there are no major from the user's perspective. The next version will be up to date with Varnish Cache 6.0.0 and also contain additional fixes and improvements.
Varnish Cache Plus will fetch ESI (Edge Side Includes) in parallel. For pages with many ESI includes this can speed up page loading greatly.
All the features of Edgestash is available in Varnish Cache Plus 6.0.
Varnish Cache Plus supports backend SSL/TLS through the OpenSSL library. This is enabled in the same way as in previous versions.
Many Plus only VMODs have been brought to 6.0:
Also bundled are the following VMODs, collectively known as Varnish Modules:
The following features are exclusive to Varnish Cache Plus 6.0
This new feature is the last piece in getting end-to-end encryption in Varnish Cache Plus, and, as far as we know, any HTTP cache. Enabling Varnish Total Encryption will make even in-memory data encrypted, and this will protect you against "data leak" bugs like Meltdown and Spectre.
The changelog below is identical with the Varnish Cache project. The list is not exhaustive, but should contain all major changes from the user's point of view.
Fixed implementation of the max_restarts limit: It used to be one less than the number of allowed restarts, it now is the number of return(restart) calls per request.
The cli_buffer parameter has been removed
Added back umem storage for Solaris descendants
The new storage backend type (stevedore) default now resolves to either umem (where available) or malloc.
Since varnish 4.1, the thread workspace as configured by workspace_thread was not used as documented, delivery also used the client workspace.
We are now taking delivery IO vectors from the thread workspace, so the parameter documentation is in sync with reality again.
Users who need to minimize memory footprint might consider decreasing workspace_client by workspace_thread.
The new parameter esi_iovs configures the amount of IO vectors used during ESI delivery. It should not be tuned unless advised by a developer.
Support Unix domain sockets for the -a and -b command-line arguments, and for backend declarations. This requires VCL >= 4.1.
return (fetch) is no longer allowed in vcl_hit {}, use return (miss) instead. Note that return (fetch) has been deprecated since 4.0.
Fix behaviour of restarts to how it was originally intended: Restarts now leave all the request properties in place except for req.restarts and req.xid, which need to change by design.
req.storage, req.hash_ignore_busy and req.hash_always_miss are now accessible from all of the client side subs, not just vcl_recv{}
obj.storage is now available in vcl_hit{} and vcl_deliver{}.
Removed beresp.storage_hint for VCL 4.1 (was deprecated since Varnish 5.1)
For VCL 4.0, compatibility is preserved, but the implementation is changed slightly: beresp.storage_hint is now referring to the same internal data structure as beresp.storage.
In particular, it was previously possible to set beresp.storage_hint to an invalid storage name and later retrieve it back. Doing so will now yield the last successfully set stevedore or the undefined (NULL) string.
IP-valued elements of VCL are equivalent to 0.0.0.0:0 when the connection in question was addressed as a UDS. This is implemented with the bogo_ip in vsa.c.
beresp.backend.ip is retired as of VCL 4.1.
workspace overflows in std.log() now trigger a VCL failure.
workspace overflows in std.syslog() are ignored.
added return(restart) from vcl_recv{}.
The alg argument of the shard director .reconfigure() method has been removed - the consistent hashing ring is now always generated using the last 32 bits of a SHA256 hash of "ident%d" as with alg=SHA256 or the default.
We believe that the other algorithms did not yield sufficiently dispersed placement of backends on the consistent hashing ring and thus retire this option without replacement.
Users of .reconfigure(alg=CRC32) or .reconfigure(alg=RS) be advised that when upgrading and removing the alg argument, consistent hashing values for all backends will change once and only once.
The alg argument of the shard director .key() method has been removed - it now always hashes its arguments using SHA256 and returns the last 32 bits for use as a shard key.
Backwards compatibility is provided through vmod blobdigest with the key_blob argument of the shard director .backend() method:
for alg=CRC32, replace:
<dir>.backend(by=KEY, key=<dir>.key(<string>, CRC32))
with:
<dir>.backend(by=BLOB, key_blob=blobdigest.hash(ICRC32, blob.decode(encoded=<string>)))
Note: The vmod blobdigest hash method corresponding to the shard director CRC32 method is called ICRC32
-
for alg=RS, replace:<dir>.backend(by=KEY, key=<dir>.key(<string>, RS))
with:<dir>.backend(by=BLOB, key_blob=blobdigest.hash(RS, blob.decode(encoded=<string>)))
The shard director now offers resolution at the time the actual backend connection is made, which is how all other bundled directors work as well: With the resolve=LAZY argument, other shard parameters are saved for later reference and a director object is returned.
This enables layering the shard director below other directors.
The shard director now also supports getting other parameters from a parameter set object: Rather than passing the required parameters with each .backend() call, an object can be associated with a shard director defining the parameters. The association can be changed in vcl_backend_fetch() and individual parameters can be overridden in each .backend() call.
The main use case is to segregate shard parameters from director selection: By associating a parameter object with many directors, the same load balancing decision can easily be applied independent of which set of backends is to be used.
To support parameter overriding, support for positional arguments of the shard director .backend() method had to be removed. In other words, all parameters to the shard director .backend() method now need to be named.
Integers in VCL are now 64 bits wide across all platforms (implemented as int64_t C type), but due to implementation specifics of the VCL compiler (VCC), integer literals' precision is limited to that of a VCL real (double C type, roughly 53 bits).
In effect, larger integers are not represented accurately (they get rounded) and may even have their sign changed or trigger a C compiler warning / error.
Add VMOD unix.
Add VMOD proxy.
This is the first beta release of the upcoming 5.0 release.
The list of changes are numerous and will not be expanded on in detail.
The release notes contain more background information and are highly recommended reading before using any of the new features.
Major items:
Changes since 4.1.9:
Changes since 4.1.8:
Changes since 4.1.7:
Changes since 4.1.7-beta1:
Changes since 4.1.6:
Changes between 4.0 and 4.1 are numerous. Please read the upgrade section in the documentation for a general overview.. | https://docs.varnish-software.com/varnish-cache-plus/changelog/changes/ | CC-MAIN-2019-51 | refinedweb | 1,524 | 58.79 |
a.k.a.
We got ourselves a couple-two-tree noders, but no fronchroom.
I know what you're thinking: "OMGWTF! A Chicago Nodermeet?!?"
Yes, kids. Time to retire to the fallout shelter, for the end certainly is near. For the first time in the (readily available) history of everything2, there will be a nodermeet in Chicago! Come on up to Old Irving on August 10th through the 12th for a weekend of drunken debauchery, porch-sitting, and neighbor-angering!
The Master Plan (at it currently stands)
WHEN: Friday, August 10 and Sunday, August 12, 2007. If you want to come up on Friday night, that's cool. Just know that I might be working from home, and may even need to run into the office for an hour if things really go to shit. I have fenangled Friday afternoon off. Come over after 2pm, and I'll be around. Don't tell the boss. :)
WHERE: Super-Secret! My wife and I don't like posting our address on the internet. What I will say is that we're in the Old Irving neighborhood of Chicago. Say you'll come, and I'll tell you specifically.
Our reasonably-sized apartment has a couch and a futon available, as well as plenty of floor space for those needing additional accommodations (bring a sleeping bag or what have you). We've got a huge deck, which I assume will be our major hang-out, weather permitting. A word of warning: there's only one bathroom in my apartment, which will make shower coordination a bit interesting.
My house is also conveniently located near the Kennedy, so driving here will be easy. A little note for those coming from the south: You're going to want to dodge the construction on the Dan Ryan (This is I-94 westbound after the I-57 junction). I suggest getting on I-294, and coming around through the suburbs, and taking I-90 East into the city. It may or may not be faster, but 294 is going to be the happier of the choices by far.
WHY: Well, um, my wife will be in New York for the ASA conference that weekend. Yes, yes, I know what some of you are saying: "What, that fictional wife you keep talking about?" Yes, that wife. This time you will be able to see all her books, clothes, and other personal effects. And then, of course, wave them all off as an elaborate attempt to fool all of you into thinking I'm married. Yes, you have it all figured out. (Editor's note: izubachi has now met my wife, and confirmed her existence. Take that, smartass noders!)
Anyway, since she will be gone, I was planning to sit here and drink by myself. Might as well get the noders together and trash the place, so she can be really mad when she gets home.
WHAT TO DO:
* Drink on the porch! You do like drinking, don't you?
* Take a look at the gigantic ugly homes they have built on either side of my building.
* We can swing around the corner to the 'Nug for some decent diner food. I'm also thinking we may need to make a trip to Hot Doug's on Saturday.
* Carcassonne, Apples to Apples, and Puerto Rico on premises! Perhaps I'll have figured out how to play Puerto Rico by then. Um, not so much.
* We're in the city! There's bound to be something to do here, right? We can organize expeditions to one of the excellent museums if folks would like.
* This weekend is smack in the middle of a White Sox home stand versus the Mariners, if you like that kind of thing.
* chaotic_poet suggests a trip to the Green Mill might be an good option. "The jazz bands they have there are usually two shades of awesome." Karrin Allyson is playing there Saturday night.
* chaotic_poet also reminds me that it is Market Days that weekend. I knew I was forgetting about an important event.
FOOD: I'll have some stuff here ready to go as far as snacks go, but we run a pretty healthy/anti-snacky house, so anything you guys bring will only make things better in the long run. As for booze, we've got quite a bit here, but there isn't any beer here, so you'll have to bring your own. Maybe some of us will make a run over to Miska's for some good stuff when everyone gets here.
As for meals, I don't have any cooking ability at all. It's best if I don't even try to make you anything. If someone would like to make something, the full kitchen facilities are available to you. Other than that, we can all get organized and go somewhere, or I've got a delivery menu or two around here somewhere.
THINGS TO KNOW:
* We've got two cats. If you've got allergies, you might want to take this into account. We'll also need to make sure they don't get out of the apartment, because they wouldn't last five minutes outside.
* This porch out back is shared with the other units in the building (five, including ours). Everyone else in the building is cool, but so you know, it may be more than just us out there.
* We've got some hotels here in the city, but nothing really around the house. If you want to stay somewhere and need help figuring out what's best, let me know and I'll help you out.
* If you get lost anywhere on the way here, just give me a call. I should be able to talk you down.
* No markers allowed. I'll be checking bags when you come in, so don't get any smart ideas. I'm looking at you, GCP noders.
CONTACT INFORMATION:
Cell Number is probably best - (312) 391-5577
The Cool Kids:
vandewal (naturally)
BrooksMarlin
LaggedyAnne and Sessor
Wiccanpiper and BriarCub - who have dibs on the futon in the sunroom
chaotic_poet
RoguePoet
opteek
sauth
mordel
karma debt
izubachi!?! - will be here Friday night. Show up early kids!
Ysardo - Last minute addition
Maybe they're cool enough:
jrn
hunt05
Billy
Two Sheds
You suck, but we'll see you on Labor Day:
artman2003
Apatrix
The Green Mill,,,
stolen glances,
borrowed time,,
stolen kisses,,,,
who knew that a quick visit to the city would remind us all how to be such master thieves.
It was here that I was reminded that the only way to repay a kindness undeserved, is to begin, or perhaps remember, friendships that are unbridled, unashamed, and unforgettable,,,
The music glides through the air and makes it's short distance over to our table, "Capone's booth" as it is known. Slow as the music before us the sun is setting and the bar is darkening. The band is working hard to keep up with their overly talented singer. Set one glides by far to fast, as well as the first few rounds.
The music runs us through unbelievable highs and low, and the place feels like it's really waking up as the second set starts. More crowded, more life, more energy, and more enjoyment in this little (non-smoking) club that refuses to feel like anything but a smoky little room where secrets are being told. The music is moving us all in the direction that we came here to go. People become somehow more than the sum of their parts in this fragrant atmosphere that's every bit as tangible as the table that's holding my Gin and Tonic.
We sit, soaking up to power and joy of it all as we scribble down our would-be whispered confessions, apologies, and admirations.
The street,,,
Fresh out of the car the humidity and the smells begin to sink in. Beautiful church on the right, sculptures of broken swords and beautiful angels waiting me to find them. I'm looking forward though, not up, and then comes what I had so much earlier, and also later, come to love the smell of.
Greeting,,,
Genesis,,,
Renewal,,,
On my short walk, I had time to collect myself, prepare to be working with 'proper villains' again, but instead of thinking about what I'm going to say or do with the noders I'm walking to meet, I fall into the ebb and flow of the city. Hoping against the odds to catch it's very pulse against my fingertip
I didn't, not really at this point. I was too distracted by the beautiful afternoon. The smells of the garden mulch and the beautiful people, mostly just the beautiful people.
The city of Chicago does have a certain rhythm, especially in the summer. We hadn't seen our Jazz show yet, we were just remembering hello. And I didn't feel the pulse of the whole city against my fingers until it was almost time for goodbye.
Time for goodbye,,,
Maybe hello was better suited the moment I was living, walking towards the weekend in the company of friends old and new, but I couldn't help but wonder what lie waiting for me at home...
The Deck,,,
Decidedly in the swing of things by now. It's now that I learned all about how "Apples to Apples" is played. The games is going very well, we all have full stomachs and fuller hearts. Some play, some talk, some smoke, some investigate the mysteries of the newly invented measurement system, all present love, mostly each other.
Rising,,,
Cresting,,,
Falling Away,,,
The lightning comes to play, the clouds roll in being completely irreverent to our observances. Things are protected as droplets begin to fall, and some begin to fall away. The droplets increase and more return to comfort of a drier existence, I remain. I remember that the drive to refuse the inconvenience of your environment is the ultimate expression of humanity. I spend a moment or two with the storm, but sure enough I begin to miss you all.
Pushing hands to applause, pushing ourselves to be our very best selves, simply by being our best selves. No such thing as a lonely table in our midst, no such thing as a gift given here without thanks, and no such thing as an ordinary moment.
Hell has left us some cherries on the table and I am left to admit, I can't ever recall a time before when punching myself has ever been quite so enjoyable. New memories take root, while old ones shove aside and make way for them. Someone steals my spirit just long enough to make sure I get back so that I can lock it in a jar and save it for later. I look around the faces of my people, then the face of the clock and know that we've long since been done tearing the day to shreds.
Finally it becomes time to rest. Some go, and some stay. I melt, or perhaps unfurl on the floor, embracing the quiet. I lie for a few moments just processing the joy of what a day with good friends, good music, good humor, and a particularly good reason to be happy is like.
The End,,,
Waken to the morning feeling better than I have a right to. Spend a few moments learning how God is dead from a lovely book in the corner. Someone stirs to my left and wakes and again it begins. The two of us, alone in the room find Serenity for a couple of hours and then the rest of the house seems to stir almost at once.
We wake and hunger...
A new place, someplace I have never seen and don't seem to quite remember as well as I could. It must be time for us to be coming to close; I always seem to manage to let the memory of the endings slide away.
Time to look you in the eye
Time to give you all the time you need
Time to let you hold me, turn my cheek and accept your kiss.
All that was begged, borrowed or stolen must be returned. This is the moment when I am reminded with the most power why I come out to see such amazing and beautiful strangers. Thank you Noders, for being the most naturally generous, amazing, attractive and wonderful people I have the good fortune and privilege of knowing. You, the people I love, are what make me keep fighting.
We all hug, shake hands, accept our kisses on the cheek from our new mama. You see the happiness at being together, the awkwardness of leaving one another behind, and at least if you were to look in my eyes, you might see the wish for more time peeking out to greet all of you.
Hop a bus and bend Vanderwalls ear one last time. I talk of the family I am going back to. It would only seem silly to talk of the family I just left behind because, alas, it's time to fade back into the nodegel and of course to remember, I was already home.
Time to say goodbye,,,
NEVER SAY GOODBYE!
"Knight's an energetic cocksucker and Armstrong's clearly defined balls cling close to his body in their tight sack during the lick."
It's a little surreal. It's Saturday, we are all sunbaked and shell shocked from the street fair, and we've stopped off at my apartment just to gather back together and regroup for whatever night brings. karma debt giggling voice rings out over my living room, having grabbed the local free gay entertainment guide's porn review article and reading it now outloud. "Wow. Straight porn reviews aren't anywhere near this graphic," someone chimes in from the couch. I can't remember who. I just giggle and smile. These are the little tiny moments -- these five minute asides, sometimes sweet, sometimes absurd -- that sparkle in the afterglow.
But I'm getting ahead of myself...
It was Friday around 5 pm. I was lost. Go me? Hometown advantage and all, yet when the time comes, I've managed to get completely mixed up. Chicago's perfect grid usually doesn't betray me like this, but we're here at the corner of 4000 and 4000 -- all the addresses are the same and I don't know the Northwest side of Chicago. I'm running late and it's already been a long frustrating week full of overtime and fevers. Even today, I ended up leaving work at 4 when I had asked for a 1/2 day. I needed very badly to have a good weekend.
Finally, after walking about 5 blocks more than I needed, I stumble up to vanderwal's C!'d apartment, climb up, and sink into a seat. It's a nice place -- warm and full of character with two cats of opposite demeanors. The brown one walked up and demanded to be pet, but only in the way that it wanted (cheek to back to tail, incidentally). The other, a fuzzy orange blob, just napped lazily in the filtered sun coming in through the window.
I was first to arrive, but it wasn't too long before a few others started arriving. Few by few we settled onto the deck in the back -- the place that would become the centerpiece of the weekend -- and sipped the moon out of it's daytime hiding place. As day left and night went on, we teased, we joked, we talked seriously (but mostly not)... It was comfortable basking in the glow of new friends that have felt like they've been there forever (and those that really are starting to actually be that).
What occurred on Saturday was possible the gayest thing ever to happen at a Nodermeet. It should therefore not be a surprising thing that we were swallowing sausages for brunch. 2 o'clock had found us at Hot Doug's, a self-preported "encased meat emporium." The line was out the door around the corner when we arrived, Saturday apparently being a big day for hot dogs in Chicago. It was no surprise why there was a crowd with such delicacies as fries made with rendered duck fat and hot dogs made with pheasant lined up against old favorites like traditional brats and red hots. Most of the people new to town had a taste of the traditional Chicago dog, the miniature salad on top balancing precariously, while others tried some of the more esoteric or fancier things. It was an enjoyable divey place -- small but worth it.
I had mentioned to vanderwal that the weekend also happened to hold Chicago's largest street fair: Market Days. Taking place on N Halstead, between Belmont and Addison, Market Days is one of the big events of the end of summer in Chicago. It also happens to be one of the Gayest events in the city outside of the Pride Parade. Not that it would usually be a bad thing, save the fact that the oncoming hordes of scantily clad gay men left little room for anything else.
I'd only been to the fair before in off hours, but as soon as we stepped through the gates, it occurred to me that perhaps I'd made a bad suggestion. People were packed back to back in varying states of undress, the uniform du jour seemed to be a pair of speedo briefs and tennis shoes. Squeezing our way through leather daddies and drunken twinks, we ended up rushing through the fair as if it was some sort of rainbow gauntlet of doom. Someone cried shortly after leaving, "I had no where to look. There was just man-flesh everywhere..." The shock would have been equal had we just pressed ourselves through any other mostly naked crowd of people. Still, it was fun to point out that this was really really gay.
[insert porn review escapdes here. add splitting of groups -- one back to vanderwal's and one to the Green Mill Cocktail Lounge]
"All I really want and
is to bring out the best and worst of you"
-Karrin Allyson.
We slammed ourselves into the booth a little later, the group reforming at the Golden Apple, just one of those late night diners that you end up at on late nights after many drinks with many friends. We feasted on that wonderful mix of coffee that only a diner can make and breakfast food flipped onto the darkside of the morning hours. We were all smiles, all around, and there was a peace in the chaos of conversation and passing food. Potato pancakes can leave a memory if you let them.
From there, we melted again into a night on the porch, cards flying as fast as the conversation. I ended up crashing on the couch, the long bus ride home too much for me that night....
Things lingered on Sunday... people peeled away one by one, each with goodbyes. We end with one last introduction on the border of Uptown and Lakeview - a Mexican brunch. One last coming together before the winds blew us apart again.
Messages passed across and around the table at the Green Mill:
Everyone is swapping sekrits
Noders always msg, don't they?
Sometimes the /msg is where the real action is…
Indeed. Msgs are good
To love and be loved in return...
These songs make me want to fall in love.
These songs could get someone to fall in love.
I'm in love with the sounds and happiness.
These songs take me to falling in love.
This is much more good than as market days was bad.
I'm in love with all of this. I'm in love with BIG AL.
Have to admit they are catching up with me.
They ARE much faster than you are. The music helps. And how. No food, no rest helps too.
Just love and be loved in return (food is our next stop)
THANK YOU so very much
Thank you for bringing me back to myself. Spent a lot of years singing. Thanks again.
A good night surrounded by smiling faces.
Good sounds, pleasing drinks, family around us--not just a "Tuesday at Noon" but real life happening in front of us.
Love of people+ Love of jazz+Love of communication=wonderful, enjoyable, ecstatic peace.
*Last Blank Piece*
Spent 9 months trying to answer this question.
Its sad, so sad.
Live in the moment, enjoy the now, life is unpredictable. Tomorrow…
Aye
Brightness always conquers dark, the sun follow the night. When we do the things we ought to do, when we ought to do them, there comes a day when we get to do the things we want to when we want to do them.
This place, these people, this music. We see everyday the harder parts of life… then we come together, do extraordinary things, feel extraordinary feelings and we remember life is more than meaningful it is poignant.
Got a headful of friends and music. No room for yesterdays. Who needs tomorrows when you've got jazz? Sitting in the Mill with a glass of liquid bread and I am thankful, oh gods and ladies, yes.
Log in or registerto write something here or to contact authors. | http://everything2.com/title/Noders+By+The+Lake%253A+A+Chicago-Style+Nodermeet?showwidget=showCs1904274 | CC-MAIN-2014-35 | refinedweb | 3,587 | 80.51 |
There’s a lot of hype out there regarding the drought and its potential effects on crops. Predictions range from increased food prices to dustbowlification”, a term coined by “Joe Romm. A complicit media follows. Tom Nelson points out some interesting facts:
Drought to cost $12B, most since 1988 – USATODAY.com
The Kiwis think otherwise though, from radio New Zealand: July 26, 2012 : Worst drought in 50 years driving up US food prices
Food prices in the United States are expected to rise by 3% or 4% next year because of the worst drought in more than 50 years. Corn, soybean and other commodity prices have all soared in recent weeks as fields dry out and crops wither in the heat. The drought, which is affecting much of the Midwest, is the worst since 1956.
What Drought Did to Crop Yields in the 1930s – Livinghistory.com.
[July 19, 2012]: 2012 Potential Corn Yields Based on July 15 Hybrid-Maize Model Simulations – UNL CropWatch, July
[See table 1 here: For five Nebraska locations, median forecasted yields for rainfed corn are 118-130 bushels per acre; for irrigated corn, the median forecasted yields are 228-245 bushels per acre]
From the University of Nebraska-Lincoln:
Stars indicate the sites for which in-season yield forecasting were performed using the Hybrid-Maize model with actual weather and dominant management practices and soil series at each site. Weather data were retrieved from High Plain Regional Climate Center (HPRCC) and the Water and Atmospheric Resources Monitoring Program (WARM) through the Illinois Climate Network (Illinois State Water Survey [ICWS], Prairie Research Institute, University of Illinois at Urbana- Champaign)..
Yes it was really so much worse in the 1930’s than the present:
h/t to Steve Goddard and this EPA report for the above graph and points.
103 thoughts on “Let there be corn! Reality check on the 2012 drought and corn yields in relation to droughts of the past”
Family members farming in southern IL are reporting ZERO bushels/acre unless significant and immediate rain relief occurs. Soy is also on the ropes.
irrigation means that ground water is up, probably due to the high snow levels last winter. We must not forget that ”corn” or maize as we call it, has had much genetic modification to resist drought etc.. Combination of both probably.
So ”the end” recedes yet again.
How seriously should we take the University Nebraska-Lincoln if they can’t tell Indiana from Ohio?
Dunno how good their science is but their geography is crap.
Even from as far away as London, England I know that the state due east of Minnesota is Wisconsin. Not Wyoming as labelled on the map.
Bad mistake.
The map says Ohio (OH) and should be Indiana (IN).
Interesting that one of the key points – “The recent period of increasing heat is distinguished by a rise in extremely high nighttime temperatures.” – is a predicted AGW signature since greenhouse gases trap the heat.
Best effort yet to compile the whole AGW scam should be published big time
Bushels per acre? Really???
Not only has the rest of the world agreed to a single system of units, it’s also agreed to measure yields in weight per area, not volume per area.
Wish casting.
Satellite view of the crops.
To put that statement .” into proper perspective her is information from the Agricultural Extension Service The University of Tennessee
Basics of Corn Production in North Dakota “…Grain yield of corn in the state has increased at a remarkable rate in the recent past, with yields now consistently averaging over 100 bushels per acre….”
GEE, it sure looks like “Global Warming” and CO2 is INCREASING the corn yields and how! even in colder than hades North Dakota
So that state to the east of Minnesota is now Wysconsin? :p
It would be cool to see an animation of the drought development. Anyway in Pittsburgh NOAA’s current outlook (as of 7/19) puts us in a region of drought development, but it has been quite rainy over the past week, enough to erase a small but noticeable portion of the year’s rainfall deficit (which was positive as of June 1 and then it basically didn’t rain for 6 weeks). Next outlook – improvement?
Monthly US wheat and corn prices going back to 1784 – Nominal first.
Real terms – adjusted for estimated CPI inflation.
The good news is that Northern Indiana where I live has gotten good rain in the last 10 days. That will help the crops but the very stunted 3-4 feet tall corn will yield only 20-35% of normal yields even with the recent rain. Soybeans will probably recover better. Farming in the midwest will probably become a difficult proposition when we start getting severe droughts much more frequently. The new hybrids give much better yields with less water, but almost no rain from early May to mid-July was a disaster. In the future we can probably rely on Canada for corn as well as wheat.
Yeah, Tom, bushels per acre. Welcome to the democracy of the agricultural dead. Makes it kind of hard to sell our agricultural database services abroad when we’re locked into Imperial standards, but that’s the industry standards in the US.
(Unless you’re talking about cotton – lbs/Ac or bales/Ac, or potatoes in hundredweight per acre. At least I haven’t seen anyone citing rice in barrels per acre since the late Oughts, everybody seems to have standardized on bushels per acre in rice, finally.)
Note that Figure 1 in this article only goes to 2008. Rain has been fine over recent years up until this year. How can we talk about the current drought and “up until the present” with data that only goes to 2008?
It sure was a double whammy, to have the worst drought that devastated agriculture, right in the middle of the Great Depression.
PS For all geographically challenged persons,
don’t be confused by OH being east of IL.
From living in Indiana, IN should be east of IL (NOT OH).
There was history before somebody clued me in to global warming? Seriously?
Tom says:
July 26, 2012 at 7:04 am
Bushels per acre? Really???
“Not only has the rest of the world agreed to a single system of units, it’s also agreed to measure yields in weight per area, not volume per area.”
Not to argue what’s been agreed upon, but grain moisture content would bugger the numbers worse using weight rather than volume.
Fred Bauer says:
July 26, 2012 at 6:29 am
How seriously should we take the University Nebraska-Lincoln if they can’t tell Indiana from Ohio?
___________________________
Yeah,
If this is what a college puts out, it looks like it is time to pull the plug on funding. And here I thought my opinion of US academia had already hit bottom.
Looks like the earth moved for Americans! I would have thought it would have been to hot for that sort of thing in a heatwave, especially with the hot nights!
“Interesting that one of the key points – “The recent period of increasing heat is distinguished by a rise in extremely high nighttime temperatures.” – is a predicted AGW signature since greenhouse gases trap the heat.”
Really? Try atmosphereic compression due to desending air a result of high pressure.
Fred Bauer says:
“How seriously should we take the University Nebraska-Lincoln if they can’t tell Indiana from Ohio?”
Latimer Alder says:
“Even from as far away as London, England I know that the state due east of Minnesota is Wisconsin. Not Wyoming as labelled on the map.”
Come on. It’s model output. What do you want, accuracy?
Bill Illis
The background color makes those charts very difficult to read. Or is there a way for me to make that adjustment?
I think geography isn’t taught anymore from the looks of things.
My Mom’s family were/are cattlemen in NW Kansas have been
since 1870’s .1934 was their worst year ever. In the Pac NW in NE
Oregon , My Pop’s family also ranchers, 1934 was their worst
ever too. Anecdotal,but there was something about ’34 that was
extra-ordinary…
The drought is getting in-depth coverage in the serious media as well:
There has been a huge change in crop yields over time. New hybrids, chemicals, fertilizer applications, and other changes in farming techniques mean that there is no direct comparison for yields from the 30’s till today. If the average yield today dropped to what it was in a good year in the 30’s it would be a disaster.
CAGW continually employ two public relations tricks, often used by politicians, to send their message. First release to a gullible press statistics regarding a matter that the local population would have no base knowledge about. Hence in New Zealand statistics on American droughts or in America stories on the Greenland ice sheet that are false but confirm a vague understanding of AGW. Secondly release highly localized anecdotal stories about the effects of CAGW that are too localized for broad academic derision. Such as a near seaside roadway being endangered by rising sea levels, even though that would be impossible in the given time frame.
“Vince Causey says:
July 26, 2012 at 7:51 am
It sure was a double whammy, to have the worst drought that devastated agriculture, right in the middle of the Great Depression.”
Talk about history repeating itself …
G. Karst says:
July 26, 2012 at 7:46 am
_____________________________
Yes, the corn down the street has just shot up with all the recent thunderstorms hitting mid NC. Farming has always been a crap shoot. Too early cold rain and the seed rots in the ground, then there is wilt, beetles, summer drought and early frost…
What really hit me about all this media attention to corn and crops are these three old articles combined with the paywall to the USDAs real data on crops.I can’t find that old 2007 link since the website has been reorganized, but there is Safeguarding America’s Agricultural Statistics and Special Tabulations A friend has a college age student who works for the USDA CropScape The kid said you could count the cows in fields.
To me the whole thing stinks of rigging the system in the food commodities casino. Create the illusion of scarcity and then bet the correct way to win big.
And here are the winners
Added a different dimension to the whole article doesn’t it?
Using potential yields here instead of projected actual yields. The lottery ticket in my wallet is potentially worth millions, but I’ll sell it to you for a hundred dollars.
In the category of ‘related’ –
Let there be Meat-less Mondays:
. . . USDA under fire for backing ‘meatless Mondays,’ linking ranching to climate change
Or not …
. . . Retracting a Plug for Meatless Mondays
(Words cannot express …)
.
“Really? Try atmosphereic compression due to desending air a result of high pressure.”
It sure would be interesting to find out why this didn’t happen before 1970.
Well, Ohio and Indiana are both east of Illinois, and all those east coast states look alike anyway. As for Wysconsin, Canadian provinces look alike, too.
As I recall, back in the 50’s, 100 bu/acre corn would buy the new Caddy. Several things have improved crop yields since the early 20th century, hybrid seed, dense planting, and CO2 fertilization. For soybeans, it looks like CO2 is the main culprit in the better yields.
Gail Combs says in a quote:
July 26, 2012 at 9:18 am
… Hard red spring wheat, which usually trades in the $4 to $6 dollar range per 60-pound bushel, broke all previous records as the futures contract climbed into the teens and kept on going until it topped $25. ….In a recently published briefing note, Olivier De Schutter, the U.N. Special Rapporteur on the Right to Food, concluded that in 2008 “a significant portion of the price spike was due to the emergence of a speculative bubble.”
=====================
Gail, I’m a little confused with your post here and your posts on the recent corn/bioethanol thread.
Surely, this is the real reason for the price spikes (across the board and not just corn), and not what you were arguing over there.
Apologies if I’m misrepresenting your comments. I haven’t been back there to read them all again.
Uhm, wouldn’t a year with “below average” rainfall be pretty typical? As in, aren’t a significant number of years “below average”? Why are they framing this report in such a way as it would lead the reader to expect average or above average rainfall and any amount “below average” is somehow a crisis? Below average should happen quite frequently. In fact, if it is in a place that sometimes sees extremely wet weather that can skew the average (such as in California), the annual precipitation might be “below average” more often than it is above average.
I agree with previous poster that background color choice is literally about the worst color you could choose for those images. It would be great if you could shift that to a more neutral color like a light gray where both colors would have good contrast to the background.
That said it is sort of inconvenient that the graphs show that corn and wheat in real terms are near the cheapest they have ever been in historical terms, and comparable to prices in the recent past when we had similar economic problems.
Those who are dwelling on the drought losses keep in mind that often for every area that is seeing a down turn in crop production there is another area somewhere that is seeing better than average conditions. Often the two more or less balance out on a world market.
Larry
Gail Combs’ cited observations about the effects of speculation by major banks and other purveyors of commodities index funds on commodity prices is probably far more important than any drought news. Unfortunately, it only indirectly relates to climate.
Related link to the state of development of drought-resistant corn from yesterday’s BiofuelsDigest:
The Columbia River provides irrigation to central Washington and still flows at the mouth of about 265,000 cubic feet per second ( ) .
The Mississippi River discharges at an annual average rate of between 200 and 700 thousand cubic feet per second ().
Obviously, the irrigation used from the Columbia River is not draining the river dry. With all the water that flows into the Gulf of Mexico via the Mississippi drainage area I see not why there is not some investment to use more irrigation practices rather than all the bemoaning over drought. It seems the issue it not how much rain the Midwest receives, but when it receives it. There are things we can control and there are things we cannot control.
We subsidize farming; we have to eat; we depend on the Midwest for crops; make sure they don’t fail during the growing season, which is really the wet season.
Brazil corn = $290/ ton (metric) at nation’s port, vs.USA corn = $345/ metric ton at Gulf of Mexico port & biggest pork producer Smithfield Foods early this week announced they will import Brazilian corn. Brazil’s projection is for 70 million metric tons of corn this year.
Bushel calculations always annoyed me to do conversions too. For those who care: Corn’s September Futures Trading a week ago was $8.28 &3/4 a bushel this week was $7.80 &1/2.
@SocialBlunder
“Interesting that one of the key points – “The recent period of increasing heat is distinguished by a rise in extremely high nighttime temperatures.” – is a predicted AGW signature since greenhouse gases trap the heat.”
If you want such a general statement to be taken seriously you gotta put some numbers on it. The anticipated AGW signature is how much? And this is discernable from natural variation by which process? And the ‘extremely high nighttime temperatures’ are how much higher?
‘Prediction’ implies falsification scenarios. What exactly is the falsification scenario for AGW? How many years of cooling would suffice?
According to several warmist visitors to this blog, greenhouse gases don’t ‘trap the heat’ quite the way they used to, apparently. The goal posts have been moved. You should keep up. We were told for years that they trap the heat at 8-12km altitude in the tropics and there would be a definitive hotspot in the troposphere that is embarassingly missing.
First GHG’s do not ‘trap heat’ they absorb and re-radiate it with water being by far the most efficient and plentiful member of that family. Maybe there is an increase in atmospheric water vapour at night… oh wait, ‘it’s a drought’.
Is the hot spot perhaps secretly appearing at night and keeping temperatures up? There is no meaningful change in the daytime highs in summer – did you know that? Have you looked at the records at the National Weather Service? You can search by season and notice the continental 30 year cooling trend if you want.
RE: “Speculative opportunities in food”
Great – Goldmann Sachs ‘goes farming’. Farming money that is, at the expense of the farmer and the public. Is anyone surprised? Maybe they can corner the corn market and run the price up to triple the present number. Just think of the profits they could make! If they took a global view they could pry the last red cent from the bony hands of every (still) living human soul! There is no limit to the profits that could be made by rampant speculation and monopoly control of the food sector. How is it that this opportunity was missed before? Oh, I remember: it was illegal!
They’ve corrected the state labels on the original article.
And just when I was going to blame it on the ignorance of a geeky grad assistant!
.”
This statement is misleading and has less to do with the weather than with the new super-hybrids that are very drought resistant, and with modern farming techniques.
I live, work and travel frequently across Central and Southern Illinois and this year is as bad or worse than 1988. Most farmers are expecting a 70% drop in yields, and in the hardest hit areas, many have already plowed their fields under (the cost to harvest exceeds the expected yield revenue).
Were it not for huge advancements in hybridization it’s likely we would have total crop failure across at least half of the corn belt.
It’s usually rather dry in the midwest in July and August. The problem this year is that it was exceptionally dry in May and June and not that wet in April. With much warmer weather in spring, it will be interesting to see if farmers will start planting corn in late March instread of late April/early May. Of course, there has also been an increase in years with spring flooding.
Map appears to be fixed on the source site.
When you start using change in food prices as a measure, or impact costs, then the impacts of extreme weather will seem worse in the past. The reasons are
1. Farmers are more adaptable and have greater investment in irrigation than in the past.
2. Crops are more resilient to extremes, and both pest control and fertilisers are far better than in the past.
3. Impact on food prices is less as the food market has shifted from local to global and the diet has become more varied. In the early 1970s in Britain there was a major crisis when the potato crop failed due to a drought. Now supplies of potatoes can be imported, and people eat far more rice and pasta.
This is important for policy. If the climate does get more extreme (for which there is little or no hard evidence), the best way to offset the hardship in the poorest countries is economic development. Policies to alleviate global warming by restraining growth in fossil fuel consumption (and thus economic growth), are likely to make the impacts of global warming on the poorest greater than no policy at all. This is assuming that the bigger developing nations do little or nothing to curb their emissions.
Like a couple of other readers, I think this statement
.”
is a poor one to make. Trying to make comparisons to crop [yields] between the 30’s and today sounds a lot like some of the poorly reasoned claims we hear from CAGW advocates. While I understand the frustration over ill informed media hyping a weather condition and trying to milk it for all the global warming PR they can, presenting poor counterpoints is a fail.
Crispin – please address your concerns about quantificaton and validity of nighttime heat to the author of the blog entry. I simply quoted the Key Points block at the bottom of the entry “The recent period of increasing heat is distinguished by a rise in extremely high nighttime temperatures.” I too would like to know how much hotter it has gotten at night.
Chris says:
July 26, 2012 at 8:51 am
Larry Ledwick (hotrod) says:
July 26, 2012 at 9:52 am
—————–
The background is white. Sometimes, people get linked into an imageshack chart that has a black background. Try a search of the image through google images. Anyone else having a problem seeing it.
…and a policy of increasing the cost of energy, the cost of transportation, the cost of labor, the cost of restrictive regulation is also a price factor of supply/demand.
Dairy production is a good example of over-regulating.
Food cost in January 2002
Food cost in January 2012
Conservatively a 137% increase for monthly family of four.
Price increase of fuel in 10 years over 200%.
Jan 2002 $1.23
Jan 2012 $3.79
Increase in unemployment in 10 years over 200%
The whole issue of tonnage production during “droughts” seems pale.
High yields in the past have resulted in spite of bad news. What you can count on is the farmers producing the highest yield possible.
philincalifornia says:
July 26, 2012 at 9:47 am
Gail, I’m a little confused with your post here and your posts on the recent corn/bioethanol thread.
_____________________
Sorry to confuse. I was trying to point out there is more than one factor contributing and the News Media focuses on only on the one factor that pleases their ‘financial interests’. From what I can see you have:
#1. WTO (1995) did away with import tariffs on grain by third world countries. (WTO Agreement on Ag Written by VP of Cargill Dan Amstutz)
#2. Freedom to Farm bill (1996) essentially did away with the practice of grain reserves and increased the acres put into grain by the USA. (Written by VP of Cargill Dan Amstutz)
#3. Biofuel opened another market for grain. (ADM biggest all time political contributor to Dems & Reps scored big in biofuel)
#4. The financial traders got into the act with commodity price speculative gambling. (Dan Amstutz was hired by Goldman Sachs who created the Goldman Sachs Commodity Index in 1991 and made a killing in 2008)
#5. The Government – Corporate – Lobbyist revolving door that make sure laws and regs are advantageous to the big corporations and not the consumers or small businessmen.
#6. The Mass Media knows who funds their advertising budget and who owns their loans. See Derry Brownfield
Like the earth’s climate, the machiavellian game playing in the Ag sector is much more convoluted than most people realize and the MSM makes sure only the news that is “advantages” reaches the masses.
I wonder how many innocent dupes are sinking money into the commodities market thinking the world grain harvest is going to be poor thanks to this and other stories?
Can you see the dual purpose of the story now?
Farm Futures has this :Jul 19, 2012 story: Drought’s Intensity is Impressive on Satellite Imagery yet my area (see image 2 & 3), mid NC is not particularly dry. Actually except for a ten day stretch the rain has been better than normal for summer and my grass is growing well as are my neighbors corn fields.
Farm Futures also has Quick Quotes on the commodities market.
Notice how no one mentions this from the USDA
(Bolding added above by me)
A) Ever drive along I-40 west of Memphis? Those small V-shaped roofed awnings over what look like V-8 engines connected to dynamos are actually pumps (pumping ground water by the looks of it)
B) The Mississippi is low this year … in times of drought, the rivers would seem to be affected too (mild sarc) …
C) Notice in the lead post the reference at one point to “for irrigated corn”; obviously, someone is irrigating some corn … but to irrigate _all_ corn, or at least have the facilities in place to irrigate all corm BUT have those facilities stand idle most years doe not seem to appeal to the economical ‘sense’ of those raising corn …
.
@dave
Another triumph for the blogosphere over ‘pal review’!. And it shows that they read – and are frightened by – what we write. As well as exposing their poverty of their ‘quality control’.
Dave says:
July 26, 2012 at 10:36 am
Map appears to be fixed on the source site
Nope, click on “see larger version” PDF link.
Fred Bauer says:
July 26, 2012 at 6:29 am
How seriously should we take the University Nebraska-Lincoln if they can’t tell Indiana from Ohio?
==========================================================================
At least they didn’t call Michigan Ohio.
What everyone seems to be missing here is that the prices of quite a number products will go up significantly as a result of this expected corn yield, including energy prices (ethanol). That won’t hit till late this year and next year. Trade balance will take a hit, and there may be riots in 3rd world countries over the prices. Some companies and commodity players will make a lot of money, and some will lose a lot of money.
Most farmers will not suffer as a result of the lower yield due to govt. insurance ( perhaps $4 billion total insurance claims ).
It will become a plank in the current election, and will give ammo to the “save the planet” contingent.
I’m a little worried about these folks. They make claims about “southeastern Illinois”, but the map of places sampled has southern Illinois totally chopped off, and no stars for simulated sampled locations in, you know, “southeastern Illinois” (ditto the locations in the table). There’s still a lot of corn country south of Springfield and Decatur, and quite a bit in Missouri, Indiana and Ohio, for that matter. And then they have Indiana labeled as OHio. Their “simulations” of the bread-basket are definitely flawed.
Ah, yes, the U of Nebraska site at least has the label for INdiana corrected.—-for-those-with/article_783971c6-8e8e-55fd-81d3-a42d46aaf207.html
Wheat prices are good… for those with a crop
“Protein levels in the winter wheat rolling into Moore vary from 9% in areas near Jordan, where late rains hurt crop quality, to 17% where dry, hot weather ahead of harvest drove up protein and increased crop value. It’s the high protein level that makes Montana wheat sought after worldwide for pasta.”
There is some evidence that increased atmospheric CO2 can make plants more drought resistant but in the research I have seen, it requires much higher concentrations than are currently seen in the atmosphere.
In the 30’s farmers were still using horses, and the rule of thumb for corn was “knee high by the fourth of July.” Now most corn fields are over your head by the fourth, and other than a little ethanol and biodiesel the horsepower is fossil based. No need to dedicate part of the farm to provide feed for the horses.
“…BUT have those facilities stand idle most years does not seem to appeal to the economical ‘sense’ of those raising corn.”
However, they who do not irrigate enough or not at all sure like to make economic claim for federal relief and subsidies. Why invest…just get some “free” money. And by the way, central Washington receives about 7 – 10 inches of rain per year. Those corn rows and other row crops are ALL irrigated and the equipment lies idle between harvest and the next planting.
Idaho and Washington potatoes are all irrigated. Never here chat about drought stricken crops. How about those grapes? No irrigation plan… fewer grapes.
Just pointing to different culture of thinking between those poor corn belt drought stricken farms in the mid-west. That goes for the tobacco industry as well. What a lobby of friends they must have.
Invest in an irrigation infrastructure. We can’t eat bail outs.
@Curious George who said: “Most farmers will not suffer as a result of the lower yield due to govt. insurance ( perhaps $4 billion total insurance claims ).”
That is not true. The insurance program only covers a portion of the average yield of the farm or tract. The insurance payment will not nearly cover the cost of inputs. Also, less than 60% of the corn farmers buy the insurance. The insurance cannot be purchased retroactively.
The drought will devastate an enormous number of corn farmers, especially those who cash rent their production acres. It will also hurt seed producers, elevators, seasonal employees, machinery manufacturers and dealers, etc., etc.
Here’s the rub: same farmers/corporations/farming companies are ‘betting’ on an easy return (I’ll spell it out: some ppl/orgs simply put a crop in the ground and hope there is return, if there is, a cheap bet just paid off); we aren’t hurting for foodstuffs or the raw feedstock for other uses … as a poster Larry Ledwick/hotrod up-thread pointed out, other areas of the country are producing/reaping a harvest (or will be shortly) for foodstuffs (or feedstock for other use). …
.
_Jim says:
July 26, 2012 at 3:56 pm
… …
____________________________________
I am with you on that _Jim. I do not think city people understand just how much land 96.4 million acres (150,625 sq miles) of corn , 76.1 million acres (118,906 sq miles) of soybeans , and 56 million acres (97,500 sq miles) of wheat really is. Farmers irrigate veggie and fruit crops because they are high dollar/low volume (Think California). It is easier to plant the correct crop varieties for your area instead.
The other problem is water. If there is a drought your farm pond, use for irrigating, is down sometimes to nothing but mud and you sure as heck are not going to use your well water. During the last drought I was supplying water to three other families because I had the only decent well on the top of the ridge where I live. The streams were dry and the river was rocks with puddles because all the water was supplying the city reservoirs. And those reservoirs were down about three feet from the normal shoreline. Needless to say there was a ‘hose pipe’ ban. For what it is worth my grasses (carefully chosen for drought resistance) made it through the drought just fine but my neighbors had to replant their pastures of Kentucky 31 fescue.
Ms. Combs is so correct. The media dire reports are 99% scam to manipulate the commodities futures market. Best investor strategy: short corn while the prices are astronomical. That’s what the big boys are doing. Don’t worry about $5 ears, because that’s NOT going to happen..
Corn used for ethanol was 5,021 million/bushels in 2010, 5,050 in 2011, and was originally projected at 5,450 for 2012. Current projection is 4,900 million bushels – 150 million bushels less than last year.
The USDA reports also show the US remains the largest corn exporter in the world – by a huge margin – in 2011 and 2012 we provided 41% of the total corn export for the world – the next closest are Argentina and the Ukraine at 14-16% each.
Mexico is the largest beneficiary of our exports – we’ve increased our export to Mexico of White Corn – which is the real “food” corn for them – from 229 to 581 million metric tonnes – over 2.5 times more food corn went to Mexico from 2005 to 2010/11
That said, the US is experiencing stiff price competition for corn – as noted above. Argentina and others are priced significantly less than the US.
Again – the current estimated 2012 corn crop is nearly identical to 2010 and 2011 – the claim that there will be huge shortages causing big prices increases is not based in fact. And the last few days rains and the cooler temps at least in upper Midwest, will IMO increase the yields well above the lower estimate from USDA.
Yeah, I noticed that, too. How did Wyoming migrate up to the Great Lakes? And when did Ohio annex Indiana? All this “Change” business is going way too far; it’s downright disorienting!
india sitting on 75 million tons of foodgrain stocks….come buy some before it goes to waste
Larry Ledwick (hotrod) says: July 26, 2012 at 9:52 am
Bill Illis says:July 26, 2012 at 7:34 am
Changed both charts to grey background: raw corn/wheat prices CPI adjusted prices
The originals did have a white background, however neither axis labeling came through in simply copying the image…weird. Had to use a screen capture to get the pics with the axis’ labels.
I’d like to know if my charts are not coming through properly. I post alot of them and it takes awhile to put them together. They look fine on the three computers I use.
The wheat and corn prices chart show that real prices have generally been declining over time, but not that much. They go up primarily during War years (very clear), or periods of supply problems such as the early 1800s cold period, the Russian crop failures in the early 1970s, the ethanol mandate in the US/economic growth in Asia.
A. Scott says:
July 26, 2012 at 10:07 pm.
Absolutely true the amounts are almost the same. What is not the same is the number of mouths that want that food. The US and world population is increasing (the US mainly through immigration). So at the Thanksgiving table you now have an extra 4 seats but “almost exactly the same as 2011 and 2010″ amount of food. There are no US reserves as the holding of reserves pushed down prices. Paul Ehrlich’s bomb was defused by the continual increase in food output. Perhaps it will start ticking again.
.
The discussion above about the total yield compared with other years is very helpful. It seems that whatever is taking place re temperatures, it is really helping the total food crop, all developments considered.
I note the doubling of exports to Mexico. This has had the effect of driving most farmers in N and NW Mexico out of business and has introduced GM genes into virtually every established local (historical) variety. In other words the gene bank and the farmers are being lost because of subsidised maize from the USA. I doubt this is a good thing.
Several posters have made comments regarding the fact that the corn belt has gotten a few inches of rain over the past week. The term that comes to mind regarding that rainfall is:
“Day late and a dollar short”
Corn, even the GM versions much of the US uses, needs a steady supply of water. Ideally around 1.75 inches per week. Some areas went 5-6 weeks without significant rainfall <0.5 inches. This lack of rainfall was during the time when corn is normally 'stalking up' and when the ears were forming.
This period of dry will cause a double whammy hit on yields, as the stalks were small when the ears were forming, we will see less per stalk; and we will have much smaller ears due to the lack of water when the ears were budding.
Not that I am unhappy with us finally seeing rain, but coming now, the best we can hope for is to see the ears fill out well, but considering that the rain we have been seeing is only slightly above the 1.75" per week that corn prefers, that is a hope.
Now, if we want to look at the soy bean crop, prepare for real despair.
Bill Illis says: July 27, 2012 at 6:12 am
I’d like to know if my charts are not coming through properly.
Bill, I gave some misinformation at 5:09 am, had no coffee yet. The original charts came up with a black background, with orange and green plot lines; not a white background as I stated earlier.
I could not get the axis labels to come through by simply copying the picture. I had to do a screen capture. I inverted the colors on the whole graph to get a white(almost) background, leaving the text white, so I dampened the mid-range levels of the picture to enhance the contrast with the white text, ie. darker grey background..
But demand for corn is at least a little higher today than it was in the 1930’s, including exports, use in fuels, as animal feed, and for human consumption. So this analysis appears to only be part of the story. What appears to be needed is a comparison of supply and the potential yield reductions versus consumption by all categories of “consumer” (including automobile gas tanks under the Congressional Mandate).
The numbers simply do not support that position.
When we look at Total US Corn Production – the total 2011 harvest was 12,358 million bushels vs a total US population of 311.59 million people – that amounts to 39.66 bushels per person in the US.
The total projected 2012 harvest, using the downgraded yields in July USDA report (which with recent rains and cooler temps will be overstated – yields will be higher than the downgraded numbers) will be 12,970 million bushels vs a total US population of 314.04 million people – that amounts to 41.30 bushels per person in the US.
End of year reserves were in 2011 were 903 million bushels, or 2.89 bushels per person in the US, compared to projected 2012 reserves of 1,183 million bushels – or 3.77 bushels per person in the US.
If we look at Total US Domestic Use – all uses of corn (food, alcohol, industrial, feed, seed and other uses) – 2011 was 11,005 million bushels vs 2012’s 11,120 million bushels – 35.4 bushels per person in 2012 vs 35.3 bushels per person in 2011.
When compared against US production only (excluding imports) in 2011 we produced 1,153 million bushels in 2011 and 1,850 million bushels in 2012 more than our US domestic use needs.
Even at the USDA’s drought reduced projected yields and acres harvested – we have more corn – in total and per capita – in 2012 than in 2011. And with recent rains and cooler weather in Upper Midwest corn belt those projections will be IMO low – the harvest should be considerably better.
In essentially every measure of the 2012 US Corn crop – even if the USDA worst case numbers are correct – we are in better position than 2011.
And even with the smaller projected crop our exports are projected to remain the same – 1,600 million bushels – in 2012 as 2011 … and we are still projected to make up the 225 million bushel drop in reserves from 2010 to 2011 and increase reserves from 2010 by another 55 million bushels.
If you believe the USDA numbers – and they are the experts – there simply is no valid reason for all the fear mongering about the drought reduced harvest numbers – even adjusting for US population increase.
These numbers are from:
Damned if we do (supply more food and feed corn to Mexico) and vilified if we don’t …
The US doubled its corn exports to Mexico from 2008 to 2011 but 2011’s 1,419 million bushels of US feed and white corn combined were still only 13.54% of Mexico’s total imports. Argentinian and Ukrainian and Brazilian corn is markedly lower price than US Corn -around $270 per metric tonne vs US corn at appx $310
According to Mexico’s corn imports are expected to decrease for 2012 – forecast at 9.5 million tonnes, down from 11 million.
The US does way more than its fair share – we produce 36% of the world corn but supply 41% of the total world corn exports … the next 3 producers – Argentinia, Ukraine and Brazil all combined roughly equal the US exports.
Last – the US is projected to export virtually exactly the same – 40 million metric tonnes – as in 2011.
Ian W says: @ July 27, 2012 at 6:41 am
…Absolutely true the amounts are almost the same. What is not the same is the number of mouths that want that food….
_________________________
As I said before it is complicated. Not only as Ian said are their more mouths to feed, (in July 2012 ~ 7,057,075,000 people in the world compared to ~ 6,892,319,000 in July of 2010) their tastes have changed. What is missing is the increasing appetite of China and India as their population joins the ‘middle class’ link
Not sure where you see that from the WASDE Report?
The WASDE latest (July 11) update incorporates the same exact USDA reductions as I’ve been quoting from the USDA reports … it showed following changes from their initial projections:
2012 planted acreage slightly increased, from 95.9 to 96.4 mill/acres vs 91.9 in 2011
2012 harvested acreage slightly decreased, from 89.1 to 88.9 mill/acres vs 84 in 2011
2012 Yield/acre was reduced, from 166 to 146 bu/acre vs 147.2 in 2011 (and 152.8 in 2010)
2012 Production decreased, from 14,790 to 12,970 mill/bushels vs 12,358 in 2011
2012 ethanol use decreased, from 5,450 to 4900 mill/bushels vs 5,000 in 2011 (and 2010)
2012 Exports decreased, from 1,900 to 1600 mill/bushels SAME as the 1,600 in 2011
2012 Reserves decreased from projection, from 1,881 to 1,183 mill/bushels – even this reserve however is significantly higher than both 2011’s 903 and 2010’s 1,128
The initial projections for yields were higher than 2011 – however this projection was made in Jun 2012 and at that time the corn crop had hugely benefited from near perfect conditions. A very early start to planting, early warm weather and copious but not flooding rains etc. Doesn’t seem out of line at all considering..
Reuters poll of analysts paints a bleaker than USDA picture but these are guys that benefit from commodities prices and especially from increased prices – so I take with grain of salt. USDA is collecting the crop data – these guys are interpreting it.
Here is the detailed weekly crop progress report from NOAA/USDA – they spend considerable time on the drought conditions. Will be interesting to see what next weeks report says.
As noted by other the above map of several States is incorrect, perhaps a mod can visit the link for the map and relink. If the original University map was wrong, it has been corrected on their site.
You wouldn’t want misinformation archived, right?
Pofarmer says: @ July 27, 2012 at 12:29 pm.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
A. Scott says: @ July 27, 2012 at 3:28 pm….
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Actually you are both right. It depends on who is planting what where. When NAFTA wiped out 75% of the Mexican peasant farmers Big Ag like Smithfield moved in. (Remember the swine flu scare was traced to a Mexican Smithfield farm?) (World wide Locations)
Here in the USA crops do not tassel or ripen on the same day so a country-wide drought will not necessarily effect the corn in NC the same way it effects the corn in Nebraska. Heck a farmer can intentionally stagger planting or plant crop varieties with different ripening dates to spread out harvesting and hedge his bets. Purdue University has a nice readable article on the subject: Assessing Effects of Drought on Corn Grain Yield
However as several of us have already pointed out this does not mean the MSM and commodity traders are not taking advantage by hyping a scare so trading is effected.
Thank you that is much much better.
Since I am red green colorblind, chart color choices often make then useless to me but barely acceptable to people with normal color vision.
Larry
The original posts here were literally unreadable to me.
Keep in mind 10% or slightly more of your audience is red-green colorblind and color combinations with poor gray tone separation are often indistinguishable to us.
For example dark green on a black background, dark blue red or purple on a black background, pale pastel greens and yellows, pale blues, lavanders, violets etc, without using color picker tools to read the actual color triplet of the chart traces I cannot tell these apart.
For me a pale cyan trace on an off white or pale yellow background is literally invisible, I won’t even know it is there unless there is some other clue to its existence. One way to provide that clue is to use different line styles or widths on colors in the same color family (ie blue, purple, or browns greens and reds. Or you can use only strongly contrasting color saturation differences, a bright intense blue vs a pale lavender I can tell them apart. If both have approximately the same grey tone, no chance without manually picking colors and figuring them out, and even then it is very tedious. My solution is often to ignore illustrations that are that much work to make sense of unless I have a great deal of interest in their content.
Change your charts to grey tone and if the grays do not have greater than 10% difference it is probably a bad choice of colors.
Larry
And Gail …. this year the staggered harvest is even more pronounced due to the nearly ideal early conditions of a dry and warm spring which allowed planting to be weeks early ….
Wow that is frigging bazaar — it is browser specific!
In fire fox that png comes up with an 87% gray background making the black text unreadable.
In internet exploder, it shows with a white background.
What software are you using to prepare the png image???
It is outputting a png that is not user friendly to firefox, and giving bogus background color information some how.
I have never had a problem with my images on image shack having such violent color shifts, so I suspect it is something about the png conversion. Might try saving it as a gif , then hosting it on image shack and see if the same thing happens.
Moderators if Bill wants to communicate with me free to forward my email info to him, as I would like to know the source of this issue too!
Larry
[Understood. Robt]
Here is the 2011 US Corn Production Map:
Here is the Drought Severity Map:
Here are two maps for last week precip estimate:
The USDA Crop Report did show significant downgrades – acres projected to be harvested and yields. Some outside “analysts” are predicting the sky is falling. The USDA was more measured. Some farmers in online forums etc are pretty bleak – but then you don’t usually hear from those doing well.
Appears there was decent precip over much of the growing range over the last week – and at a fairly critical stage as I understand it. Will be interesting to see the USDA Crop Report this week.
A. Scott says:
July 28, 2012 at 3:11 pm
And Gail …. this year the staggered harvest is even more pronounced due to the nearly ideal early conditions of a dry and warm spring which allowed planting to be weeks early ….
_____________________________
The growing conditions here in NC were certainly terrific this spring. It was 60F to the 80sF from March til June 20th and decent rain with decent sunshine. You might not plant in March, but the extra warmth of the ground is going to effect the speed of the seed germination when you do plant in April. The fact that the ground did not go to solid brick (red clay) until mid June makes a difference too. A quick look at Kansas City, MO shows it was about the same as middle NC. Good growing conditions til late June.
The amount of residual soil moisture when a drought hits will make a big difference as that Purdue article pointed out. Even with the strike of high temps and no rain my grass is still green as Ireland and growing like weeds. If I get a thunderstorm once a week my grass will stand up to no rain for 14 days even if it is 90-100F. It is when it goes over 14 days that it gets a bit dicey.
Corn is a C4 plant and well adapted to high daytime temperatures and intense sunlight. Mexico where maise comes from is not exactly a benign climate.
That is my understanding as well – heat and sunlight are good … but water – or at least soil moisture – is also important.
In the upper Midwest as I understand it planting was as much as 3 to 5 weeks early this year due to early frost out, warm temps, and early lack of rain meaning fields were immediately accessible. Lack of spring flooding was also a big factor.
Soon after planting many areas saw monsoon rains which gave a big shot of soil moisture.
The best man can do against drought is to build dams.
“.”
Working from the NCDC Global Summary of Day, I’ve calculated a Daily rising temp, the following nights drop in temp, and the difference between the two. Here is an introduction, and updated annual averages.
There is no trend of a loss of nightly cooling at the surface.
I also took just the northern hemisphere (North of 23 Lat), and calculated a daily average, showing the daily difference as the seasons progress for 1950-2010, again there is no trend.
A. Scott. I’ll try to be gentle. The July USDA numbers, when they were released, were already 3 weeks out of date. This is O.K. in a normal year, in a drought year it compounds their errors. Radar precipitation maps also generally GREATLY overstate the actual precip that hits the ground, especially in a drought. The extremely early planting dates and dry spring actually mean that corn planting was less staggered, not more, and it really doesn’t matter because in the scheme of things everything planted any time has gotten nailed. Also, look where this drought is centered. Some of the largest corn growing areas of the country, IL and IN. It has now progressed into IA and parts of MN, gotten most of NE, etc. The Jan projections by the USDA required EVERY state to exceed it’s old record production. Given the increase in acres in marginal growing areas, that was NEVER going to happen. At this point all they can do is reduce the numbers slowly so they don’t look like total idiots. Also, the WASDE reports are not about U.S. numbers. Look at world carryouts on oilseeds, corn, wheat, and coarse grains. They are headed the wrong way, and are now more strongly headed the wrong way. We are currently painted into a corner on grain production worldwide. The world wants more oilseeds, but we need more feed grains. This is going to leave a mark.
The costs in this post don’t include the higher prices for corn. If the crop comes in at 11 billion bushels at at a cost of an extra $2-3 per bushel, the cost to consumers is $22-33B.
@Tom:
Luckily we don’t live in the rest of the world…
Euro units are not particularly convenient nor have they done all that much to make teaching science easier. “Pounds per square inch” is much clearer than kPascals…
On the corn issue: If we just skipped mandating that corn be fed to cars for this year all of the “problem” goes away. It is the “Green Mandate” that is making food costs higher.
For all of those trashing Goldman Sachs & the rest of the speculators, you’re probably right to do so.
But do so with an eye to the whole picture, not just one snapshot.
For decades, agricultural subsidies have artificially kept prices low, devastating farmers in poorer countries and nations that refused to join the U.S. and Europe in their gluttonous overspending. Not only did this produce acres of wasted food in those countries — whole warehouses of butter rotted in some European countries, IIRC — but the rest of the world had their agricultural industries depleted. Many countries are now in a farmer crisis, where most farmers will retire in the next decade or two, with few coming up the ranks to take their place. That’s because so many people quit farming in previous decades due to depressed food prices.
Higher food prices are a bad thing, but only if they’re really high. Moderately high food prices drive investment in agriculture. Just like with natural gas and oil, the cure for high prices is high prices. The long term effect is to generate more food production, and — critically — higher capacity for food production.
In the grand scheme of things, the speculators are just correcting the short-term, kick-the-can-down-the-road politics of food that has distorted the food production system (and much else) since World War II.
For visual entertainment, here’s a submerged drought warning sign from Florida. (a la the London bus sign) Perhaps someone can negotiate the rights from the radio station? …
“Family members farming in southern IL are reporting ZERO bushels/acre unless significant and immediate rain relief occurs. Soy is also on the ropes.”
Southern Illinoisan here. It’s been well over 100 degrees every day this summer, very little rain, the rain is starting back up but for a lot of people it’s too late. I’m in a part of Southern Illinois that’s listed as “moderate” and the fields here are largely toast. Maybe we should look to farmers instead of armchair climatologists.
Wow. We have one person posting about Southern Illinois, which is listed as “moderate”, saying the fields are in trouble. As a fellow Southern Illinoisan, I can confirm this. It stayed somewhat green here, but many of the fields are bare, and the temperatures around 110 degrees every damn day didn’t help either. We’re not the worst of it. I’ve heard from too many people who’re talking about how much trouble they’ll have with insurance, and people with livestock who are talking about what a terrible price they’ll get on their cattle because everyone will be selling, because they have no way of feeding the livestock this winter. People who own horses for recreation are talking about selling them off for meat, because there’s no hay.
Then there’s someone claiming that the media reports are 99% a scam to drive up the prices. I hope so, for everyone’s sake. I hope we’re the worst of it, and everyone else is doing fine. Unfortunately, I hear too much talk “through the grapevine” by knowing farmers to think that. Maybe 50%, tops.
Personally, this is a subject where I’ll trust my own eyes, the word of farmers, and my common sense over the word of armchair climatologists and financial experts. | https://wattsupwiththat.com/2012/07/26/let-there-be-corn-reality-check-on-the-2012-drought-and-corn-yields-in-relation-to-droughts-of-the-past/ | CC-MAIN-2016-50 | refinedweb | 9,408 | 70.33 |
So, you want to use Kubernetes to orchestrate your containerized applications. Good for you. Kubernetes makes it easy to achieve enterprise-scale deployments. But before you actually go and install Kubernetes, there’s one thing you need to wrap your head around: Kubernetes distributions. In most cases, you wouldn’t install Kubernetes from source code. You’d instead use one of the various Kubernetes distributions that are offered from software companies and cloud vendors.
Here’s a primer on what a Kubernetes distribution is, and what the leading Kubernetes distros are today.
What Is Kubernetes?
Before talking about Kubernetes distributions, let’s briefly go over what Kubernetes is. Kubernetes is an open source platform for container orchestration. Kubernetes automates many of the tasks that are required to deploy applications using containers, including starting and stopping individual containers, as well as deciding which servers inside a cluster should host which containers.
Kubernetes is only one of several container orchestrators available; other popular options include Docker Swarm and Mesos Marathon. But, for reasons I won’t get into here, Kubernetes enjoys majority mindshare, and probably majority market share, too, when it comes to container orchestration.
What Is a Kubernetes Distribution?
As an open source project, Kubernetes makes it source code publicly and freely available on GitHub. Anyone can download, compile and install Kubernetes on the infrastructure of their choice using this source code. But most people who want to install Kubernetes would never download and compile the source code, for several reasons:
- Time and effort: There is a lot of Kubernetes source code, and building it all from scratch would require a fair amount of time and effort. Plus, you’d have to rebuild it all whenever you wanted to update your installation.
- Multiple components: Kubernetes is not a single application; it’s a suite of different applications and tools. If you install it from source, you’d have to install each of these parts separately on all of the servers that you are using to build your Kubernetes cluster.
- Complex configuration: Since Kubernetes comes with no installation wizard or auto-configuration script, you’d also have to configure all of Kubernetes’ various components manually.
Most people turn to a Kubernetes distribution to meet their container orchestration needs. A Kubernetes distribution is a software package that provides a pre-built version of Kubernetes. Most Kubernetes distributions also offer installation tools to make the setup process simpler. Some come with additional software integrations, too, to help handle tasks like monitoring and security.
In this sense, you can think of a Kubernetes distribution as being akin to a Linux distribution. When most people want to install Linux on a PC or server, they use a distribution that provides a pre-built Linux kernel integrated with various other software packages. Almost no one goes and downloads the Linux source code from scratch.
What Are the Main Kubernetes Distributions?
Technically speaking, any software package or platform that includes a pre-built version of Kubernetes counts as a Kubernetes distribution. Just as anyone can build his or her own Linux distribution, anyone can make a Kubernetes distribution.
However, if you want a Kubernetes distribution for getting serious work done, there are several main options available:
- OpenShift: OpenShift is a containerization platform that includes Kubernetes along with various other tools needed to run, deploy and manage containers. It’s a relatively inflexible Kubernetes distribution in the sense that it doesn’t give you a lot of choice when it comes to the tools and platforms you can use to build out your full containerization stack. On the other hand, OpenShift comes with almost everything you need out of the box. It’s about as close to turnkey Kubernetes as you can get. OpenShift is developed by Red Hat, and it can run both on-premise and in the cloud.
- Canonical Kubernetes: Canonical, the company that develops Ubuntu Linux, offers a robust and well-supported Kubernetes distribution. Other than requiring you to use Ubuntu, Canonical’s Kubernetes distribution is relatively “pure play” in that you can choose to integrate it with whichever other components you want (as long as you install them yourself). It can run on-premise or in the cloud.
- Google Kubernetes Engine: Google Cloud made a bet on Kubernetes back when other cloud vendors were focused on their own orchestration tools (which is unsurprising, because Google was a major backer of Kubernetes from the beginning of the project). Today, Google Kubernetes Engine is a flexible and simple Kubernetes distribution. Since it runs in Google Cloud, you don’t have to worry about installing it.
- Azure Kubernetes Service: Azure once placed its bets on Docker Swarm, but Azure Kubernetes Service (AKS) is now the main orchestration solution in the Azure cloud. This is a cloud-only Kubernetes distribution.
- AWS Elastic Kubernetes Service: Although Elastic Container Service (ECS), the original container service on the AWS cloud, has its own orchestrator, AWS also offers Elastic Kubernetes Services (EKS), an alternative that is built around Kubernetes. Like AKS, EKS runs only in the cloud.
- Rancher: Rancher’s container platform is now based on Kubernetes. Rancher’s Kubernetes distribution places a special emphasis on multi-cluster Kubernetes deployments, which could be useful if you want to deploy Kubernetes across multiple clouds or for some other reason don’t want to use namespaces (a Kubernetes feature that lets you divide a single cluster of servers into virtual zones) to isolate your Kubernetes workloads. Rancher can work on-premise, in the cloud or even across infrastructure that includes a mix of both. Rancher is similar to OpenShift in that it integrates Kubernetes with a variety of other tools, although it is a bit more flexible because it provides some choice in deciding which components to use.
Conclusion
To say that Kubernetes is a complex beast is to understate. Fortunately, Kubernetes distributions make it easy to take advantage of Kubernetes without all of the fuss and muss of setting up Kubernetes yourself from scratch. For most use cases, one of the Kubernetes distributions described above is the most practical way to get up and running with Kubernetes. | https://www.itprotoday.com/hybrid-cloud/how-choose-right-kubernetes-distribution | CC-MAIN-2021-43 | refinedweb | 1,023 | 52.39 |
#include <driver.h>
Inherited by Acoustics, Acts, AdaptiveMCL, AmtecPowerCube, Aodv, BumperSafe, Camera1394, CameraCompress, CameraV4L, canonvcc4, ClodBuster, Cmucam2, CMVisionBF, CRWIDevice, Dummy, ER, FakeLocalize, Festival, FixedTones, FlockOfBirds_Device, GarminNMEA, GzFiducial, GzLaser, GzPosition, GzPosition3d, GzSim, ImageSeq, InertiaCube2, Iwspy, Khepera, LaserBar, LaserBarcode, LaserCSpace, LaserFeature, LaserVisualBarcode, LaserVisualBW, LifoMCom, LinuxJoystick, LinuxWiFi, MapCspace, MapFile, MapScale, MicroStrain3DMG, Mixer, Nomad, NomadPosition, NomadSonar, Obot, P2OS, PassThrough, PTU46_Device, ReadLog, REB, RFLEX, SegwayRMP, ShapeTracker, SickLMS200, SickPLS, SimpleShape, SonyEVID30, Sphinx2, SrvAdv_LSD, SrvAdv_MDNS, StageDevice, UPCBarcode, VFH_Class, Waveaudio, Wavefront, and WriteLog.
List of all members.
This class manages driver subscriptions, threads, and data marshalling to/from device interfaces. All drivers inherit from this class, and most will overload the Setup(), Shutdown() and Main() methods.
Constructor for single-interface drivers.
Constructor for multiple-interface drivers.
Use AddInterface() to specify individual interfaces.
[virtual]
Destructor.
Add a new-style interface.
Add a new-style interface, with pre-allocated memory.
[inline]
Set/reset error code.
The Subscribe() and Unsubscribe() methods are used to control subscriptions to the driver; a driver MAY override them, but usually won't.
Reimplemented in Khepera, P2OS, REB, and RFLEX.
Unsubscribe from this driver.
[pure virtual]
Initialize the driver.
This function is called with the first client subscribes; it MUST be implemented by the driver.
Implemented in AdaptiveMCL, LifoMCom, ClodBuster, ER, Khepera, P2OS, REB, RFLEX, SegwayRMP, CRWIBumperDevice, CRWILaserDevice, CRWIPositionDevice, CRWIPowerDevice, CRWISonarDevice, and StageDevice.
Finalize the driver.
This function is called with the last client unsubscribes; it MUST be implemented by the driver.
[inline, virtual]
Do some extra initialization.
This method is called by Player on each driver after all drivers have been loaded, and immediately before entering the main loop, so override it in your driver subclass if you need to do some last minute setup with Player all set up and ready to go.
Start the driver thread.
This method is usually called from the overloaded Setup() method to create the driver thread. This will call Main().
Cancel (and wait for termination) of the driver thread.
This method is usually called from the overloaded Shutdown() method to terminate the driver thread.
Main method for driver thread.
Most drivers have their own thread of execution, created using StartThread(); this is the entry point for the driver thread, and must be overloaded by all threaded drivers.
Reimplemented in ClodBuster, ER, Khepera, P2OS, REB, RFLEX, CRWIBumperDevice, CRWILaserDevice, CRWIPositionDevice, CRWIPowerDevice, and CRWISonarDevice.
Cleanup method for driver thread (called when main exits).
Overload this method and to do additional cleanup when the driver thread exits.
Wait on the condition variable associated with this driver.
This method blocks until new data is available (as indicated by a call to PutData() or DataAvailable()). Usually called in the context of another driver thread.
Signal that new data is available.
Calling this method will release any threads currently waiting on this driver. Called automatically by the default PutData() implementation.
[static].
Reimplemented in AdaptiveMCL, and StageDevice.
Write data to the driver.
This method will usually be called by the driver.
Write data to the driver (short form).
Convenient short form of PutData() for single-interface drivers.
Read data from the driver.
This function will usually be called by the server.
Reimplemented in StageDevice.
Write a new command to the driver.
Read the current command for the driver.
This function will usually be called by the driver.
Read the current command (short form).
Convenient short form for single-interface drivers.
Clear the current command buffer.
This method is called by drivers to "consume" commands.
Convenient short form of ClearCommand(), for single-interface drivers.
Write configuration request to the request queue.
Unlike data and commands, requests are added to a queue. This function will usually be called by the server.
Reimplemented in LifoMCom.
Read the next configuration request from the request queue.
Unlike data and commands, requests are added to and removed from a queue. This method will usually be called by the driver.
Convenient short form of GetConfig(), for single-interface drivers.
Write configuration reply to the reply queue.
Convenient short form of PutReply() for single-interface drivers.
Convenient short form of PutReply(): empty reply
Convenient short form of PutReply(): empty reply for single-interface drivers.
Read configuration reply from reply queue.
A helper method for internal use; e.g., when one driver wants to make a request of another driver.
[protected, virtual]
Default device id (single-interface drivers).
Number of subscriptions to this driver.
Total number of entries in the device table using this driver. This is updated and read by the Device class.
If true, driver should be "always on", i.e., player will "subscribe" at startup, before any clients subscribe. The "alwayson" parameter in the config file can be used to turn this feature on as well (in which case this flag will be set to reflect that setting).
Last error value; useful for returning error codes from constructors | http://playerstage.sourceforge.net/doc/Player-1.6.5/player-html/classDriver.php | CC-MAIN-2015-27 | refinedweb | 796 | 52.36 |
Free for PREMIUM members
Submit
Learn how to a build a cloud-first strategyRegister Now
Sign up to receive Decoded, a new monthly digest with product updates, feature release info, continuing education opportunities, and more.
e.g.
public class StaticClass {
public static int blah (String s) {
return s + "... Blah";
}
}
in the above case... you can just do something like (in your main class):
System.out.println(StaticC
if you were to remove the static from the method declaration, you would have to do:
StaticClass sc = new StaticClass();
System.out.println(sc.blah
that being said, what is the error you are getting?
how are you calling your method?
...
}
When a method is static, everything it accesses must also be static, so your props variable must be declared static as well.
~IM
Make this method static:
public Vector getConvertedError( String errCode);
However, as Idle_Mind correctly pointed out (unless we're both terribly mistaken), whatever you use to access it must also be static.
For example,
public static void main ( String [] args ) {
// ... Some code ...
Vector v_errors = getConvertedError( s_errCode );
}
(this is all assuming that the "getConvertedError()" method, is in the same class as that calling it)..
However, if you call it from a method that is NOT static, you need to create a class object, like so:
public void methodName () {
// ... Some code ...
ClassName o_cn = new ClassName (); // Add any arguments to the constructor if necessary.
Vector c_errors = o_cn.getCnvertedErrors( s_errCode );
}
I hope that helps! :-)
[r.D] | https://www.experts-exchange.com/questions/21223429/can-this-be-made-static.html | CC-MAIN-2017-51 | refinedweb | 242 | 55.95 |
A very common requirement for applications that take a lot of time to
start, is to throw up a nice looking splash window at startup. Here I will
discuss how such splash windows can be developed using the new layering APIs available
with Win2000 and later versions of Windows.
This application also shows how visually rich
modules can be developed using C++ and Win32 without MFC.
First create the image you would like to show as the splash in
your favorite image editor. Then choose the part of the image you would not
like to be rendered on the screen and give it a particular color that does not
occur anywhere else on the image. For an example if your image is a red circle
with some logo or text inside it in blue then fill all the region outside the
circle with a color like green (anything other than red and blue, I used grey 128,128,128).
Note the R, G, B values of the color.
The interface of the class is very simple. Use the overloaded constructor
and specify the bitmap file path and the color on it to be made transparent.
Then use the method ShowSplash to show the splash screen. The function
returns immediately. You can then complete all your initialization and at the end, call
CloseSplash to remove the splash window. Following is the code
ShowSplash
CloseSplash
#include "splash.h"
// Include the class header in your file
// Give the path of the image and the transparent color
CSplash splash1(TEXT(".\\Splash.bmp"), RGB(128, 128, 128));
// Display the splash
splash1.ShowSplash();
// your start up initialization code goes here
// Close the splash screen
splash1.CloseSplash();
The other alternative is to use the default constructor and then use individual calls
to set up the bitmap path and transparent color. In the sample project the
source file SplashClient.cpp contains code that demonstrates both ways of using
the CSplash class.
CSplash
The class is implemented using Win32 API calls.
The constructor stores the pointer to the layering Win32 call
SetLayeredWindowAttributes that is used to make the window
transparent from USER32.dll into a function pointer
g_pSetLayeredWindowAttributes.
SetLayeredWindowAttributes
g_pSetLayeredWindowAttributes
The SetBitmap method opens the bitmap file, loads it and
stores a handle to it in the m_hBitmap member. It also stores
its width and height in m_dwWidth and m_dwHeight.
SetBitmap
m_hBitmap
m_dwWidth
m_dwHeight
SetTransparentColor calls MakeTransparent which
sets the layering style of the window and uses the function pointer
g_pSetLayeredWindowAttributes to make
the requested color transparent for the CSplash window.
SetTransparentColor
MakeTransparent
The private function RegAndCreateWindow registers and creates
the window for the splash window. The window procedure used with this window
is an external function ExtWndProc. With the
CreateWindowEx call we pass the this pointer. This
makes the pointer to CSplash class reach the
ExtWndProc function as lparam with
the WM_CREATE message. ExtWndProc stores this
pointer and forwards all subsequent messages to the member function
CSplash::WindowProc
RegAndCreateWindow
ExtWndProc
CreateWindowEx
this
lparam
WM_CREATE
CSplash::WindowProc
CSplash::WindowProc contains code to call OnPaint
on receiving WM_PAINT message. Inside OnPaint
we select the bitmap (in m_hBitmap) to a device context
in memory and then BitBlt it to the screen device context.
And we are done displaying the splash window.
OnPaint
WM_PAINT
BitBlt
To close the window CloseSplash destroys the window and
unregisters the class associated with the | http://www.codeproject.com/Articles/7658/CSplash-A-Splash-Window-Class?PageFlow=FixedWidth | CC-MAIN-2013-20 | refinedweb | 558 | 54.73 |
Hi, folks, I was trying to install gdesklets (did on the *BSD so easily, thought it could be easy on Slackare :roll: ) but when I try to start the daemon, i got a timeout and looking in the log file it says:
So, if someone had this problem before, I am becoming desperate :drown:So, if someone had this problem before, I am becoming desperate :drown:Quote:
Traceback (most recent call last):
File "/usr/share/gdesklets/gdesklets-daemon", line 177, in ?
gdesklets_main()
File "/usr/share/gdesklets/gdesklets-daemon", line 160, in g
desklets_main
init()
File "/usr/share/gdesklets/main/__init__.py", line 59, in in
it
import gnome.ui
ImportError: No module named gnome.ui
~ | http://www.linuxforums.org/forum/applications/another-gdesklets-problem-print-29595.html | CC-MAIN-2018-26 | refinedweb | 115 | 63.53 |
Teaching Kids Coding: Searching Lists
Kids need to learn how to code all of the basics. Searching through lists is a very important task that you and your coder might want to have your program accomplish.
Linear versus binary searching algorithms
When it comes to lists, linear search is pretty straightforward. Essentially you start at the beginning of the list, and check to see if the first item is the item you’re looking for. If it is, you’re done! If it isn’t, you move to the next item in the list. And you repeat this sequence until you’ve either found your item, or you’ve reached the end of the list and know that your item isn’t in the list.
Binary searching is a little more complicated than linear searching. Luckily, you’ve probably already used the binary search algorithm in your everyday life though. For example, when you’re looking for the word “Search” in a dictionary, you probably don’t read each word in order until you get to the word “Search.”
Instead, you flip to the middle of the dictionary and check to see whether the word “Search” comes before the page you opened, or after. You know this because the dictionary is in alphabetical order! If “Search” comes after the page you’re on, you flip to a page to the right and repeat the process until you end up on the right page.
This is called binary search, also known as “divide and conquer.” When writing the code for this type of search, you literally go to the middle of the list, and then the middle of the side that you know the word is on; each time dividing your search space by half. In your everyday life you might not be exactly at half, but it’s close enough to still call it a binary search.
The only caveat to binary search is that the list must be sorted for the sorting to work. A linear search implementation doesn’t require your list to be sorted before searching.
Common application: Finding a phone number
Here, you find the code for finding a phone number in a list of names that are ordered alphabetically. This code is written in Java and, because the names are sorted but the phone numbers are not, this implementation uses the linear search algorithm.
import java.util.ArrayList; import java.util.Arrays; import java.util.List; import java.util.Scanner; public class sort2 { public static void main(String [] args) { Person sarah = new Person("Sarah", "555-7765"); Person camille = new Person("Camille", "555-9834"); Person steve = new Person("Steve", "555-2346"); Person rebecca = new Person("Rebecca", "555-1268"); List directory = Arrays.asList(sarah, camille, steve, rebecca); Scanner scanner = new Scanner(System.in); System.out.println("Please enter the phone number so I can tell you the name: "); String number = scanner.nextLine(); String nameFound = ""; for(int index = 0; index < directory.size(); index++) { Person personInDirectory = directory.get(index); String numberInDirectory = personInDirectory.getNumber(); if(numberInDirectory.equals(number)) { nameFound = personInDirectory.getName(); break; } } if(nameFound.equals("")) { System.out.println("Sorry, the number you are looking for does not belong to anyone in this directory"); } else { System.out.println("The number " + number + " belongs to " + nameFound); } } }
This code relies on another class in a file called Person.java. The code for this file is:
public class Person { String name; String number; public Person(String p_name, String p_number) { name = p_name; number = p_number; } public String getName() { return name; } public String getNumber() { return number; } } | https://www.dummies.com/programming/coding-for-kids/teaching-kids-coding-searching-lists/ | CC-MAIN-2019-47 | refinedweb | 588 | 64.1 |
In “Iterative Inorder Traversal of a Binary Tree” problem we are given a binary tree. We need to traverse it in inorder fashion “iteratively”, without the recursion.
Example
2 / \ 1 3 / \ 4 5
4 1 5 2 3
1 / \ 2 3 / \ 4 6 \ 7
4 2 1 3 6 7
Algorithm
- Initialize a variable “curNode” as the root of the binary tree
- Initialize an empty stack, containing the nodes as they are traversed
- Do the following until either the stack is not empty or curNode has not become NULL:
- While curNode is not NULL:
- Push curNode into the stack
- Keep going to the leftmost child by changing the current node as curNode = curNode->left
- Now, the top node in the stack is the leftmost node of the current subtree, so we print the value of the top node in the stack
- Assign curNode as the right child of the top node in the stack as curNode = stack.top()->right
- Keep going to the leftmost child by changing the current node as curNode = curNode->left to process right subtree of this leftmost node
- Pop an element out of the stack
Explanation
The program uses the idea the stack pops out the most recent element added, In the algorithm explained above, we simply infer that if the current node (which is initially the root of the tree) has a left child, then we keep pushing its left child into the stack until no more left children remain.
When the case arises such that the current node doesn’t have a left child, it’s clear that the top node in the stack is the “most recently leftmost node” added. So, it should come first in the remaining order of traversal. So, we start printing/adding it to our order list and pop it out of the stack. Once done, we are now clear about the fact that in the Left-curNode-Right (the inorder sequence), Left and Node part are traversed. So, the node’s right subtree is into the process!
Intuitively, since we want to apply the same process to the right subtree of the current node, we can generalize it as: curNode = curNode->right.
Implementation
C++ Program of Iterative Inorder Traversal of a Binary Tree
#include <bits/stdc++.h> using namespace std; struct treeNode { int value; treeNode *left , *right; treeNode(int x) { value = x; left = NULL; right = NULL; } }; void iterativeInorder(treeNode* root) { stack <treeNode*> ; treeNode* curNode = root;elements while(curNode != NULL || !elements.empty()) { while(curNode != NULL) { elements.push(curNode); curNode = curNode->left; } cout << elements.top()->value << " "; curNode = elements.top()->right; elements.pop(); } } int main() { treeNode* root = new treeNode(2); root->left = new treeNode(1); root->right = new treeNode(3); root->left->left = new treeNode(4); root->left->right = new treeNode(5); iterativeInorder(root); cout << endl; return 0; }
4 1 5 2 3
Java Program of Iterative Inorder Traversal of a Binary Tree
import java.util.Stack; class treeNode { int value; treeNode left, right; public treeNode(int x) { value= x; left = right = null; } } class Tree { treeNode root; void iterativeInorder() { Stack<treeNode> elements = new Stack<treeNode>(); treeNode curNode = root; while (curNode != null || !elements.empty()) { while (curNode != null) { elements.push(curNode); curNode = curNode.left; } curNode = elements.peek().right; System.out.print(elements.peek().value + " "); elements.pop(); } } public static void main(String args[]) { Tree tree = new Tree(); tree.root = new treeNode(2); tree.root.left = new treeNode(1); tree.root.left.left = new treeNode(4); tree.root.left.right = new treeNode(5); tree.root.right = new treeNode(3); tree.iterativeInorder(); } }
4 1 5 2 3
Complexity Analysis
Time Complexity of Iterative Inorder Traversal of a Binary Tree
The time complexity is O(N), as we visit each node exactly once. Here, N is the number of nodes in the binary tree.
Space Complexity of Iterative Inorder Traversal of a Binary Tree
The space complexity is O(N). Considering the worst case, where the tree can be a skewed one, space complexity will be equal to the number of nodes in the binary tree. | https://www.tutorialcup.com/interview/tree/iterative-inorder-traversal-of-a-binary-tree.htm | CC-MAIN-2021-04 | refinedweb | 669 | 59.23 |
YOUR OFFICIAL RASPBERRY PI MAGAZINE
The official Raspberry Pi magazine
Issue 48
August 2016
raspberrypi.org/magpi
WINDOWS 10
IOT
CORE
ON RASPBERRY PI Create Internet of Things devices with Microsoft's exciting new build
INTERNETPOWERED ROBOTS!
SET UP DROPBOX ON YOUR PI
Share your files in the cloud in easy steps
Battling bots controlled by Twitch.tv viewers
MAKE MUSIC WITH YOUR PI
BUILD A TWEET-OMETER
Sonic Pi creator Sam Aaron shows you how
Also inside: > BUILD THE CONTROLS FOR AN ARCADE MACHINE > NATUREBYTES WILDLIFE CAMERA KIT TESTED > NASA HELPS SEND CODE CLUB TO SPACE > CODE IN C WITH OUR EXPERT SERIES
Chart your social media success
GO FOR GOLD!
Four amazing Olympicsinspired games to make on your Raspberry pi
THE ONLY PI MAGAZINE WRITTEN BY THE RASPBERRY PI COMMUNITY
Issue 48 • Aug 2016 • £5.99
08 9 772051 998001 THE OFFICIAL PI MAGAZINE! indows is synonymous with computers. After decades of it being the most popular operating system in the home, it’s been interesting to see Microsoft experimenting with expanding the uses of Windows. Recently, the firm got behind the maker movement, which led to Windows 10 being ported to the Raspberry Pi last year, in the form of Windows 10 IoT Core. There’s been a great reaction from the maker community, and it’s currently included on NOOBS for everyone to use. Now with the Raspberry Pi 3 out, Windows 10 IoT Core has much more power to play with and the team at Microsoft are keen to show off what it can do. They’ve even made a new kit full of great project parts so you can make the most out of Windows 10 on Raspberry Pi. To celebrate this, we’ve got a whole feature on Windows 10 IoT Core – starting on page 14 – that shows you what’s in the new kit, some of the amazing things you can do with IoT Core on Raspberry Pi, and some quick lessons to get started making with Windows. We hope you enjoy the issue!
W
SEE PAGE 66 FOR DETAILS
THIS MONTH: 14 WINDOWS 10 IOT CORE
Create Raspberry Pi IoT devices with Microsoft’s OS
36 MOTORISED SKATEBOARD
Ride to school/work on this Pi-powered electric board
44 RGB LED TWEET-O-METER
See how your Tweets are doing with this smart project
68 SCRATCH OLYMPICS 2016
Rob Zwetsloot Features Editor
Go for gold with these fun sporting simulations
FIND US ONLINE raspberrypi.org/magpi EDITORIAL
DESIGN
PUBLISHING
DISTRIBUTION
SUBSCRIPTIONS
CONTRIBUTORS
Managing Editor: Russell Barnes russell@raspberrypi.org Features Editor: Rob Zwetsloot, David Crookes, Lucy Hattersley, Richard Hayler, Phil King, Simon Long, Ben Nuttall,.
August April 2016
3
Contents
raspberrypi.org/magpi
Issue 48 August 2016
COVER FEATURE
TUTORIALS > TWEET-O-METER Use an RGB LED to show the popularity of tweets
> INTRODUCING C
44 46
Create variables and perform mathematical operations
> PI BAKERY
48
Make an Olympic swimming simulator with Mike Cook
> DROPBOX ON PI Upload and download files from the cloud service
> SONIC PI Learn how to master its powerful slicing capabilities
> SCRATCH BOAT RACE GAME
54 56 58
Make a fun boat race game in Scratch on the Pi
> POETRY GENERATOR Use word lists in Scratch to generate random poems
60
> ARCADE MACHINE BUILD (PT 2) 62 Create the controls for your RaspCade arcade machine
IN THE NEWS
One Giant Leap for Coding
Young coders in Australia were asked to aim for the moon in a world record attempt
4
August 2016
6
14
WINDOWS 10 IOT CORE Learn to use IoT Core, make projects, and get more out of your Raspberry Pi
LET’S ROBOT
8
See what happens when a bunch of robots are controlled by web chat room users
Asthma Pi Nine-year-old Arnav Sharma has built an award-winning air monitoring kit for asthma sufferers
10 raspberrypi.org/magpi
Contents THE BIG FEATURE
68 A treasure trove of Pimoroni
HATs and Goodies
95 MAKE YOUR OWN JAM
SCRATCH OLYMPICS 2016
Practical tips for setting up your own Raspberry Jam from educator Cat Lamin
Go for gold by programming four fun Olympic events in Scratch: archery, weightlifting, hurdles, and synchronised swimming
YOUR PROJECTS
36
88
REGULARS > NEWS The biggest stories from the world of Raspberry Pi
06
> TECHNICAL FAQ
64
> BOOK REVIEWS
84
> THE FINAL WORD
96
Got a problem? Maybe we can help…
The best reads for coders and hackers
More musings from Raspberry Pi’s Matt Richardson
COMMUNITY
PI SKATEBOARD
> THIS MONTH IN PI
86
> EVENTS
90
Forget hoverboards – commute in style with this Pi-powered electric deck
Discoverer
34
Talk to us about the magazine or Raspberry Pi
This homage to 2001: A Space Odyssey makes for a stylish voice assistant
raspberrypi.org/magpi
92
38
Take a tour of this incredible internetconnected cityscape built from bricks
HAL 9000
Get involved in a community gathering near you
> YOUR LETTERS
Search for treasure in the back yard with this metal-detecting robot
Internet of LEGO
We round up other goings-on in the Pi community
REVIEWS 40
> WILDLIFE CAM KIT
80
> ANALOG ZERO
82
> PICO-8
83 August 2016
5
News
FEATURE
ONE GIANT LEAP FOR CODING Young coders in Australia were asked to aim for the moon in a world record attempt egardless of the pursuit, there are few more exciting things in life than an attempt at a world record. Whether you’re trying to grow an extra centimetre of fingernail in order to beat Shridar Chillai (how he types is anyone’s guess), or whether you’re looking to break the most toilet seats using only your head (46, as it stands), the will to be the best can take some dedication. Code Club Australia (CCA) knows that more than most. On 20 July, the club’s organisers looked to encourage 10,000 kids to take part in Moonhack, an online programming workshop that asked children to choose a moon-themed coding
R
Below Children aimed for the moon in their bid to set a world coding record
activity using Python, Scratch, or a combination of HTML and CSS. In doing so, it sought to set a world record for the largest number of children coding at the same time. Much to the delight of CCA general manager Kelly Tagalan, children rose to the challenge, just as Annie Parker did in 2014 when she set up Code Club Australia for the benefit of 9 to 11-yearolds and watched its presence extend to 1,000 schools. “We had over 1,000 kids straight after the announcement,” says Tagalan, who came up with the idea for setting a world record as a way of achieving something big with a simple concept. “Principals, teachers, and
parents demonstrated a real zeal for what we were hoping to achieve.”
An important anniversary But it wasn’t easy. Working with CCA board member Clive Dickens, the first task was to connect the idea to a purpose. “We discovered that the anniversary of the Apollo 11 moon landing was 20 July,” Tagalan recalls. She added that it was a perfect tie-in: due to the position of the earth when Neil Armstrong walked on the lunar surface, the Parkes Dish in New South Wales received the first images of the US astronaut and put them out for the world. “It was live-streaming long before YouTube, and [with
WALKING TALL
Image by NASA
It was rather apt for Moonhack to take place in Australia on the anniversary of the Apollo 11 moon landing. “It was through the antenna dish now at this complex [in Canberra] that the historic images of Neil Armstrong stepping onto the surface of the moon were received and relayed to the entire world,” Nagle explains.
6
August 2016
raspberrypi.org/magpi
News
ONE GIANT LEAP FOR CODING
Above The Code Club Australia website - codeclubau.org – had a dedicated Moonhack section
Right Posters could be printed and distributed, to drum up greater numbers of children
Moonhack] we were hoping to make a connection for Australian kids to a very important moment in world history that had a significant Australian contribution,” she says. It was a wise choice. The spacebased theme and CCA’s giant leap of ambition (10,000 children represents a quarter of so of the 45,000 Australian kids engaged in Code Clubs) soon brought high-profile backing, not least from the Canberra Deep Space Communication Complex (CDSCC), which maintains one of three stations in the Deep Space Network on behalf of NASA. Glen Nagle, the NASA operations support officer for the CDSCC, contacted the organisers to discuss ways of supporting the activity. “When Glen reached out to us, we were – pardon the pun – over the moon,” Tagalan says. “We were really grateful that the announcement had reached that far, and pleased to have had his expertise on the history of the moon landing. It’s a wonderful thing to have such a great organisation stand behind getting kids ready for fantastic futures through education.”
promote science and technology education to the next generation of space explorers. “The Moonhack program certainly fitted into our theme of promoting space science to schoolchildren, and showing how they don’t have to wait years to get a degree to become a scientist or engineer,” Nagle explains. As such, the CDSCC worked with Moonhack to make more teachers aware of the event and encourage them to introduce it into their schools. “We included messages about Moonhack in our teachers’ kits and spread the message through our social media channels,” Nagle continues. To encourage them further, some of the best students from Moonhack are being invited to take a look behind the scenes at Australia’s role in space exploration at the CDSCC. “It was really wonderful to have the invitation to visit the NASA CDSCC centre extended to some Code Club kids,” says Tagalan. It’s certainly been a good year for tying coding to all things space. Astro Pi, which was taken on board the International Space Station and used by British astronaut Tim Peake, attracted lots of attention. “Having students being able to run experiments on the ISS is a brilliant way to engage and excite them,” Nagle comments. “Australia is participating in these sorts of activities through the Quberider programme, which
Aim for the moon
For the CDSCC, Moonhack presented a golden opportunity. Its deep space tracking station has its own education program which encourages more than 10,000 students each year to pursue the STEM subjects, and seeks to raspberrypi.org/magpi
RINGING THE CHANGES Last December, Code Club Australia benefited from a cash injection of a million Australian dollars from the government and the Telstra Foundation, the philanthropic arm of the telecommunications giant. The Code Club and Telstra are intrinsically linked, though, since its founder Annie Parker worked for the Foundation’s startup incubator programme, muru-D. Telstra chairman Catherine Livingstone (pictured) backs the club, saying: “So many of tomorrow’s jobs are going to require some level of knowledge about computer coding.” will see students coding small instrumentation packages to perform experiments remotely on ISS.” Such activity, he says, allows for a fresh approach to the promotion of coding. “Allowing students to do more than just the theory of science, and actually having them build and operate instruments of their own design in space, can only capture their imaginations and perhaps steer them on a path to future careers in this area. The only limit to what these children can do is their imagination.” August 2016
7
News
FEATURE
Watch Let’s Robot
TWITCHCONTROLLED at letsrobot.tv!
ROBOTS What happens when you take the concept of Twitch Plays Pokémon and transpose it to stream-controlled robots doing arts and crafts?
witch has long since stopped just being a place to watch people play games. Twitch Plays Pokémon had the audience play Pokémon games. Bob Ross spent eleven days straight painting happy little trees to launch the creative channel. Now, you can control robots live via the chat with Let’s Robot.
T
CONTROLLING A ROBOT ON TWITCH Like a lot of Twitch-controlled projects, simply writing keywords in the chat causes the robot to perform various actions. The most basic commands are for movement:
“Let’s Robot began as an attempt to make the world’s first liveinteractive online show, by using robots,” Jillian Ogle, founder and CEO of Let’s Robot, tells us. “At the time I started this, I was an indie game developer and used to work for big companies like Disney. “The way it works is the robot streams a live video feed to a web page (letsrobot.tv), where the audience can see the video and use a chat room to talk to each other and send the robot commands.
The robot parses the commands and then passes the instructions to the microcontroller on board. We use a lot of Arduino-type controllers for the actual control of the robot.” The robots have got up to a lot of mischief in their time on Twitch. There have been story-based adventures through cardboard dungeons, harrowing pizza deliveries, artistic competitions, and even the robots feeding a man random items of food set out on a table for the robots to grab.
forward, back, left, right However, you can also control the LEDs on a robot using commands such as: LED All (Colour); for example, LED All Blue
LED (Number) (Colour); for example, LED 7 Blue You can also give them RGB numbers for colour. We understand that Easter eggs have been programmed into individual robots as well, so look for more interesting ways to control them.
8
August 2016
The outdoor robot is much more robust than the others, with a few Mars Rover-like features to aid it
raspberrypi.org/magpi
WATCH LET’S ROBOT
News
HIGHLIGHT REEL Catch up on some of the robot’s best adventures so far…
Feed the human
magpi.cc/2a33DUG
Poor intern Carl was used as a prop in a recent stream, tasking the chat to feed him via the robots. Apparently, they weren’t so good at the task, as shown by the mess made around Carl’s hole in the table. These robots are incredible artists, putting the likes of Jackson Pollock to shame
Mix in the Raspberry Pi
As well as Arduinos, the robots use a lot of Raspberry Pis, as Jillian explains to us: “The Raspberry Pi is the most consistent component we have in all of our robots. It’s used to capture and upload video in realtime using the camera. It also acts
wheels, and lights as well as the Twitch control aspect. They’re mostly small so it’s easier to make stuff for them, some have grippers and there’s even one with a sword. For outdoor incursions, there’s also a Mars Rover-esque machine. There’s much more to come, though, with new robots being
The robots have a few things in common; they all have cameras, wheels, and lights as a messenger between our web server and microcontrollers. “We’ve actually tested a lot of boards besides the Pi that are much more powerful; however, none of them really have the supporting tools, hardware, or online support and community that makes the Raspberry Pi so easy to work with. We also like the Pi a lot, because we would actually love for people to be able to make robots like ours and join in the fun.” The robots have a few things in common; they all have cameras, raspberrypi.org/magpi
built all the time and future plans in place for robots after that. “We have many plans,” Jillian mentions. “We’re working to make a custom interface that’s actually designed for many users to collaborate or compete for control of a single robot. We’re trying to make the audience participation more fun and engaging, and working on ways to lower the latency.” It’s all very exciting, and the cake baking challenge for the robots is coming up soon as well. We’re sure it will be a masterpiece.
Party time
youtu.be/nEbiwGP_lGo
To celebrate the one-year birthday of Aylobot, the robots decided to hold a party. This started with the robots writing out invitations, with one robot writing the address and another putting the stamps on the letters to send them out. They weren’t very legible, unfortunately.
Cup-flipping robot
magpi.cc/2a1QcRV
This adorable little robot has one function and one function only: to grab and flip a cup. Or anything that can be grabbed from the sides and thrown into the air. It’s a tough job, but some robot’s got to do it. August 2016
9
News
FARNELL SOLD
PI PARTNER PREMIER FARNELL SOLD A licensed Raspberry Pi manufacturer is acquired by Swiss firm Dätwyler hile some misleading newspaper headlines stated that the maker of Raspberry Pis had been sold, there is no need for concern! The reality is that Premier Farnell, which manufactures and distributes Pis under licence from the Raspberry Pi Foundation, is the subject of a takeover bid by Swiss conglomerate Dätwyler. With a recommended all-cash offer of 165p per share, this values the Leeds-based firm, which includes the Element14 group, at £792m. According
W
PI FACTORY Most Raspberry Pi production, including 100% of Pi Zeros, occurs at the Sony factory in Pencoed, South Wales. Manufacture of all Pi models apart from the Zero is handled by two main licensed partners, Premier Farnell and RS Components, who decide how many should be made to meet demand. Official Raspberry Pi add-ons, including the Sense HAT and Camera Module, are also made at the Pencoed factory.
10
August 2016
to a statement from Dätwyler, “Premier Farnell and Dätwyler are two leading distributors of electronic components, with complementary product ranges and geographic market presence.” While Raspberry Pi sales have been booming, with 500,000 Pi 3s sold in the new model’s launch month alone, the Premier Farnell business as a whole has been struggling with weak sales, resulting in a dividend cut and decline in the share price until the offer was announced on 14 June. Premier Farnell is one of the two key licensed Raspberry Pi manufacturers, along with RS Components. “We license the technical designs for the Pi and the various trademarks to the partners, who then decide how many to make,” explains Raspberry Pi Trading CEO Eben Upton. “They are required to keep the product in stock, and to keep the price at or below an agreed maximum ($35 in the case of the Model 3B), but beyond that the practical decision‑making is up to them.” Eben doesn’t believe Raspberry Pi production will be affected: “We’ve had a fantastic relationship with Premier Farnell over the last four and a bit years, and my expectation is that the acquisition won’t do anything to change that. I’ve already had some contact with the
COMBINED FORCE Assuming the takeover – which is scheduled to close in the fourth quarter – goes ahead as planned, the combined group will boast a workforce of over 10,000. Based in Switzerland, the Dätwyler Group already comprises more than 50 operating companies, with sales in over 100 countries, generating an annual revenue of around CH1,200m (£947m). Its two main divisions are Technical Components (electronics, automation, and ICT) and Sealing Solutions (healthcare, civil engineering, automotive, and consumer goods).
team at Dätwyler and it’s clear they understand the value of Pi to the combined business.” Indeed, Eben hopes the acquisition may have some benefits, possibly helping to increase Pi sales in other parts of Europe: “Dätwyler have strengths in B2C sales in some areas where the licensees have traditionally only had B2B channels. There is one Dätwyler business [Reichelt Elektronik] which is already a significant Raspberry Pi reseller, and I hope bringing the two businesses under the same umbrella will provide a boost.” raspberrypi.org/magpi
News
ASTHMA PI KIT
AWARD-WINNING
ASTHMA PI KIT Nine-year-old Arnav Sharma’s Pi-powered asthma management kit bags two awards rnav Sharma’s AsthmaPi kit (youtu.be/3Dniuy4-D3M) has won two Tech4Good 2016 awards: People’s and Winner of Winners. Based around a Raspberry Pi and Sense HAT, the kit can help parents of children suffering from asthma. Upon learning that asthma attacks can be triggered by dust, pollen, pollutants, and other environmental factors, keen coder Arnav set about researching the possibility of creating a detection device based on the Raspberry Pi. It took him two months to build a prototype that monitors all the main asthma triggers.
A
raspberrypi.org/magpi
The Sense HAT on the Pi measures humidity and temperature. A gas sensor, wired up to an MCP3008 ADC converter on a breadboard, monitors hazardous atmospheric gases: ammonia, carbon monoxide, and CO2. Finally, a dust sensor connected via an Arduino Uno detects particles and pollen in the air. Arnav programmed the kit using Python and C++. “Coding was easier for the Sense HAT, but was very difficult for other sensors.” As well as monitoring asthma triggers, the kit can even send email and SMS text alerts to prompt users to take their medication. Arnav has also
TECH4GOOD AWARDS Held annually since 2011, the Tech4Good awards (tech4goodawards.com) recognise organisations and individuals who use digital technology to improve the lives of others. Previous winners include the Raspberry Pi itself. A finalist for the BT Young Pioneer Award, Arnav’s AsthmaPi kit picked up the People’s Award in a public vote of over 38,000 people. It also took home the Winner of Winners Award, decided by an audience vote at the ceremony, using glow sticks! created an accompanying booklet to make understanding asthma simple. “I would love to produce a commercial version of the device,” Arnav tells us. The Asthma UK charity has already shown an interest in the kit, so who knows?
August 2016
11
Feature
WINDOWS 10 IOT CORE
"
our y f o t e ou Pi! r o y m Get aspberr R
" 14
August 2016
"
Learn to use IoT Core!
" raspberrypi.org/magpi
Feature
"
"
ver Disco ojects! ng pr amazi
WINDOWS 10
IOT CORE The Raspberry Pi 3 opens up Windows 10 IoT Core to greater uses – here’s how to get started with it... t the launch of the Raspberry Pi 3 in London, the Microsoft team were present to show off a new project they’d created to make use of the Pi and Windows 10. On the face of it, demonstrating how a feedback loop could be created to keep a wheel spinning at roughly the same speed, even with varying resistance, may not seem amazing, but a lot was going on to keep it turning.
A
raspberrypi.org/magpi
This is the strength of Windows 10 IoT Core on the Raspberry Pi 3, allowing for more power to make unique projects that connect to the Internet of Things. With a new kit on the way to help people get started with Windows 10 on Raspberry Pi 3, we thought we’d give you a head start so you can begin making more amazing and varied projects on your Raspberry Pi.
August 2016
15
Feature
WINDOWS 10 IOT CORE
RASPBERRY PI AND
WINDOWS
How Microsoft is keen to support the makers of the future with Windows 10 IoT Core
WHAT'S IN THE BOX The new kit has a wealth of great components to get you started doing some amazing things with Windows 10 IoT Core: > > 5-inch display > Grove shield (to plug
components into) > Ultrasonic range finder > Sound sensor > Temperature and humidity
sensor > Light sensor > Rotary angle sensor > LCD RGB backlight > LED bar
icrosoft is launching a new kit for the Raspberry Pi that allows you to make some amazing projects with Windows 10 IoT Core. We caught up with Asobo Mongwa and Dan Rosenstein from the Microsoft team behind the kit, to find out how we got here.
M
The MagPi: How did Windows 10 IoT Core on Raspberry Pi start? Dan Rosenstein: Raspberry Pi was taking off in the market and at the same time we were working on Windows 10 IoT Core. Since the very beginning, our belief has 16
August 2016
been that makers are the creation engine and game changer for IoT, and that the maker movement was going to be the catalyst for that change. Our design goal for Windows 10 IoT Core was to make the programming model easy and accessible to makers, and work with the large set of democratised hardware already in the maker ecosystem. Our goal was to join makers where they were. The Raspberry Pi Foundation provided the perfect mix to help launch Windows 10 IoT Core: it offered the hardware, market share, partnership, and community that we wanted to be part of.
Microsoft was at the Raspberry Pi 3 launch to show off its brand new wheel project. Find out the inner workings of it here: magpi.cc/2a8rak9
> Relay > Button > Buzzer > HDMI cable
The new Microsoft Grove kit is a specially put together package that lets you make the most of Windows 10 IoT Core on Raspberry Pi
raspberrypi.org/magpi
Feature MEET THE TEAM
Dan Rosenstein (left) is a Principal Technical Program Manager and Asobo Mongwa (right) is a Program Manager; they both work at Microsoft's Operating System Group in the Internet of Things Maker Program Management team. Here they are at the National Maker Faire in DC in June, one of many maker events that the team attend to show off the power of Windows 10 IoT Core on Raspberry Pi to the maker community.
TM: What's the reaction been like in the community? Asobo Mongwa: We’re pleased to see the excitement among the maker community, as more and more become familiar with the investment Microsoft is making in IoT, and the continuous improvements we’re bringing to Windows 10 on the Raspberry Pi. We’ve heard this excitement first-hand during the recent Maker Faires in the Bay Area and Washington DC. We can’t wait to see how makers continue to create new innovations using Windows 10 IoT Core and the Pi 2 and 3, particularly now that we’ve shared information on our Maker To Market pathways: solutions we’ve put in place to enable makers to accelerate the commercialisation of their idea in the Marketplace.
TM: What was the idea behind the new kit? AM: The upcoming Microsoft Grove Kit is in collaboration with Seeed Studio. Seeed Studio took great interest in Microsoft’s investments in the IoT and maker space, and approached us for a co-engineering engagement on
raspberrypi.org/magpi
the kit. We see it as an indicator that influencers in the industry recognise the impact in the work we’re doing, and want to partner with us in helping the ecosystem to release the next generation of solutions on the Raspberry Pi with Windows 10 IoT Core. The Grove Kit is an enabler, and it’s designed for hardware and software developers at all levels of technical expertise. What I personally like about it is the fact that it takes away complexities like soldering, jumper cables, and breadboards, and allows the user to focus on their creativity. It’s modular, so you can use the components included in this kit as well as other additional Grove components.
TM: How much more can Pi 3 do versus previous iterations? DR: There were a few fundamental changes with the Pi 3, all of which we enabled on Windows 10 IoT core: faster clock speed, upgraded SD controller, on-board WiFi, and on-board Bluetooth. Because of the faster clock speed, GPIO performance with our Lightning driver was measured at a higher throughput.
TM: How important is the relationship between Windows 10 and Raspberry Pi to Microsoft? AM: Raspberry Pi has had a significant impact in education and the maker movement. We’re excited to see the increased interest in the industrial market for designing end products. We partnered with Element14 to offer Pi 3 as one of Microsoft’s supported silicon platforms for commercialisation with Windows 10 IoT Core. The Raspberry Pi has been showcased at some of Microsoft’s major conferences like //build, and the Windows Hardware Engineering Community conference in China. These are events where we share Microsoft’s current and future engineering investments with software and hardware partners.
August 2016
17
Feature
WINDOWS 10 IOT CORE
"
DAVIDE LONGO
"
for the g n i r d A moo ern age! mod
Davide is a .NET software developer at the IT department of an airline company. He lives on the beautiful island of Sardinia, is a graduate in physics, and is passionate about learning new technology. magpi.cc/2a30GRI
CHROMOTHERAPY This futuristic project uses Windows 10 IoT on a Raspberry Pi to monitor your happiness. It then adjusts the lighting in a room to match your mood
You’ll Need > Arduino Uno arduino.cc > Strip LED RGB adafruit.com/ product/306 > Bluetooth HC-05 > Generic breadboard uk.farnell.com/ prototyping- boards > Webcam uk.farnell.com/ webcams
icrosoft has a new service that works with facial recognition,” explains .NET developer Davide Longo. “It processes and returns information in the form of a score, rating happiness, anger, contempt, disgust, fear, surprise, and sadness. The service, part of Microsoft's Cognitive Services called ‘Emotion API’, inspired Davide to build this remarkable Windows 10 IoT project. Chromotherapy monitors a user’s facial expressions and adjusts mood lighting to match.
“M
The project has a Raspberry Pi running Windows IoT 10 Core at its heart. The Raspberry Pi is hooked up to a webcam, which sends captured images of the user’s face to the Emotion API. This sends a message via Bluetooth to the in-home lighting system, simulated using an Arduino hooked up to an RGB LED strip. “Each time I start a new project, I try to create something new and innovative that uses the latest technologies,” says Davide. “I immediately thought of the importance of light in a workplace.
“In chromotherapy, there are simple colour rules that are reflected in daily experience,” he explains, “Blue is a calming and refreshing colour; green symbolises balance, peace, and renewal.” All code is written in C# and available on GitHub (magpi.cc/2a31fLp). “Sharing ideas is the cradle of future innovation,” says Davide. “Despite what you might think, all the Microsoft programs and Microsoft Azure services used here are free.”
All the components are wired together using a standard breadboard to simulate an IoT lighting system
The Raspberry Pi connects to the Chromotherapy system via this Bluetooth HC-05 adapter
An LED RGB strip is used to simulate a mood lighting system
18
August 2016
raspberrypi.org/magpi
Feature
PHILIPPE LIBIOULLE
Originally from Belgium, Philippe is now an R&D group manager working in Canada. He creates a wide range of home automation projects, and is currently investigating Thread technology using Silicon Labs products. hackster.io/ThereIsNoTry
PERSONAL
HOME SAFETY AGENT You’ll Need > Water valve magpi.cc/ 2a3uP2B > Water sensor magpi.cc/ 2a3v1io > Fire bell magpi.cc/ 2a3ukpg > SSR relay magpi.cc/ 2a3uZXY > Smoke alarm relay module magpi.cc/ 2a3v79J > 12V SPDT relay (duo and single) magpi.cc/ 2a3vlxR
raspberrypi.org/magpi
anada is a lovely place to live, but it’s a challenging environment. In winter, the temperature can go down to -30C, which means heating has to be kept on and frozen pipes can burst. “I was talking to a friend,” says Philippe Libioulle. “He had to live far from home for months, just because a dishwasher had failed. During a chat with another friend, I realised water leakages are a big problem as well. My friend had to replace all the furniture in his basement, just because a pipe broke while he was at work. “Smoke detectors are mandatory, as per the regulations, but if there’s nobody home to react, the alert is not forwarded.” So Philippe put his maker mind to work and built this Internet of Things home safety system. A Raspberry Pi runs Windows 10 IoT Core, and monitors smoke and fire alarms and water leakages. If it detects a fault, the cloud system triggers actions. These include turning on the fire bell, or closing the main water entry to the house. It can even send messages to Philippe’s mobile phone, or his neighbours. “If the system is stable enough, I would consider notifying 911 too,” says Philippe. “I get alerts through SMS, using Twilio API (twilio.com). I
C
Fire and water are a risk to any home, but one maker uses his Raspberry Pi with Windows 10 IoT Core to monitor his home and get instant alerts to any problems The system is linked to a daisy chain of smoke detectors which send alerts through Twilio API and Azure Notification Hub
An electronic water valve is hooked up to the system. If a leak is detected, it automatically shuts off the flow of water to the house
A water sensor at ground level is used to detect any leaks in the home
"
our Keep y fe! sa home
"
also used Azure Notification Hub (azure.microsoft.com) to send alerts to laptops, smartphones, and tablets. “I’m inspired by real challenges, expressed by real people,” adds Philippe. “I have a new version of this concept, more modular and with a cleaner look.” August 2016
19
Feature
WINDOWS 10 IOT CORE
JENS MARCHEWKA A senior C# software developer from Dortmund, Jens is very experienced with Windows, so it was the obvious choice for this project. magpi.cc/29ysoSK
MAGIC
MIRROR Mirror, mirror on the wall, show me the latest news and all! This Pi-powered magic mirror delivers much more than your reflection
You’ll Need > GG Mirastar observation mirror magpi.cc/ 29Fv0C0 > LCD monitor > Wood to make the frame > Cables and connectors
20
hen attempting to purchase one-way observation mirror glass for his latest project, Jens Marchewka got some strange reactions from the suppliers he phoned, such as “What do you want to do with that? No, I don't want to know, bye.” He needed it to create a ‘magic mirror’ that displays information on an LCD screen behind the reflective glass panel. Inspired by an earlier project by Michael Teeuw (magpi.cc/29y06Yk), Jens wanted a mirror that could show articles from his favourite news sources, along with weather and calendar info. Not wanting to place ugly buttons down the side, he opted instead to use one-word voice commands to control the mirror’s display. For this, Windows’ built-in speech recognition system came in useful, although Jens experienced a few teething troubles. “I was using recognition without a grammar and that didn’t work properly… it [kept] recognising words [that] no one
August 2016
Reacting to voice commands, the mirror displays useful info such as calendar events
W
One-way observation mirror glass enables the LCD monitor behind it to be seen
A custom-made wooden frame is used to enclose the mirror glass and its workings
raspberrypi.org/magpi
Feature
The interior of the fully installed mirror; at the bottom-right you can see the Pi
was speaking and even activated the radio. This is easy to avoid if a grammar is used.” The whole build process took Jens only ten hours, including the construction of a wooden case to frame the mirror glass and enclose the LCD monitor, Raspberry Pi 2, and cabling behind it. To help keep everything firmly in place, he added a toeboard border to the front of
"
is here e r u t u f e h T rnet in e t in e h t h wit r! your mirro
"
the frame. He then painted the case black: “While constructing the frame, you should remember that the paint will take [up] space, so you need to keep that in mind and not build it too close.” Jens used a 24-inch LCD Iiyama monitor for his project, but any model or size could be substituted, so long as it has a good black level to achieve a nice mirror effect. Jens also notes that the monitor case needs to be disassembled to position the LCD panel as near as possible to the mirror. While his LCD screen is fully powered 24/7, he’s working on a setup for scheduling its power via CEC. Along with news and weather, Jens’s magic mirror shows calendar events – including fixtures for his favourite football team, Schalke 04 – obtained via the iCal web calendar. “You can define more than one iCal source and assign a colour to each… that makes it possible to have multiple events shown in different colours.” Jens has also added internet radio playback to the mirror, and is planning to use face recognition for displaying personalised events and news for him and his wife.
THE MAGIC OF WINDOWS 10 IOT User interface design One of the main reasons Jens chose to use Windows 10 IoT Core for this project, along with his experience as a Windows developer, is how easy it is to design custom user interfaces that look polished.
Speech recognition Windows’ built-in speech recognition comes in handy for the magic mirror. Jens simply created a SpeechRecognizer function with a list of commands. The mirror runs a continuous speech recognition session.
Syndication client Calendar events are shown top-left, with weather top-right. Say ‘news’ to see the latest headlines
raspberrypi.org/magpi
The detailed news is sourced from RSS feeds, using the SyndicationClient class within the Windows framework. “It's a very comfortable class for parsing RSS feeds,” Jens tells us.
August 2016
21
Feature
WINDOWS 10 IOT CORE
GETTING STARTED WITH
WINDOWS 10
IOT CORE Prepare your Raspberry Pi 3 for working with Windows 10 IoT Core
B
efore we can start using Windows 10 IoT Core, we need to get it installed on the Raspberry Pi. The best and recommended way to do this is via the IoT Dashboard on Windows 10 systems; this then easily allows you to program a Windows 10 Raspberry Pi using Visual Studio. Let’s get started!
From the IoT Dashboard, you can also find sample projects and soon you’ll be able to connect to the Azure IoT cloud service.
01
write the SD card properly. Follow the on-screen instructions in the wizard.
02
>STEP-01
Prepare your PC To start with, make sure your PC or laptop is updated to the latest version of Windows 10, or at least version 10.0.10240. You can check this by clicking the Start button and typing winver. Once that’s checked and you’re properly updated, you’ll need to download and then install the IoT Dashboard from the link here: magpi.cc/2ah13dt
22
August 2016
>STEP-02
Get the right image For the Raspberry Pi 3, you need to get yourself signed up for the free Windows Insider Program (insider.windows.com). You’ll then need to find and download the latest version of the Windows 10 IoT Core Insider Preview from here (pick the highest number): magpi.cc/29Ugtoz Go to where you downloaded it and double-click on the ISO file to enter it, then click on the installer inside to get your PC set up to
>STEP-03
Write to the disk Once the installer is finished, launch the IoT Dashboard we downloaded in step 1. From here we can write to the SD card. First, click ‘Set up a new device’. Select Custom from the ‘Device type’ menu and then click Browse right below that to find the image. The image lives in: C:\Program Files (x86)\ Microsoft IoT\FFU\
raspberrypi.org/magpi
Feature >STEP-05
WINDOWS 10 ON THE RASPBERRY PI 2
Configure the board
You can set up Windows 10 IoT Core very similarly on the Raspberry Pi 2; in fact, IoT Core is available in NOOBS already. Upgrading to a Raspberry Pi 3 is the best way to make use of Windows 10 on Pi, though, thanks to its extra power.
There will be a Raspberry Pi folder in there containing the file Flash.ffu. Select that and click Install/Download and install.
03
Once you’ve found your Pi, you can configure it by clicking the pencil icon next to it. From here you can launch Windows Device Portal, which has some basic settings you can edit. As well as setting up wireless networking if you need to, it’s also a great idea to set a username and change the default password. The latter is p@ssw0rd, which you’ll need to know before changing it. It’s just good InfoSec!
HAVE A DIFFERENT SETUP? Follow the customisable Getting Started guide here: magpi.cc/2abqzAW
05
>STEP-06 Start making
>STEP-04 Networking
For our tutorials, and in general, you need to be able to connect to the Raspberry Pi over the network to quickly and easily program it. It’s recommended to connect it via Ethernet to start with if you can. Go to ‘My devices’ and you'll be able to find the Raspberry Pi, so you can configure it once it’s turned on; this includes setting up access to your wireless network to use it in the future.
You’re ready to get going! You may have noticed there’s an option to try out samples on the side column, so if you want to test out a basic project from the list, you can do so. Just select the Raspberry Pi from the drop-down list and you’re off. To program it further, we’ll connect up Visual Studio, which we’ll cover in our first tutorial over the page...
06
04
raspberrypi.org/magpi
August 2016
23
Feature
WINDOWS 10 IOT CORE
PHIL KING When not sub-editing The MagPi and writing articles, Phil loves to work on Pi projects, including his two-wheeled robot. @philking68
BUTTONACTIVATED
LED
Create a simple physical computing project using an LED and a push button
You’ll Need > 1× solderless breadboard > 1× tactile push button > 1× LED > 1× 333Ω resistor > 4× male-tofemale jumper wires
A 330Ω resistor is required for the LED, to prevent an excessive current flow
24
August 2016
The LED is connected to the 3V3 pin via a resistor and GPIO 6
Any tactile push button can be used; it’s wired to GND and GPIO 5
ou’ve downloaded Windows 10 IoT Core, installed it on an SD card, and set up your Raspberry Pi as an IoT Core device? It’s time to start coding it, using Visual Studio 2015 on a Windows 10 PC. Let’s start with a simple project that uses a push button to toggle an LED…
Y
Connect up your Pi Before commencing, you’ll need the IP address of your Raspberry Pi, as shown in My Devices in Windows 10 IoT Core Dashboard. You’ll also need Visual Studio 2015 Community installed, which you can find here: magpi.cc/2adz7Hw First, let’s wire up our electronic components on a breadboard. It’s best practice to turn the Raspberry Pi off while doing so. As shown in the diagram, we connect the shorter, negative leg of the LED to GPIO 6 on the Pi. We connect its longer, positive leg to a 333Ω resistor; the other end of the latter is connected to the Pi’s 3V3 pin. This is what’s known as an ‘active low’ configuration, as when the GPIO pin is set low, current will flow through the LED and light it; when the pin is set high, it will turn off. Next, we’ll wire up the tactile push button, which straddles the central groove of the breadboard. One pin is connected to GPIO 5, while the other (on the same side) is wired to GND. Again, this is an active low raspberrypi.org/magpi
Feature Language
In the button interrupt handler, we look at the edge of the GPIO signal to determine whether the button was pressed or released. If pressed, we toggle the state of the LED:
The simple breadboard circuit is connected to the GPIO pins on the Raspberry Pi
configuration: the signal is high by default and goes low when the button is pressed. When it’s all wired up, turn on the Raspberry Pi. Note that the Pi should be in headed mode (which is the default) for this project. If not, you’ll need to set it in PowerShell on your main Windows 10 PC – see magpi.cc/29K4XWH. Now, on your Windows 10 PC, download the code examples, including PushButton for this project, from the GitHub repo: magpi.cc/29Qwut8. Right-click the zip folder and extract all files, then open samples-develop\ PushButton\CS\PushButton.csproj (the Visual C# project file) in Visual Studio; if not done already, you’ll be prompted to enable Developer Mode on your PC. Select ARM for the target architecture. Go to Build>Build Solution. This process may take a while. When finished, select Remote Machine from the Device drop-down arrow, enter the IP address of your Raspberry Pi, and select None for the authentication type. Press F5 to deploy and debug. Once it’s successfully deployed, try pressing the push button on your circuit; the LED should light up. Each time you press the button, the LED should toggle its state.
We assign the LED pin a high value to make sure it’s turned off to begin with. When we change the drive mode to Output, it will immediately drive the latched output value onto the pin: ledPin.Write(GpioPinValue. High); ledPin. SetDriveMode(GpioPinDriveMode. Output);ode> We then set up the button pin. Since the Raspberry Pi 2 and 3 have built-in pull up resistors that we can activate, we don’t need to add an external one. if (buttonPin.IsDriveMode Supported(GpioPinDriveMode. InputPullUp)) buttonPin. SetDriveMode(GpioPinDriveMode. InputPullUp) else buttonPin. SetDriveMode(GpioPinDriveMode. Input);ode>
Let’s take a look at the code in the MainPage() element to see how this works. At the top, we open the GPIO pin resources we’ll be using…
Next, we connect the GPIO interrupt listener, which is called each time the button pin changes state. We also set the DebounceTimeout property to 50ms to filter out spurious events caused by electrical noise.
buttonPin = gpio. OpenPin(BUTTON_PIN); ledPin = gpio.OpenPin(LED_ PIN);ode>
buttonPin.DebounceTimeout = TimeSpan.FromMilliseconds(50); buttonPin.ValueChanged += buttonPin_ValueChanged;ode>
Covering the code
raspberrypi.org/magpi
>C#
FILENAME:
PushButton.csproj
DOWNLOAD:
private void buttonPin_ ValueChanged(GpioPin sender, GpioPinValueChangedEventArgs e) { if (e.Edge == GpioPinEdge. FallingEdge) { ledPinValue = (ledPinValue == GpioPinValue.Low) ? GpioPinValue.High : GpioPinValue.Low; ledPin.Write(ledPinValue); }ode>
magpi.cc/ 29Qwut8
Finally, we want to update the user interface with the current state of the pin, so we invoke an update operation on the UI thread. This tutorial has been adapted from Push Button Sample, from the Windows IoT team consisting of Daniel Kuo, Tycho’s Nose, and Liz George, from the Microsoft Projects site: magpi.cc/29K5xne Select ‘Remote Machine’ and enter the IP address or name of your Raspberry Pi
The code that controls the circuit can be found in the MainPage() element of the project
August 2016
25
Feature
WINDOWS 10 IOT CORE
ROB ZWETSLOOT
INTERNET" CONNECTED
LED You’ll Need
> 1× Solderless breadboard
ou now know how to light an LED by pressing a button on Windows 10 IoT. Let’s now use the IoT part of the software to have an LED controlled by the internet!
Y
Prepare your Pi
> 1× LED > 1× 560Ω resistor > 2× Jumper wires
Turn your Raspberry Pi off and wire up the circuit as shown in the Fritzing diagram. It’s a simple one so it shouldn’t take you very long. Next, you’ll need to download some code to get the whole setup working: MainPage.cs, which is a small part of the project. Grab it from here: magpi.cc/2a5G0rP. Open up Lesson_201.sln and then
Connec Internet t to the of Thing s!
26
August 2016
"
Take information from the internet to light an LED, connecting to the Internet of Things, and becoming part of the World Map of Makers
open up MainPage.xaml.cs. This needs to be modified to be able to run the LED script we want to write, so replace all the code with the full version from here: magpi.cc/2a7rNgL. Once that’s done, we can create the class file. From the main menu, select Project > Add Class… The Add New Item dialogue will open and default to Visual C# Class. As mentioned before, we’re calling it InternetLED.cs. To begin with, we need to add ‘using’ lines at the top of the file so the code can reference the GPIO device, web interfaces, and the system diagnostics.
The resistor bridges the positive side of the LED with the 5V input, which powers the LED
We're using pin 4, which supplies 5V, and the programmable GPIO pin 12 in this setup
Tinkerer, sometimes maker, other-times cosplayer, and all the time features editor of The MagPi. magpi.cc
The negative side of the LED is connected to GPIO 12 using a jumper cable, which controls when it turns on and off
using using using using using
System; System.Diagnostics; System.Net.Http; System.Threading.Tasks; Windows.Devices.Gpio;
namespace Lesson_201 { Inside the class brackets, add the following lines. The first three lines are for controlling the GPIO, the next line allows us to add a delay if we don’t get a response for the web API call, and the last two lines store the state of the LED (on or off). class InternetLed { private GpioController gpio; private GpioPin LedControlGPIOPin; private int LedControlPin; private const int DefaultBlinkDelay = 1000; public enum eLedState { Off, On }; private eLedState _ LedState; Now, add the class constructor code which will store the value of the pin used to control the LED, interacting with the GPIO controller section of the class. raspberrypi.org/magpi
Feature Language
Join 4,000 other makers on the World Map of Makers, and light up their LEDs too!
>C#
for the rest of the code to use.
FILENAME:
InternetLED.cs
public InternetLed(int ledControlPin) { Debug. WriteLine("InternetLed::New InternetLed"); LedControlPin = ledControlPin; } Now we add code that initialises communication with the Pi’s GPIO. It first sets up the GPIO to be used for the LED, and then allows us to see the current pin value: public void InitalizeLed() { Debug.WriteLine("InternetLed ::InitalizeLed"); gpio = GpioController. GetDefault(); LedControlGPIOPin = gpio. OpenPin(LedControlPin); LedControlGPIOPin. SetDriveMode(GpioPinDriveMode. Output); GpioPinValue startingValue = LedControlGPIOPin.Read(); _LedState = (startingValue == GpioPinValue.Low) ? eLedState.On : eLedState.Off; } This part allows us to interact with the LED, depending on its state: public eLedState LedState { get { return _LedState; } set { Debug.WriteLine("Interne tLed::LedState::set " + value. raspberrypi.org/magpi
ToString()); if (LedControlGPIOPin != null) { GpioPinValue newValue = (value == eLedState. On ? GpioPinValue.High : GpioPinValue.Low); LedControlGPIOPin. Write(newValue); _LedState = value; } } } Now add a part that allows us to blink the LED on or off. If the LED is on, it’s switched off. If it’s off, it gets switched on. Simple! public void Blink() { if (LedState == eLedState. On) { LedState = eLedState. Off; } else { LedState = eLedState.On; } } We can now actually start using information from the internet. Start out by setting up a way to read a page on the web and then execute it. Once it’s been read, output the returned string to the debug channel so you can see it while running the project. Finally, we determine what the value of the delay is and return it
const string WebAPIURL = "http:// adafruitsample. azurewebsites.net/ TimeApi"; public async Task<int> GetBlinkDelayFromWeb() { Debug.WriteLine("Interne tLed::MakeWebApiCall"); string responseString = "No response"; try { using (HttpClient client = new HttpClient()) { responseString = await client. GetStringAsync(WebAPIURL); Debug.WriteLine(String. Format("Response string: [{0}]", responseString)); } } catch (Exception e) { Debug.WriteLine(e. Message); } int delay; if (!int. TryParse(responseString, out delay)) { delay = DefaultBlinkDelay; } return delay; } } }
DOWNLOAD:
magpi.cc/2a5Fqdp
Once all the code is entered, you can build the solution and run the code on your Pi. If you’re having issues, the code listing has the full solution to check against. This tutorial has been adapted from World Map of Makers by the Windows IoT team consisting of David Shoemaker, Anthony Ngu, and Aparajita Dutta, from the Microsoft Projects site: magpi.cc/2a5GaPD
August 2016
27
Feature
WINDOWS 10 IOT CORE
ROB ZWETSLOOT Tinkerer, sometimes maker, other-times cosplayer, and all the time features editor of The MagPi. magpi.cc
TALKING LIGHT
SENSOR Once again, download everything from here: magpi.cc/2a9OEW1. Open up Lesson_204.sln in Visual Studio and open the mainpage.xaml.cs file. We need to do a bit more editing to get it working, so we’ll run over that first. Add the following lines to the top of the MainPage class, right after the { bracket. First, we need to let the code know how much voltage the chip is getting:
The main code As with the last tutorial, we'll download a project to work from.
Double-check the wiring for this part; this chip can only be wired up to specific pins on the GPIO port to work
If you make this circuit more permanent, you'll need to make sure the placement of this light sensor is good for your purpose
45
50
55
60
45
50
55
60
28
August 2016
F G H
MCP3008 A B C D E 25
20
15
10
5
A B C D E
USB 2x
1
USB 2x
30
ETHERNET
These resistors are hooked up to the analogue-to-digital converter (ADC) chip so their state can be read by the Raspberry Pi
I
J
40 40
F G H
I
J
35
CSI (CAMERA) Audio
35
Raspberry Pi Model 2 v1.1 © Raspberry Pi 2014
HDMI
30
GPIO
Power
DSI (DISPLAY)
25
> Headphones or speaker (optional)
20
> Male-to-female jumper wires
15
> 1× 560Ω resistor
I
First of all, with your Raspberry Pi off, wire up the circuit as shown in the Fritzing diagram, paying careful attention to the orientation of the chip. Refer to Fig 1 on the next page for notes on the orientation. Connect a speaker or headphones now if you want to use them.
10
> 1× Light sensor (photoresistor)
n the last two tutorials, we learnt how to use a button to turn on an LED, and then how to use a web API call to blink another LED. Let’s bring both of these concepts together and make a device that takes user inputs from two dials (variable resistors) and a light sensor, and then uses that information to tell you a result using an online text-to-speech API. Let’s play it bright, or not!
5
> 1× Solderless breadboard
1
You’ll Need
Create a customisable, talking light sensor that lets you know when the amount of light hitting it is just right
raspberrypi.org/magpi
Feature Language >C#
FILENAME:
const string HighLightString = "I need to wear shades"; This part will help record the state of the light sensor:
Credit: Iainf The variable resistor, or potentiometer, is a voltage divider that, when twisted, allows for an output that has a variable voltage
enum eState { unknown, JustRight, TooBright, TooDark}; eState CurrentState = eState. unknown;
const float ReferenceVoltage = 5.0F;
We now tell the code which ADC chip we’re using:
Then we tell it what the channels will be detecting. The third line is for the light sensor:
MCP3008 mcp3008 = new MCP3008(ReferenceVoltage);
const byte LowPotentiometerADCChannel = 0; const byte HighPotentiometerADCChannel = 1; const byte CDSADCChannel = 2; Here’s the text strings we want to use for when the Pi talks to us: const string JustRightLightString = "Ah, just right"; const string LowLightString = "I need a light";
We’re going to connect to the speech-to-text part of the Windows 10 Cognitive services, so we add: private SpeechSynthesizer synthesizer; And finally, we add a little bit so we can check the ADC at a set interval: public Timer timer; With the start of the class filled out, we can now fill out some of the MainPage constructor (public
DOWNLOAD:
magpi.cc/2ac38rr
MainPage()), right after this. InitializeComponent();. Here we add a line to start the speech synthesiser and another to initialise the ADC: synthesizer = new SpeechSynthesizer(); mcp3008.Initialize(); Now we can set up a timer in the OnNavigateTo method. After Debug.WriteLine("MainPage::OnN avigatedTo");, add the following:
"
err y Pi b p s a R A ks! that tal
"
timer = new Timer(timerCallback, this, 0, 1000); Now we have the timer callback being called, let’s fill it out. After the } following ‘return;’, add the following lines. Let’s set the new light state and assume it’s just right to start: eState newState = eState. JustRight; Next, we want to read the values of the dials from the ADC, so we use: int lowPotReadVal = mcp3008. ReadADC(LowPotentiometerADCCha nnel); int highPotReadVal = mcp3008. ReadADC(HighPotentiometerADCCh annel); int cdsReadVal = mcp3008. ReadADC(CDSADCChannel);
The setup may end up looking a bit messy once you're done, but make use of colour-coding so you know what wire is related to what part of the system
raspberrypi.org/magpi
Fig 1 The notch on the chip denotes that this is where the 'front' of the chip is. In the Fritzing diagram, this notch is on the left side of the chip
Convert these readings into voltages to make them easier to use: August 2016
29
Feature
WINDOWS 10 IOT CORE PRACTICAL USES
How could such a project be used in the real world? Well, making sure some flowers are kept in proper sunlight may be useful for certain
botanists. You could also place it near your TV to let you know when it’s time to shut the curtains to avoid massive screen glare!
it says ‘insert code’, insert the following code which first converts text input into a stream, and then plays the audio: SpeechSynthesisStream synthesisStream; synthesisStream = await synthesizer.SynthesizeTextToStr eamAsync(textToSpeak); media.AutoPlay = true; media. SetSource(synthesisStream, synthesisStream.ContentType); media.Play();
Coding the ADC); Next, we’ll make sure we can read the values when the code is running from the debug: Debug.WriteLine(String. Format("Read values {0}, {1}, {2} ", lowPotReadVal, highPotReadVal, cdsReadVal)); Debug.WriteLine(String. Format("Voltages {0}, {1}, {2} ", lowPotVoltage, highPotVoltage, cdsVoltage)); We now check if the state of the light is below the values set by the variable resistors: if (cdsVoltage < lowPotVoltage) { newState = eState.TooDark; } 30
August 2016
Next in the chain is to see if it was too high: if (cdsVoltage > highPotVoltage) { newState = eState.TooBright; }
Now we’re done with the MainPage code, it’s time to code up the ADC. Open up MCP3008.cs from the project file. We need to start by creating an initialising method to set up communication with the SPI bus the ADC chip runs on. Just after the line ‘Debug.WriteLin e(“MCP3008::Initialize”);’ add the following, starting with the configuration of the SPI bus and telling it the ADC’s clock speed:
And then we let the code know to use a different method, to determine what to do with the state of the light:
try { var settings = new SpiConnectionSettings(SPI_CHIP_ SELECT_LINE); settings.ClockFrequency = 3600000; settings.Mode = SpiMode. Mode0;
await CheckForStateChange(newState);
This line will return all SPI devices on the system:
The CheckForStateChange code has been mostly completed in this example, but you’ll need to add the following under ‘use another method to wrap the speech synthesis functionality’ to do just that:
string aqs = SpiDevice. GetDeviceSelector(); This line finds the SPI bus controller from the previous line:
await TextToSpeech(whatToSay); Now we can add the part that lets the Pi talk to us! Under async, where raspberrypi.org/magpi
Feature Language >C#
FILENAME: MCP3008.cs
DOWNLOAD:
Now we have the information for the chip, we’re able to add code so we can read it:
There's a whole host of other tutorials on GitHub where this project lies; give them a look if you want to try out more!
public int ReadADC(byte whichChannel) { byte command = whichChannel; command |= MCP3008_ SingleEnded; command <<= 4; var dis = await DeviceInformation. FindAllAsync(aqs); And the rest of the code creates an SPI device using all the information we just grabbed:; } Remember when we were turning the input into a voltage so it was easier to read? We need to add a helper method here to work with that: public float ADCToVoltage(int adc) { return (float)adc * ReferenceVoltage / (float)Max; }
magpi.cc/2a9TxOR
It’s game time Your project is ready! Run the code with the setup in a normally lit area. The output window will show you the voltages read from the ADC: the lower-boundary resistor, the higher-boundary resistor, and the light sensor. It should look something like this: MainPage::timerCallback Read values 211, 324, 179 Voltages 1.031281, 1.583578, 0.8748778 Turn the lower-boundary potentiometer, watching the value of the first number change. Adjust it so it’s just a touch lower than the light sensor reading. Do the same with the other resistor, turning it to be just above the light sensor. This is now our ‘just right’ zone of light. Now, when you put your hand over the light sensor, it will print in the output window ‘I need a light’; it will also say it to you if you have speakers or a headphone connected. Experiment with uncovering and shining a light on it to see just how sensitive it is. This tutorial has been adapted from Bright or Not? by the Windows IoT team consisting of David Shoemaker, Anthony Ngu, and Aparajita Dutta, from the Microsoft Projects site: magpi.cc/2abUvx0
The light sensor, or photoresistor, changes resistance depending on the amount of light hitting it. Like the variable resistor, this changes the voltage at the output
raspberrypi.org/magpi
August 2016
31
Projects
SHOWCASE
INGMAR STAPEL Ingmar has been building Pipowered robot cars since 2012. He’s also working on a security robot for the home. custom-build-robots.com
A WiFi router connects the Discoverer to a remote laptop for live video and steering
Built from a kit, the metal detector is mounted on a PVC pipe arm at the front
The pan-and-tilt mechanism enables the Camera Module to move with 360° of freedom
Quick Facts > The Discoverer took nearly six months to build > There’s a 350m range for remote control > It uses a Navilock NL‑602U GPS receiver > The original chassis was a smaller IKEA box > The pan-andtilt kit features two mini-servos
34
DISCOVERER
METAL-DETECTOR ROBOT This smart Raspberry Pi robot is equipped with a metal detector, along with GPS tracking and a pan-and-tilt camera e’ve seen all sorts of Pipowered robots in our time, but the Discoverer is the first with a built-in metal detector. Mounted on a PVC pipe arm in front of the four-wheeled robot, the detector emits a beep whenever it passes over a metallic object. Prolific robot maker Ingmar Stapel, from Munich, came up with the idea after watching a TV show about people trying to find gold with a sophisticated metal detector. “I was immediately inspired to build my own affordable robot-car with a metal detector in order to discover some treasure in my garden,” he tells us.
W
August 2016
The basis for the new robot was his previous cardboard car, which required some major adaptations. “First, I had to look for a different chassis that is also suitable for outdoor usage. Second, I needed a metal detector that fits to the robot-car and the Raspberry Pi to ensure remote-controlled treasure hunting.” For the chassis, he used a plastic storage box to contain all the electronics, adding PVC piping around the exterior to hold a pan-and-tilt camera and the metal detector. At first, Ingmar tried using a cheap electric-cable detector from a DIY store, but its
coil was too small to detect metal in the ground. He then came across the Gary’s Pulse-AV metal detector (magpi.cc/1XdKBeK). “With support from Gary, I was able connect the metal detector to the Raspberry Pi and to get everything working.” This involved using a step-down converter to change the detector’s 12V output to 3.3V for a GPIO pin on the Pi. As Ingmar wanted the Discoverer to stream live video to a laptop, it would need a camera. After mounting a Pi Camera Module on the front of the chassis, he found the angle of view was too limited. “I bought the pan-and-tilt kit raspberrypi.org/magpi
DISCOVERER METAL-DETECTOR ROBOT
Projects
BUILDING A METAL-DETECTING ROBOT
>STEP-01
>STEP-02
>STEP-03
The chassis is a standard plastic storage box bought from a DIY store. Four motors are attached to the bottom of the box using wall mounts for PVC pipes.
Standard L298N H-bridge motor controllers are used to trigger the DC motors. Ingmar plans to replace the latter with worm-gear motors and also use larger wheels.
A Gary’s Pulse-AV PI detector kit (magpi.cc/1XdKBeK) is used for the metal detector part, which is mounted fairly low to the ground on a PVC pipe arm.
Chassis and wheels
and mounted it in the middle of the robot and I got a much better overview.” Some trial and error was involved in getting the live video streaming to work with very low latency, since even the tiniest delay would make remote control difficult. In the end, he used MJPGstreamer (magpi.cc/24F96k6) with some patches to get it to run on the Pi 3: “The video is very fast with a very low latency… it is even possible to make some robot-car races in our apartment.” While originally just remotecontrolled, the Discoverer is now able to move autonomously using a GPS receiver, Sense HAT (for the compass), and a Python program Ingmar had already developed. “It is still in a beta version, but it is already able to import a KML file with GPS waypoints generated via Google Earth. Imported into my Python program, the robotcar is able to drive from one GPS waypoint to the next.” When it’s finalised, he plans to include a manual in his first book, due to be published (in German) this autumn. While the Discoverer is mainly for fun treasure hunting, Ingmar thinks it could also have some serious applications, such as detecting mines in war zones. raspberrypi.org/magpi
Motor drivers
Metal detector kit
Above The Discoverer is loosely based on Ingmar’s earlier cardboard car (magpi.cc/1VQv8QJ), which has live video streaming
Left Unable to fit all the components into the original IKEA plastic storage box, Ingmar replaced it with a larger one
August 2016
35
Projects
SHOWCASE
TIM MAIER Tim is currently studying for a Bachelor of Information Technology degree at Queensland University of Technology, with plans to major in Computer Science. magpi.cc/29Burv1
Quick Facts > The project cost $500 AUD (£280) to build > It currently covers 2-3km per charge > The board can currently reach up to 15km/h > Tim plans a battery and ESC upgrade > Build recipe is available on GitHub
MOTORISED SKATEBOARD A university assignment allowed one student the chance to realise his dream of building the latest in whizzy commuter gadgets
y now we’re sure you’re aware that e-boards have officially become a ‘thing’. From knee-driven mini-Segways and two-wheeled ‘hoverboards’ to standard motorised decks, the streets are filled with wheeled commuters. And while Marty McFly may have failed to deliver on the true hoverboard of our dreams, search online for an electric skateboard and you’ll find
B
The family’s 3D printer provided the Raspberry Pi casing for added protection
the next best thing, albeit with a hefty price tag. So when Queensland University of Technology student Tim Maier was assigned with the task of ‘building something with a Raspberry Pi’, he already knew what he planned to create. “Building an electric skateboard had been something on my mind for some time, as buying one was not a viable option,” Tim explains,
when we ask him if he had considered any other directions for his project. “So when we were told about the task, it all kind of linked up and I started to do my research on what to buy.” With a few requirements in mind, Tim started researching the perfect motor. He wanted to achieve an average speed of 30km/h to aid his commute, and knew the motor would easily
Tim built a smaller deck to allow him to easily carry it around campus
The board uses standard wheels and trucks, easily acquired online
36
August 2016
raspberrypi.org/magpi
Projects
MOTORISED SKATEBOARD
TAKE TO THE STREETS
>STEP-01
The perfect motor was chosen to reach speeds of up to 30km/h
be one of the most expensive components of the build. Finally, he decided upon a Turnigy Aerodrive SK3, matching it with two 2200mAh lithium polymer (LiPo) batteries and a basic electronic speed control (ESC). Despite having to rely on YouTube and assorted literature to educate him on how to utilise Python, the biggest hurdle for Tim turned out to be the drive
Spurred by the positive response, he’s provided the code and kit list on GitHub (magpi.cc/29Burv1), and plans to also create an instructional video of an upgraded design for anyone wanting to make their own. The Pi Skate 2.0 will house more batteries for longevity, a higher-quality ESC for greater speed and the ability to brake, and possibly LED lights because, well, why not? And as
The motor itself is controlled by a Pi and Wii Remote (a Wiimote to those in the know) system. “Finding a way to attach the motor mount to the skateboard truck was a huge fiddle.” He ended up creating a makeshift U-bolt system, though he plans to upgrade the mounting layout when attaching a new ESC. The motor itself is controlled by a Pi and Wii Remote (a Wiimote to those in the know). Holding the ‘1’ and ‘2’ buttons will connect the Wiimote to the Pi. The ‘B’ button activates the motor, while ‘up’ and ‘down’ on the D-pad control speed. Upon completing the build, Tim has been met with thousands of YouTube views and calls for how-to guides and board sales. raspberrypi.org/magpi
for the Wiimote? Tim hopes to move the board’s control system to a mobile phone or smartwatch, thereby reducing the bulk of the console controller. Of his future in the digital making industry, Tim suggests, “I will most probably venture into a career within the computing and engineering field, but haven’t really thought too much about what area I will specialise in. I’m keeping my options open as it’s such a broad field.” In the meantime, his continued experimentation with Raspberry Pi will lead to further YouTube videos as and when he builds projects.
The power
Two LiPo batteries are connected in parallel to the ESC, taking approximately an hour each to charge. They power both the motors and the Raspberry Pi.
>STEP-02 The control
The ESC runs 5V power to the GPIO pins, which powers the Raspberry Pi. This is where control from the Wiimote converts to a pulse-width modulation signal, sending it back to the ESC.
>STEP-03 The launch
The ESC then signals to the motor to go-go-go, and you’re on your way. Make sure to keep balanced as you’re zooming along!
August 2016
37
Projects
SHOWCASE
CORY GUYNN Cory Guynn is the creator of the Internet of LEGO and is a systems engineer at Cisco Meraki. He has a degree in Computer Electrical Engineering & Technology from Purdue University. InternetOfLEGO.com
The lights in buildings are controlled by the Raspberry Pi system, and the disco lights respond to visitors to the Internet of LEGO blog
The Raspberry Pi acts as a control centre for all the transport and buildings. It’s accessed via a web browser
The train system is fully automated and based on Transport for London’s API. Trains are tracked as they move, and passengers get real-time updates on small displays
Quick Facts > Around 20,000 LEGO bricks were used > It took oneand-a-half years to build > Over 46,000 people have interacted with the blog > 17 LEGO kits were used > Most LEGO kits were from the Creator Expert series
38
THE INTERNET OF LEGO One maker is taking a love of LEGO to a whole new level with Raspberry Pi. Take a tour of this incredible internet-connected cityscape being built brick by brick (internetoflego.com).
W
August raspberrypi.org/magpi
THE INTERNET OF LEGO We love the detail of the Internet of LEGO city, with citizens going about their daily lives
Projects
CITY PLANNING
>STEP-01
Regular LEGO
The Internet of LEGO is a Raspberry Pi-powered city built from regular LEGO bricks and kits. These are then given Internet of Things-style smarts, thanks to various electronic components.
Multiple Raspberry Pi, Arduino, and BeagleBoard devices are used to make the Internet of LEGO work
system using the Transport for London API. This system displays the schedule on an OLED screen and switches to the train track to match the destination. The trains are controlled by WiFi and an infrared transmitter attached to a tower. Infrared
crossing, train track switch, elevator, motion detector, and city lights.” Most of the orchestration of the city and its many sensors is handled by Node-RED. “This system allows me to add inputs and outputs from any of my
Cory has developed a huge amount of software to control all the structures in the Internet of LEGO sensors are used to detect incoming trains and trigger a crossing signal (with a servo controlling the arm, and LEDs for lights). “Everything is connected to an Arduino Mega, which is then USB‑tethered to the Raspberry Pi,” explains Cory. He has developed a huge amount of software to control all the structures in the Internet of LEGO. “The Johnny-five.io robotics framework made programming sensors and servos easy. I wrote a few projects, including a train raspberrypi.org/magpi.”
>STEP-02
Raspberry and Arduino A Raspberry Pi controls two Arduino boards using an MQTT broker. The Arduinos are assigned to GPIO operations, while a second Pi acts as an MQTT cluster member and runs Node.js automation scripts.
>STEP-03 A working city
The Internet of LEGO is fully automated, with reed sensors used to detect train presence. Timetable information is based on London API data. When people visit the blog, the Palace Cinema lights up and the disco starts working.
August 2016
39
Projects
SHOWCASE
DJORDJE UNGAR Djordje Ungar is a programmer by day and a digital alchemist by night. He is a hobby artist, animator, musician, game maker, hacker, and tinkerer. djordjeungar.com
Quick Facts > HAL stands for ‘Heuristically programmed ALgorithmic computer’ > HAL is rumoured to be IBM with each letter shifted one backward
HAL 9000 “Open the pod bay doors, HAL” are chilling words to anybody who has watched 2001: A Space Odyssey, apart from one maker who decided it would be a good idea to build HAL 9000 for real
> The total cost for the build was $99 (including the Raspberry Pi) > All the parts are off-theshelf computer components > The case is laser-cut acrylic covered in black paint
The case is made from 3mm black acrylic laser-cut into the right shape for HAL. It is spray-painted for a professional finish
A stripped-down web camera is fitted with a camera lens. This completes the HAL 9000 look, and the web camera also provides the device with a microphone
A USB speaker is fitted inside the device near the grille at the bottom. Jasper software provides HAL with a voice
40
August 2016
“I
raspberrypi.org/magpi
HAL 9000
Projects
BUILDING A HAL 9000
>STEP-01
Stripping the webcam A recycled Insten USB digital six-LED webcam is stripped down. A red marker is used to paint the LEDs red, giving the device’s eye the same ominous glow HAL has in the movie.
A bargain camera lens is fitted inside the laser-cut acrylic case to form HAL 9000’s famous single eye. Red LEDs complete the effect case,” he tells us. The brain is the Raspberry Pi running the Jasper project (magpi.cc/29iKyxS). raspberrypi.org/magpi.”
>STEP-02
Stuffing the case The case is laser-cut from an acrylic sheet and glued together. The lens is fitted through a round hole in the front face and the webcam fitted inside. This provides HAL 9000 with a camera and, more importantly, a microphone.
>STEP-03
Assembled HAL All the components are fitted inside, and the Raspberry Pi provides the software for human interaction. A speaker is fitted to the bottom of the device to provide the voice.
August 2016
41
Tutorial
WALKTHROUGH
ESSENTIALS
LEARN | CODE | MAKE AVAILABLE NOW: > CONQUER THE COMMAND LINE > EXPERIMENT WITH SENSE HAT > MAKE GAMES WITH PYTHON > CODE MUSIC WITH SONIC PI
ESSENTIALS
From the makers of the official Raspberry Pi magazine
Tutorial
OUT NOW IN PRINT
ONLY £3.99 from
raspberrypi.org/magpi
GET THEM DIGITALLY:
Tutorial
STEP BY STEP
THE HAYLER-GOODALLS Ozzy, Jasper, and Richard are mentors at CoderDojo Ham and gave a talk at the Raspberry Pi birthday party about their AstroPi adventures. richardhayler.blogspot.co.uk @rdhayler / coderdojoham.org
RGB LED TWEET-O-METER You’ll Need > RGB LED > Breadboard > Jumper wires > 3× 100 ohm resistors > Twitter developer account > TextBlob Python library > Twython Python library
Use the GPIO Zero Python library to control an RGB LED and see how well your tweets are doing
eeping up to date with Twitter can be very time-consuming, especially if there are lots of tweets. What if you could see at a glance what the Twittersphere thinks about a certain topic? In this tutorial we’re going to build a simple RGB LED circuit, and program it to change colour to indicate whether the tweets that include a given hashtag or keyword are using positive, negative or generally neutral language.
K
>STEP-01
Install Python libraries Update your Pi to the latest version of Raspbian and download and install the additional software you’ll need.
sudo pip3 install twython textblob
Common cathode RGB LED. The longest leg will be the cathode and should be connected to ground
The resistors limit the current flowing through the LED and prevent damage
There are two libraries that make our project really easy. Twython allows you to contact Twitter using Python and collect tweets (you’ll need to register for a Python developer account – see step 5). Then, to read the tweets in the code, we’re going to use TextBlob; there are other libraries available, but this is one of the simplest.
>STEP-02
Do you like sausages? Let’s take a look at a simple example. Open a Python 3 interpreter (either use the command line or IDLE) and type:
>>> from textblob import TextBlob >>> sentence = TextBlob('I really like sausages, they are great') >>> sentence.sentiment.polarity 0.5 Any value for polarity greater than 1 indicates a positive sentiment (like); a value less than 1 suggests negative sentiment (dislike). Try changing the sentence and see how a different phrase will give a different result. Results will be more accurate if you have more text, although a 140-character tweet is normally good enough.
>STEP-03
Select your RGB LED
You can use any size breadboard for this circuit
44
August 2016
Light-emitting diodes (LEDs) are cool. Literally. Unlike a normal incandescent bulb which has a hot filament, LEDs produce light solely by the movement of electrons in a semiconductor material. An RGB LED has three single-colour LEDs combined in one package. By varying the brightness of each component, you can produce a range of colours, just like mixing paint. There are two main types of RGB LEDs: common anode and common cathode. We’re going to use common cathode. raspberrypi.org/magpi
Tutorial
RGB LED TWEET-O-METER
tweetometer.py
Language >PYTHON 3
#Tweet-o-meter: Add your own Twitter API developer keys (lines 9-12) # and choose your own keyword/hashtag (line 56) import time, sys from textblob import TextBlob from gpiozero import RGBLED from twython import TwythonStreamer
DOWNLOAD:
magpi.cc/1WBerda
# Add Python Developer App tokens and secret keys APP_KEY ='ENTER APP KEY HERE' # <- CHANGE APP_SECRET = 'ENTER APP SECRET HERE' # <- CHANGE OAUTH_TOKEN = 'ENTER OAUTH_TOKEN HERE' # <- CHANGE OAUTH_TOKEN_SECRET = 'ENTER OAUTH_TOKEN_SECRET HERE' # <- CHANGE
Above Once you've created your Twitter app, generate the required access tokens; copy these into your code
>STEP-04
Connect up the RGB LED LEDs need to be connected the correct way round. For a common cathode RGB LED, you have a single ground wire and three anodes, one for each colour. To drive these from a Raspberry Pi, connect each anode to a GPIO pin via a current-limiting resistor. When one or more of these pins is set to HIGH (3.3V), the LED will light up the corresponding colour. Connect everything as shown in the diagram.
>STEP-05
Register as a Twitter API developer Anyone with a Twitter account can register as a developer, although you might be asked to provide a mobile phone number or other identification details. Once you’ve registered, you need to create a new application at apps.twitter.com. Click the button to create a new app and then fill in the required fields. Once that’s done, select the ‘Keys and Access Tokens’ tab and click on the ‘Create my access token’ button.
>STEP-06
Process some tweets Download or type up the code from the tweetometer.py listing. Add the Twitter API keys and tokens generated in step 5 at the appropriate places. Now pick a hashtag or keyword for testing. As this is US presidential election year, we found that using the name of one of the candidates generated more than enough data! Run the code: you should see a running count of the analysed tweets on the console, and the LED should flash with each new matching tweet. Between new tweets, the LED will remain the colour of the sentiment with the biggest count. raspberrypi.org/magpi
# Set our RGB LED pins status_led = RGBLED(14,15,18, active_high=True) # Set active_high to False for common anode RGB LED status_led.off() totals = {'pos':0,'neg':0,'neu':0} colours = {'pos':(0,1,0),'neg':(1,0,0),'neu':(0,0,1)} class MyStreamer(TwythonStreamer): def on_success(self,data): # When we get valid data if 'text' in data: # If the tweet has a text field tweet = data['text'].encode('utf-8') #print(tweet) # Uncomment to display each tweet tweet_pro = TextBlob(data['text']) # Calculate sentiment # Adjust value below to tune sentiment sensitivity if tweet_pro.sentiment.polarity > 0.1: # Positive print('Positive') status_led.blink(on_time=0.4, off_time=0.2, on_color=(0, 1, 0), n=1, background=False) totals['pos']+=1 # Adjust value below to tune sentiment sensitivity elif tweet_pro.sentiment.polarity < -0.1: # Negative print('Negative') status_led.blink(on_time=0.4, off_time=0.2, on_color=(1, 0, 0), n=1, background=False) totals['neg']+=1 else: print('Neutral') # Neutral status_led.blink(on_time=0.4, off_time=0.2, on_color=(0, 0, 1), n=1, background=False) totals['neu']+=1 overall_sentiment = max(totals.keys(),key=(lambda k: totals[k])) status_led.color = colours[overall_sentiment] print(totals) print('winning: ' + overall_sentiment) time.sleep(0.5) # Throttling def on_error(self, status_code, data): # Catch and display Twython errors print( "Error: " ) print( status_code) status_led.blink(on_time=0.5,off_time=0.5, on_color=(1,1,0),n=3) # Start processing the stream stream2 = MyStreamer(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET) while True: # Endless loop: personalise to suit your own purposes try: stream2.statuses.filter(track='magpi') # <- CHANGE THIS KEYWORD! except KeyboardInterrupt: # Exit on Ctrl-C sys.exit() except: # Ignore other errors and keep going continue August 2016
45
Tutorial
WALKTHROUGH
SIMON LONG Works for Raspberry Pi as a software engineer, specialising in user interface design. In his spare time he writes apps for the iPhone and solves crosswords. raspberrypi.org
These are variable declarations: in C, a variable must be declared before you use it
C can print the results of calculations to the commandline in formats you choose
AN INTRODUCTION TO C
PART 02
VARIABLES & ARITHMETIC
Doing some real work in C: creating variables and doing mathematical operations on them n some languages, you can create variables as you go along and put whatever data you want into them. C isn’t like that: to use a variable in C, you need to have created it first, and at the time you create it, you have to set what type of value it is going to store. By doing this, a block of memory of the correct size can be allocated by the compiler to hold the variable. This process of creating a variable is known as declaration.
I
Integers
There are several fundamental data types in C, but we’ll start by looking at one of the most commonly used: the int type, used to store an integer value.
ARITHMETIC SHORTHAND C allows shortcuts for some common operations; for example, instead of typing a = a + 1, you can just enter a++. Or for a = a * 3, you can enter a *= 3
46
#include <stdio.h> void main (void) { int a; int b = 3; int c; a = 2; c = a + b; printf ("The result of adding %d and %d is %d\n", a, b, c); }
August 2016
The top three lines inside the main function here are declarations. They tell the compiler that we would like to use variables called a, b and c respectively, and that each one is of type int, i.e. an integer. In the second line, we see an example of an initialisation at the same time as a declaration: this stores an initial value of 3 in the variable b. Note that the values of a and c at this point are undefined; you might assume that a variable which hasn’t had a value stored in it is always 0, but that isn’t the case in C. Before reading the value from a variable, or using it in a calculation, you must store a value in it; reading a variable before initialising it is a common error in C. The next two lines do some actual work with the variables we have declared.
a = 2; This stores a value of 2 in the variable a, which will now have this value until it is changed. (The reason a is called a variable is that it can vary: you can change its value as often as you like, but only to another integer. The value of a variable can change, but its type is fixed when it is declared.)
c = a + b; This line adds a to b, and stores the result in c. raspberrypi.org/magpi
Tutorial
AN INTRODUCTION TO C DECIMAL PLACES You can set the number of decimal places to display for a floating point type-specifier in printf by putting a decimal point and the number of places between the % and the f – so %.3f will show a float value with three digits after the decimal point.
printf ("The result of adding %d and %d is %d\n", a, b, c); This is another use of the formatted print function we saw in the previous instalment. Note the three %d symbols inside the string: these are called format specifiers, and they are how you output numbers in C. When the printf function is executed, each %d is replaced by a decimal representation (d for decimal integer) of the variable in the corresponding position in the list after the string. So the first %d will be replaced by the value of a, the second with the value of b and the third with the value of c. Compile the program above and then run it. You should see this:
The result of adding 2 and 3 is 5 …in the terminal.
Floating-point numbers
So we can add two integers together; what else can we do? One thing we might want to do is to use floatingpoint numbers. These are numbers with a decimal point. These have a different type, called float. Try changing the code above so instead of:
sense. You told it to show you a floating-point number as if it were a decimal integer, and the compiler assumed that was what you wanted, even though the result was nonsense. When you are working with variables, always keep track of what values you are putting in what types, as it is easy to introduce errors by assuming a variable is of one type when it is actually another. One common error is to put the results of a calculation on floatingpoint values into an integer. Try this: make b a float as well (not forgetting to change its format specifier in the printf), but leave c as an int, and set the two floats to values with decimal points, like this:
float a; float b = 3.641; int c;
int a; …you have:
float a; This tells the compiler that a is now a floating-point value, rather than an integer. Compile and run your program. What happens? Oops! That doesn’t look right, does it? What has happened is that, while the maths is still all correct, the printf statement is now wrong, because you are telling it to print a, which is a floating-point value, as a decimal integer. To fix that, change the first %d in the printf function to %f, which is the format specifier for a floating-point number, like this:
printf ("The result of adding %f and %d is %d\n", a, b, c); That should produce something a lot more sensible when you run it. This is an important lesson about C: it will do exactly what you tell it to, even if it makes no raspberrypi.org/magpi
a = 2.897; c = a + b; printf ("The result of adding %f and %f is %d\n", a, b, c); You’ll see a result like:
Above Don’t forget to use %f instead of %d as the print specifier when changing the int values to float values in the example
MULTIPLE DECLARATIONS You can declare multiple variables of the same type in one line, separated by commas. For the example here, instead of three separate int declarations, you could type
int a, b = 3, c;
on one line.
REMEMBER PRECEDENCE
The result of adding 2.897000 to 3.641000 is 6 6? That’s not right! But it is exactly what you have asked for. What the compiler did was to add the two floating-point values together, and got the answer 6.538, but you then told the compiler to put that into c, an integer variable. So the compiler just threw away everything after the decimal point! If you change c to a float, and change the final %d to %f, you’ll find it gives the correct answer. That gives you some idea about how C handles numbers, and how you can use it for arithmetic; in the next instalment we’ll look at how to use the results of calculations to make decisions.
C obeys the common rules for operator precedence – so a = a + 2 * 3 evaluates the multiply first and then adds the result, 6, to a. You can use round brackets to change precedence –
a = (a + 2) * 3 gives 3a + 6.
August 2016
47
Tutorial
WALKTHROUGH
MIKE’S PI BAKERY You’ll Need > GP2Y0A2YK0F IR distance sensor > LM339 comparator > 26-way dual line pin header socket > 3× LEDs, coloured red, green & blue > 21×14-hole stripboard
MIKE COOK Veteran magazine author from the old days and writer of the Body Build series. Co-author of Raspberry Pi for Dummies, Raspberry Pi Projects, and Raspberry Pi Projects for Dummies. magpi.cc/1NqIdHU
OLYMPIC SWIMMING
A UNIQUE SWIMMING RACE SIMULATOR Try to beat the Olympic 50-metre freestyle record
> 50mm length of angled aluminium 15×15mm > Assorted resistors and capacitors
ast year in The MagPi #35 (July 2015), we showed you how to make a Pi Sprinting simulator, where you ran on the spot, tapping foot sensors. Now, with Olympic fever about to grip the nation, we bring you a swimming simulator. We are going to use some of the elements of the Pi Sprinter software, but introduce a new way of interacting with the Pi.
L
The idea is that an IR distance sensor is used to register imaginary swimming strokes, although the optimum stroke to use is not a graceful front crawl. In order to complete a stroke, you need to register three readings of increasing distance; the best way to do this is to do a sort of doggy paddle in midair in front of the sensor. The setup has the added
As you ‘swim’, the display shows an animated swimmer
An infrared sensor is used to register strokes performed in mid-air
48
August 2016
raspberrypi.org/magpi
OLYMPIC SWIMMING SIMULATOR
Tutorial
The finished swimming sensor interface
advantage of being totally immune to the ravages of overenthusiastic kids, because there is nothing that they have to touch.
BUILDING THE SWIM SENSOR
The hardware
The IR distance sensor we used is the GP2Y0A2YK0F, which is the same type we used in The MagPi #44 infinity mirror project. The method of converting the output into a digital reading is the same as well, only this time the resistor values have been altered to trigger the distances we need. There are
Totally immune to the ravages of enthusiastic kids also three coloured LEDs that help you know when a stroke is being registered, so you don’t drift out of the line of the sensor. We built the sensor onto a piece of stripboard that plugs directly into the Pi’s GPIO header. We used a mixture of surface-mount and through-hole techniques, although you can make it all using through-hole parts if you prefer. The full schematic of this is shown in Fig 1 (overleaf), and the finished interface can be seen above. The construction details are in the step-by-step guide. raspberrypi.org/magpi
>STEP-01
Prepare the board
Cut out a piece of 21×14-hole stripboard and make the cuts in the tracks shown above. Note that the cuts between the GPIO header are not done at a hole, but between the holes. To do this, make a cut across the board with a scalpel and then make another one in parallel with the first, as close as you can. This second cut should be angled in a bit so you remove a sliver of copper. Drill two 3mm holes for the sensor mounting bracket.
August 2016
49
Tutorial
WALKTHROUGH 5V
Graphics work
0.1uF
Distance sensor 5V GP2Y0A2YK0F 5K6
47uF GPIO 7 270R
11 + 10 -
1.3V
3 13
GPIO 23 <20cm
0.1uF
470R
1.0V
9 8
RED
GPIO 11
14
+
GPIO 24 <27cm
-
LM339 300R
270R 0.8V GREEN
7 + 6 -
1
GPIO 25 <34cm
>STEP-02
300R
GPIO 4
The big difference between this project and the Pi Sprinter one is the graphical display, which is split into two parts: the swimmer and the background. We used the Pi sprinter as the starting point for the background and added both a textured water surface in place of the track, and some Olympic sculptures on the side of the outdoor pool. The problem with the water surface was that it didn’t wrap round well: there was a discontinuity at each end. This was solved by splitting the water surface graphic into two, left and right, and then swapping the sides over. This had the effect of putting the discontinuity in the middle of the frame, where it could be made to blend together using the clone and blur tools found in any photo-processing package. The swapping of left and right halves is easier using Photoshop: use the filter in Filters>other>over. The GIMP package can be used for an all-Pi graphics solution. A repeating pattern of red and yellow rectangles make up a lane divider for the bottom of the screen.
Mounting the components
270R
0.6V
5 4
910R
2
+ -
GPIO 8 <40cm
12
BLUE Fig 1 The schematic of the project
Attach the GPIO plug and IC socket as shown below. Add the solid wire links, shown in black, and solder the resistors in place. Complete the wiring by using flexible wires where indicated. Prepare the sensor bracket by drilling 3mm holes on one side of the angle aluminium to match those in the stripboard, and in the other to match the sensor.
5K6
LM339 910R
50
August 2016
raspberrypi.org/magpi
Tutorial Pin 1 of Raspberry Pi GPIO
470R
0.1uF
300R
270R
Signal
>STEP-03
Wire up the track side
Flip the board over and add the surface-mount resistors and capacitors; through-hole parts can be substituted if you prefer. Use flexible wires to make the other connections. Mount the three LEDs and angle them by about 45° towards the sensor; note here that the flat spot on the LED denotes the cathode, which is usually the shortest leg. Fix the sensor to a bracket, attach the bracket to the board, and wire it up to the sensor.
raspberrypi.org/magpi
- + Sensor
The swimming figure was isolated frame by frame from a swimming animation, and the complete sequence for a full stroke is 24 frames long. The figure had a water level line where the colours changed slightly, as if seen underwater. This was achieved by adding a blue tint to the colours under this line. However, this still gave the impression that the swimmer was swimming on top of the water. To get round this, the part of the figure under water was given a 66% opacity
August 2016
51
Tutorial
WALKTHROUGH
value. This was done by creating two layers in the photo-editing package with the same contents. Then the out-of-water half was removed from one layer and the underwater half from the other. The underwater layer was then set to 66% opacity and the picture saved as a PNG file. This merges or flattens the image before storage. The upshot is that the lower half of the figure is translucent and you can see some of the water surface ripple through the lower part of the swimmer. However, in practice, thanks to the way the brain interprets images, this looks like you are seeing the swimmer through the water ripple. Fig 2 shows this effect halfway through a race.
The software
The software is shown in the swimmer.py listing. The background scrolls repeatedly as the swimmer makes their way from the left to the right of the screen. The action looks smooth as long as you keep paddling with your hands. The LEDs will light in the sequence red, green, blue as you make one stroke. The start sequence is like a swimming race,
The sounds were pulled from a real race video with a whistle summoning the swimmers, then the instruction to ‘take your mark’, and finally a buzzer starting the race. The sounds were pulled from a real race video; we found the best races to use were the long ones, as they did not suffer from voice-over by commentators like the sprints did. All the sound and graphics files, along with the code, are on our GitHub repository (magpi.cc/1NqJjmV).
Taking it further
You could duplicate the distance sensor to make a side-by-side real-time race. Alternatively, using two sensors, you could detect a butterfly stroke.
swimmer.py 01. 02. 03. 04. 05. 06. 07. 08. 09. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 25. 26. 27. 28. 29. 30. 31. 32. 33. 34. 35. 36. 37. 38. 39. 40. 41. 42. 43. 44. 45. 46. 47. 48. 49.
import pygame, time, os, random import wiringpi2 as io pygame.init() # initialise graphics interface pygame.mixer.quit() pygame.mixer.init(frequency=22050, size=-16, channels=2, buffer=512) os.environ['SDL_VIDEO_WINDOW_POS'] = 'center' pygame.display.set_caption("The Pi Olympic Swimmer") pygame.event.set_allowed(None) pygame.event.set_allowed([pygame.KEYDOWN,pygame.QUIT]) screen = pygame.display.set_mode([1000,415],0,32) textHeight = 36 font = pygame.font.Font(None, textHeight) random.seed() compPins = [ 8,25,24,23] ledPins = [ 7, 11, 4 ] restart = True ; strokeState = 0 soundEffects = ["whistle","mark","go","end"] swimingFrames = [ pygame.image.load("images/ S"+str(frame)+".png").convert_alpha() for frame in range(0,24)] background = pygame.image.load("images/BackgroundPi.png"). convert_alpha() gameSound = [ pygame.mixer.Sound("sounds/"+soundEffects[sou nd]+".ogg") for sound in range(0,4)] def main(): global restart, strokeState initGPIO() print"The Pi Olympic Swimmer" while True: if restart: frame = 0 ; distance = 0 ; manDistance = -85 posts = 3 ; ledOff() ; strokeState = 0 restart = False ; showPicture(frame,distance,400,posts) time.sleep(2.0) ; gameSound[0].play() print"Mount" ; time.sleep(4.0) showPicture(frame,distance,manDistance,posts) print"Take your mark" ; gameSound[1].play() time.sleep(random.randint(2,5)) print"Start" ; startTime = time.time() gameSound[2].play() strokeDetect() showPicture(frame,distance,manDistance,posts) manDistance += 4 distance += 40 if distance > 3000: distance -= 2000
Fig 2 The screen halfway through a race
52
August 2016
raspberrypi.org/magpi
Tutorial
OLYMPIC SWIMMING SIMULATOR
Language >PYTHON 2.7 50. 51. 52. 53. 54. 55. 56. 57.
posts -=1 frame = frame + 1 if frame > 23: frame = 0 if posts == 0 and distance >=100 : raceTime = int(100*(time.time() - startTime)) gameSound[3].play() drawWords("Finished "+str(raceTime / 100.0)+" Seconds",400,258) pygame.display.update() print"Finished - type return for another race" while not restart: checkForEvent()
58. 59. 60. 61. 62. 63. def initGPIO(): 64. try : 65. io.wiringPiSetupGpio() 66. except : 67. print"start IDLE with 'gksudo idle' from command line"
68. 69. 70. 71.
os._exit(1) for pin in range (0,4): io.pinMode(compPins[pin],0) # mux pin to output io.pullUpDnControl(compPins[pin],2) # input enable pull up 72. for pin in range (0,3): 73. io.pinMode(ledPins[pin],1) # LED pin to output 74. io.digitalWrite(ledPins[pin],0)
99. 100. 101. 102. 103. 104. 105.
106. 107. 108. 109. 110. 111. 112. 113. 114. 115. 116. 117. def getSensor(): 118. sensor = 0 119. for i in range(0,4) : 120. sensor = (sensor << 1) |
121. 122. 123. 124. 75. 125. 76. def showPicture(frame,distance,manDistance,post): 126. 77. screen.blit(background,[-distance,0]) 127. 78. if distance > 1000 : 128. 79. screen.blit(background,[1999-distance,0]) 129. 80. drawWords(str(post),-distance+1932,220) 81. screen.blit(swimingFrames[frame],[60+manDistance,230]) 130. 131. 82. pygame.display.update() 132. 83. 133. 84. def drawWords(words,x,y) : 134. 85. textSurface = pygame.Surface((14,textHeight)) 135. 86. textRect = textSurface.get_rect() 136. 87. textRect.left = x 137. 88. textRect.top = y 138. 89. pygame.draw.rect(screen,(102,204,255), 139. (x,y,14,textHeight-10), 0) 140. 90. textSurface = font.render(words, True, 141. (255,255,255), (102,204,255)) 142. 91. screen.blit(textSurface, textRect) 143. 92. 144. 93. def strokeDetect(): 145. 94. global strokeState 146. 95. if strokeState == 0: 147. 96. while getSensor() != 1 : 148. 97. checkForEvent() 149. 98. io.digitalWrite(ledPins[2],0)
raspberrypi.org/magpi
DOWNLOAD: io.digitalWrite magpi.cc/1NqJjmV (ledPins[0],1) strokeState = 1 PROJECT return VIDEOS if strokeState == 1: Check out Mike’s while getSensor() != 3 : Bakery videos at: checkForEvent() magpi.cc/1NqJnTz io.digitalWrite(ledPi ns[0],0) io.digitalWrite(ledPins[1],1) strokeState = 2 return if strokeState == 2: while getSensor() != 7 : checkForEvent() io.digitalWrite(ledPins[1],0) io.digitalWrite(ledPins[2],1) strokeState = 0 return
io.digitalRead(compPins[i]) return sensor def ledOff(): for pin in range (0,3): io.digitalWrite(ledPins[pin],0) # LED off def terminate(): # close down the program print "Closing down please wait" for pin in range (0,3): io.pinMode(ledPins[pin],0) # LED pin to input pygame.mixer.quit() pygame.quit() # close pygame os._exit(1) def checkForEvent(): # see if we need to quit global restart event = pygame.event.poll() if event.type == pygame.QUIT : terminate() if event.type == pygame.KEYDOWN : if event.key == pygame.K_ESCAPE : terminate() if event.key == pygame.K_RETURN : restart = True print"New Race" # Main program logic: if __name__ == '__main__': main()
August 2016
53
Tutorial
STEP BY STEP
Upload files quickly via the Terminal to your Dropbox account
ROB ZWETSLOOT Tinkerer, sometime maker, other-times cosplayer, and all-the-time features editor of The MagPi. magpi.cc / @TheMagP1
You can also add Dropbox capabilities to your Python scripts – perfect for photo projects
View your Dropbox files online in your browser, or on another PC with a synced Dropbox folder
GET DROPBOX
ON RASPBERRY PI You’ll Need > Dropbox account dropbox.com > Dropbox Uploader magpi.cc/ 2aaHoJN > Your Dropbox API key
Connect to the most ubiquitous cloud service on your Raspberry Pi, perfect for uploading pictures and video in a project!
rop.
D
, 54
August 2016:
github.com/andreafabrizi/Dropbox-Uploader.git Once that’s downloaded, you’ll need to move to the folder (cd Dropbox-Uploader) to begin installing. You can start this off with:
./dropbox_uploader.sh It will ask for your API key, which is our cue to move onto to the next step. raspberrypi.org/magpi
GET DROPBOX ON RASPBERRY PI
Tutorial FREEING UP SPACE In a Bash or Python script using the function, you can always set it to delete the upload once it's sent, to free up space on the Pi.
Left Creating an app on Dropbox is easy; just make sure it has a unique name so you can get it working
>STEP-03
Find your API key You need to head to the developers’ section of Dropbox (magpi.cc/2aaQnKQ)-04
>STEP-06!
Now everything should be working and you can start uploading. Everything revolves around the dropbox_uploader file, so stay in the folder or make sure to have your code point towards the folder in the future. The code to upload is something like:
Enter your API key
. raspberrypi.org/magpi
Above From the Terminal, all you need to do is download the project to begin with: it's just a simple git request
Start uploading!
./dropbox_uploader.sh upload path/to/file dropbox_filename You can use this code in Python 3 by creating an OS call, using something like:
from subprocess import call Upload = "home/pi/Dropbox_Uploader/ dropbox_uploader.sh upload path/to/file dropbox_filename" call ([Upload], shell=True) Time to get uploading and experimenting!
OTHER COMMANDS As well as uploading, you can use it to download files. A full list of commands is at: magpi.cc/ 2aaHoJN
August 2016
55
Tutorial
WALKTHROUGH
SAM AARON Sam is the creator of Sonic Pi. By day he’s a research associate at the University of Cambridge Computer Laboratory; by night he writes code for people to dance to. sonic-pi.net
PART 12
EXPLORE THE POWER OF AMPLITUDE MODULATION Learn how to master Sonic Pi’s powerful slicing capabilities with its creator, Sam Aaron.... &&
You’ll Need
> Raspberry Pi running Raspbian
his month, we take a deep dive into one of Sonic Pi’s most powerful and flexible audio FX – the :slicer. First, listen to the deep growl of this code, which triggers the :prophet synth:
T
synth :prophet, note: :e1, release: 8, cutoff: 70 synth :prophet, note: :e1 + 4, release: 8, cutoff: 80
> Sonic Pi v2.7+ > Speakers or headphones with a 3.5mm jack > Update Sonic Pi: sudo apt-get update && sudo apt-get install sonic-pi
Now, let’s pipe it through the :slicer FX:
with_fx :slicer do synth :prophet, note: :e1, release: 8, cutoff: 70 synth :prophet, note: :e1 + 4, release: 8, cutoff: 80 end Hear how the :slicer acts like it’s muting and unmuting the audio with a regular beat. It also affects all the audio generated between the do/end blocks. You can control the speed which it turns the audio on and off with the phase: (phase duration) opt. Its default value is 0.25, which means 4 times a second at the default BPM of 60. Let’s make it faster:
with_fx :slicer, phase: 0.125 do synth :prophet, note: :e1, release: 8, cutoff: 70 synth :prophet, note: :e1 + 4, release: 8, cutoff: 80 end Now, have a play with different phase: durations. Good values to try are 0.125, 0.25, 0.5, and 1. Fig 1 shows how different phase: values alter the number of amplitude changes per beat. By default, the :slicer FX uses a square wave to manipulate the amplitude through time. Other control waves supported by :slicer are saw, triangle, and (co)sine – see Fig 2. The following code uses (co)sine as the control wave. Hear how the sound doesn’t turn on/off abruptly, but instead smoothly fades in/out:
with_fx :slicer, phase: 0.5, wave: 3 do synth :dsaw, note: :e3, release: 8, cutoff: 120 synth :dsaw, note: :e2, release: 8, cutoff: 100 end 56
August 2016
Fig 1
Play around with the different wave forms by changing the wave: opt to 0 for saw, 1 for square, 2 for triangle, and 3 for sine. See how different waves sound with different phase: opts, too. Each wave can be inverted vertically using the invert_wave: opt. The control wave can also be started at different points with the phase_offset: opt – a value between 0 and 1. By playing with phase:, wave:, invert_wave:, and phase_offset opts, you can dramatically change how the amplitude is modified through time. By default, :slicer switches between amplitude values 1 (fully loud) and 0 (silent). This can be altered with the amp_min: and amp_max: opts. Use this with the sine wave setting to create a simple tremolo effect:
with_fx :slicer, amp_min: 0.25, amp_max: 0.75, wave: 3, phase: 0.25 do synth :saw, release: 8 end This is like moving your hi-fi’s volume knob up and down just a little so the sound ‘wobbles’ in and out. raspberrypi.org/magpi
Tutorial
AMPLITUDE MODULATION
Language
One of :slicer’s powerful features is its ability to use probability to choose whether or not to turn the slicer on or off. Before the :slicer FX starts a new phase, it rolls a dice and, based on the result, either uses the selected control wave or keeps the amplitude off. Let’s take a listen:
with_fx synth synth synth end
>RUBY
:slicer, phase: 0.125, probability: 0.6 do :tb303, note: :e1, cutoff_attack: 8, release: 8 :tb303, note: :e2, cutoff_attack: 4, release: 8 :tb303, note: :e3, cutoff_attack: 2, release: 8
We now have an interesting rhythm of pulses. Try changing the probability: opt to a different value between 0 and 1. Values closer to 0 will have more space between each sound, due to the likelihood of the sound being triggered being much lower. Another thing to notice is that the probability system in the FX is just like the randomisation system accessible via fns such as rand and shuffle. They are both completely deterministic. This means that each time you hit Run, you’ll hear exactly the same rhythm of pulses for a given probability. If you would like to change things around, you can use the seed: opt to select a different starting seed. This works exactly the same as use_random_seed, but only affects that particular FX. Finally, you can change the ‘resting’ position of the control wave when the probability test fails, from 0 to any other position, with the prob_pos: opt:
with_fx :slicer, phase: 0.125, probability: 0.6, prob_pos: 1 do synth :tb303, note: :e1, cutoff_attack: 8, release: 8 synth :tb303, note: :e2, cutoff_attack: 4, release: 8 synth :tb303, note: :e3, cutoff_attack: 2, release: 8 end One really fun thing to do is to use :slicer to chop a drum beat in and out:
with_fx :slicer, phase: 0.125 do sample :loop_mika end This allows us to take any sample and create new rhythmical possibilities, which is a lot of fun. However, one thing to be careful about is to make sure that the tempo of the sample matches the current BPM in Sonic Pi, otherwise the slicing will sound totally off. For example, try swapping :loop_mika with the :loop_amen sample to hear how bad this can sound when the tempos don’t align. As we have already seen, changing the default BPM with use_bpm will make all the sleep times and synth envelope durations grow or shrink to match the beat. The :slicer FX honours this too, as the phase: opt raspberrypi.org/magpi
Fig 2
is actually measured in beats, not seconds. We can therefore fix the issue with :loop_amen above by changing the BPM to match the sample:
use_sample_bpm :loop_amen with_fx :slicer, phase: 0.125 do sample :loop_amen end Now, let’s apply all these ideas into a final example that only uses the :slicer FX to create an interesting combination. Go ahead, start changing it and make it into your own piece!
live_loop :dark_mist do co = (line 70, 130, steps: 8).tick with_fx :slicer, probability: 0.7, prob_pos: 1 do synth :prophet, note: :e1, release: 8, cutoff: co end with_fx :slicer, phase: [0.125, 0.25].choose do sample :guit_em9, rate: 0.5 end sleep 8 end live_loop :crashing_waves do with_fx :slicer, wave: 0, phase: 0.25 do sample :loop_mika, rate: 0.5 end sleep 16 end August 2016
57
Tutorial
STEP BY STEP
PHIL KING When not sub-editing The MagPi and writing articles, Phil loves to work on Pi projects, and to help his six-year-old son learn Scratch coding. @philking68
The boat crashes if it hits something brown, like this revolving gate The boat sprite is programmed to move towards the mouse pointer The timer is shown on screen, and stops when the boat reaches the yellow beach
You’ll Need > A mouse > Art assets magpi.cc/ scratch_art > Speaker (optional)
MAKE A BOAT RACE GAME IN SCRATCH
Create your own boat race game, complete with mouse control, collision detection, and on-screen timer
n this tutorial, you’ll be making your own arcade game in which the player attempts to guide a boat safely around a maze-like course, including a revolving gate, and get to the finish in as fast a time as possible. You can even design your own custom course if you like. As well as moving a sprite towards the mouse pointer, this project involves collision detection, using the touching color Sensing block to determine whether the boat has hit something. Let’s dive in and start coding…
I
This tutorial was adapted from a Code Club project (codeclub projects.org) and you can find more in Learn to Code with Scratch: magpi.cc/ Scratch-book
58
>STEP-01
Prepare your artwork First, delete the cat! You should then import the two sprites for the boat and gate. Since they’re not in the Scratch 1.4 library, you can download them (magpi.cc/scratch_art). Just click the star/folder icon above the Sprite List (bottom-right), then navigate to the folder where you’ve stored the downloaded graphics for this project. Import the Boat and Gate sprites. If you aren’t designing your own course, you can also download and import our Course backdrop: click Stage
August 2016
in the Sprite List, select the Backgrounds tab (topmiddle), then click Import and navigate to the folder.
>STEP-02
Design a course You could just edit our course. Alternatively, to create a brand new one, click on the Stage in the Sprite List, then the Backgrounds tab, and Paint. Use the paint bucket tool to fill the canvas with a blue colour for the water. Then use a brown colour, which should be the same as in the Gate sprite, to draw the walls of the course. Use a yellow colour to draw some sand for the finish. Finally, add some white arrows which will act as speed boosters. Once this is done, let’s make our Gate sprite rotate by adding the simple code in Listing 1 to its Scripts area.
>STEP-03
Controlling the boat In this game we’ll be controlling the boat with a mouse, using the code in Listing 2 in the Scripts tab of the Boat sprite. To do this, we simply point it towards ‘mouse pointer’ and move it one step at a time, within raspberrypi.org/magpi
BOAT RACE GAME
Tutorial .01
.02
Above: We used touching color Sensing blocks to detect when the boat has hit a hazard, booster, or the finish
a forever loop. To stop it from moving when near the pointer, we put the control code in an if block that only tells it to move if the distance to the pointer is greater than 5. Run the code and guide the boat: at the moment, it sails straight through barriers.
>STEP-04 Make it crash!
.03
What we need is some collision detection to check whether the boat has hit a hazard. Within your forever block, add the code from Listing 3 under your boat control code. Here, we use the touching color Sensing block to see if the boat has hit anything brown: click the colour square to get a dropper tool, then click on a brown part of the course. When it crashes, we switch the boat’s costume, say ‘Noooooo!’, then place it back at the start point in its normal costume. Let’s add two more if touching color blocks, shown in Listing 4, to our forever loop. The first checks whether the boat has reached the yellow beach, which acts as the finish line, and stops the program. The second detects the white of our booster arrows and moves the boat three steps.
.04
>STEP-05
Boosters and time To make our game a bit more exciting, we need a timer. Click the Stage and add the Listing 5 code to its Scripts area. This sets the time to zero at the start of the game, then gradually increases the time variable in line with real time; you’ll need to create the latter in Variables, and make sure it’s ticked so that it’s shown on the stage.
.05
>STEP-06
Taking it further You could easily add a sound effect for when the boat crashes, using a Sound block. You could even add background music, composing it using Sound blocks with various drums, instruments, and notes. The best time(s) could also be stored in a variable or list. raspberrypi.org/magpi
August 2016
59
Tutorial
STEP BY STEP
PHIL KING When not sub-editing The MagPi and writing articles, Phil loves to work on Pi projects, and to help his six-year-old son learn Scratch coding. @philking68
The poem is generated by selecting random words from lists When the computer is clicked, it beeps and shakes The user clicks on Ada Lovelace to start talking to her
ADA POETRY GENERATOR You’ll Need
> Art assets magpi.cc/ scratch_art > A list of words > An eye for poetry
Ada Lovelace unveils the Analytical Engine in Scratch! This early computer looks a bit primitive, but can generate random poems
n this Scratch project, the user first chats to Ada, before clicking on her computer to generate a random poem. To achieve this, we’ll be creating and using lists, found in the Variables block category, containing words of a certain type: verbs, nouns, adjectives, and adverbs. We’ll then select randomly from these lists to create the poem, which should be different each time. They can be quite amusing.
I
>STEP-01
Prepare your artwork This tutorial was adapted from a Code Club project (codeclub projects.org) and you can find more in Learn to Code with Scratch: magpi.cc/ Scratch-book
60
After deleting the cat sprite as usual, you need to import the sprites and backdrop. Since they’re not in the Scratch 1.4 library, you can download them (magpi.cc/scratch_art). As the Poetry backdrop is so simple – just a grey stripe at the bottom of a white canvas – you could paint it yourself, or just use ours by importing it from the folder where you’ve stored the downloaded graphics for this project. The same goes for the Banner sprite. Otherwise, import each sprite as usual, by clicking the star/folder icon above the Sprite List.
August 2016
>STEP-02
Ada says hello First, we’ll get our Ada sprite to interact with the user via speech bubbles and text input when clicked, using the say and ask commands. Open the Ada sprite’s Scripts tab and then type in the code from Listing 1. Note that you’ll first need to create a name variable: select the Variables block category from the top-left, then click ‘Make a variable’, ‘For this sprite only’, and enter ‘name’ in the text field. You should untick the name block to stop it showing on the stage. We can now set name to answer (the user’s text input), and then add it into Ada’s response by using the join Operator block. Make sure you put a space after ‘Hi’ to avoid it being joined together with the name. After this, we add a block to get Ada to tell the user to click the computer.
>STEP-03
Computer beeps Click the Computer sprite and select its Scripts tab. This is where we’ll add the workings of our poetry generator. To start with, type in the code from raspberrypi.org/magpi
ADA POETRY GENERATOR
Tutorial
Above: The script for Ada asks for the user’s name, then says hi before telling them to click on the computer for a poem
Listing 2. After a block to say ‘Here is your poem’ and the user’s name, we’ll use a Sound block to make our computer beep. Our Computer sprite already has the sound for this, or you can record/import a new one in its Sounds tab. We also add a repeat loop with two turn blocks to make the computer shake.
>STEP-04
Above: To add words to each list, tick it to make it appear on the stage, then click its ‘+’ icon
Create word lists
.01
You can’t make a poem without words. We’ll store ours in four lists: verbs, adverbs, nouns, and adjectives. Create each of these in Variables by clicking the ‘Make a list’ button, then ‘For this sprite only’, and typing its name. It will then appear on the stage: to add words to it, click the ‘+’ icon and type them in, one by one. When done, untick this list block to make it vanish from the stage. We used the following words for our lists:
Adjectives: happy, tired, hungry Adverbs: loudly, silently, endlessly Nouns: sea, moon, tree Verbs: laugh, dance, burp
.02
>STEP-05
Poetry in motion Now we have our word lists, we can use them to generate a random poem each time the computer is clicked by the user. Join the code from Listing 3 to the bottom of your existing script for the Computer sprite. It comprises four say blocks, each of which includes an item of Variables block; this should have ‘any’ selected from its dropdown menu, to make a random selection from the list. Test the project out a few times to check that it works properly and generates random poems.
.03
>STEP-06
Taking it further While we’ve only created short lists for this example, you could add lots more words to them for greater variation in the random poems created by the computer. More, and differently constructed, say blocks can also be added to make poems longer. If you’re not keen on blank verse, why not create lists of rhyming words? raspberrypi.org/magpi
August 2016
61
Tutorial
STEP BY STEP
WESLEY ARCHER Self-taught Raspberry Pi enthusiast, founder of Raspberry Coulis, and guide writer for Pi Supply and Cyntech. raspberrycoulis.co.uk @RaspberryCoulis
An arcade isn’t an arcade without the obligatory joystick! This one is perfect
Our buttons include NeoPixels to add some pizzazz, but this is optional
You’ll Need > Picade PCB (magpi.cc/ 29DpDCz) > Zippy ball-top arcade stick (eBay) > 2.8mm & 4.8mm arcade daisy chain wires (eBay) > Female A panel mount USB socket (eBay) > 8× 30mm arcade buttons (modmypi.com) > Various 12.7mm standoffs (modmypi.com) > 4× M4 countersunk 16mm machine screws and bolts > 3.5mm femaleto-male stereo jack extension
62
BUILD YOUR OWN
RASPCADE:
CONTROLS
In this second part of the build, we’ll be showing you how to assemble your controls as part of your RaspCade home-build! rcade controls can be mind-boggling, especially with all the wires involved. It is no surprise that people get confused, but there’s no need to panic. The joystick and buttons are essentially switches with positive and negative terminals. Today we’ll be going over wiring basics, and using Pimoroni’s Picade PCB as a brilliant way of setting up your RaspCade controls quickly and easily. If you haven’t already done so, buying dedicated arcade wiring looms or harnesses makes the whole process much easier because they are designed to be assembled quickly, without any soldering. Are you ready? Then let’s get started!
August 2016
A
>STEP-01
Plan your wiring first Before you put your controls in place, it is worth doing a test run first. Connect your wires and be sure you understand what goes where before placing them in your cabinet. The arcade wiring harnesses are perfect for this as the spade connectors simply slide on (and off) the buttons, so no soldering is required. Each button (including the joystick) has a positive and negative terminal. You can ‘daisy-chain’ the negative terminals (connect them together), which means you only have one ground wire instead of several. Once you’re happy, carefully press the buttons into place: they are a tight fit! raspberrypi.org/magpi
Tutorial
BUILD YOUR OWN ARCADE MACHINE 3 (6 total) positive wires to individual inputs on PiCade PCB|
Negative daisy-chain to ground on PiCade PCB|
It’s worth labelling your joystick when wiring. The switch is in the opposite direction, as it is activated when the joystick moves in that direction
A Fritzing diagram showing how to wire up your arcade buttons, including the daisy-chain connection
3 (6 total) positive wires to individual inputs on PiCade PCB|
>STEP-02
>STEP-04
If you’re using our arcade cabinet design, the four holes should line up with the mounting plate on the joystick. Unscrew the ball-top and slide the plastic collar off, then bolt the joystick in place using four M4 screws and bolts. You can use a countersink drill bit to tidy this up if you like, but it isn’t essential. Also, do not over-tighten the bolts as this could damage the cabinet. Once you have done this, replace the plastic collar and screw the ball-top back on. Now onto the wiring underneath...
Before wiring your buttons, carefully push them through the holes in the cabinet. They are a tight fit, so be gentle where you push to avoid snapping the cabinet. Once they are in, use the 2.8mm arcade harness and daisychain all the negative terminals (any can be used, but use the same one on each button) and strip the end again. Then connect one to each positive terminal so you end up with seven wires ready to connect to the Picade (six positive and one negative). We can now move onto the final two buttons on the front panel.
>STEP-03
>STEP-05
Connecting your controls is not as difficult as you might think. Grab your 4.8mm arcade harness and slot one connector onto each negative (black) terminal. Daisy-chain these so you have one wire connecting the four negative terminals. Next, use some wire cutters to cut and strip the end so you have a wire that can be connected to the Picade. You may need to extend this wire, but this can be done easily, even by a soldering newbie. Next, connect one wire to each of the positive (red) terminals. Each of these will be connected to the Picade, too.
The front panel consists of two arcade buttons, a USB port, and the headphone jack. The buttons pop into place the same way the others, and you also need to daisy-chain the negative terminals: you should have three wires (two positive and one negative). The USB port screws into place and the other end will be connected to our Pi. You can use this to connect a WiFi dongle, or a USB thumb drive to store all your games, making it simple to add more. The headphone jack screws into place, allowing you to play your RaspCade without disturbing everybody.
Bolt your joystick in place
Better get yourself connected!
Push the button
TIN THE WIRE ENDS If the wires don’t stay in the Picade PCB, tin their ends with a soldering iron to fatten them up.
Assembling the front panel
>STEP-06
Connecting to the Picade PCB
The Picade PCB is well worth the money. Button inputs are clearly labelled, which makes setting up your controls a breeze!
raspberrypi.org/magpi
Connecting all the wires is really simple thanks to the Picade PCB. This has several screw terminals, all nicely labelled for our arcade controls. Simply find the relevant terminal and then screw the wire into place. You should have eight wires for the buttons and four for the joystick, as well as three ground (negative) wires. It isn’t essential to put the wires in exactly the right place, so if your ‘up’ button is connected to the ‘down’ terminal, it can still be configured once in RetroPie. The Picade simply connects to the Pi via USB and that’s it!
BE GENTLE WHEN ASSEMBLING! The buttons are a tight fit and the cabinet is relatively thin. Be gentle when pushing the buttons into place.
August 2016
63
YOUR QUESTIONS ANSWERED
FREQUENTLY ASKED QUESTIONS
NEED A PROBLEM SOLVED? Email magpi@raspberrypi.org or find us on raspberrypi.org/forums to feature in a future issue.
Your technical hardware and software problems solved…
RASPBERRY PI
CABLES WHAT POWER CABLES DO I NEED FOR MY RASPBERRY PI?
Recommended power supplies If you want to use the Raspberry Pi to its fullest ability, with add-ons like the Camera Module, Sense HAT and suchlike, the official Raspberry Pi power supply provides enough juice to get it all working properly: magpi.cc/2a14pye
Mobile phone charger The Raspberry Pi is powered by a micro USB cable, the same one used by most mobile phones. The recommended minimum power for a Raspberry Pi is 2A (2.5A for a Pi 3), so check your charger to see if it will power it properly like the official power supply.
USB cable For light operations, especially if you’re running the Pi in command-line mode, you can power a Raspberry Pi from a computer’s USB port. This isn’t really recommended, though, so if you start having issues you’ll need to upgrade to a proper power supply. Right The official power supply for the Raspberry Pi provides very consistent power
WHAT AV CABLES DO I NEED? HDMI The HDMI port is your best bet for 1080p video and digital audio out from the Raspberry Pi. It’s the easiest one to use and a lot of software is preconfigured to use that by default, such as Kodi, the media centre software.
Audio out The 3.5mm jack can be used like the headphone port on music players, phones, and other PCs. However, you can also use it as an audio out to speakers that take auxiliary cables, or by using a 3.5mm-tocomposite stereo output converter.
Composite video On the Raspberry Pi A+, B+, 2, and 3, there isn’t an obvious composite video out. However, the 3.5mm jack also allows for analogue video out if you have the correct RCA converter cable to use it. The original Pi models have a dedicated composite out, and the Zero needs a port soldered to it.
HOW DO I USE WIRED NETWORKING ON THE PI? Ethernet port On most versions of the Raspberry Pi, there’s an Ethernet port next to the USB ports that you can use to connect to a wired network. In Raspbian, it will automatically connect to a network, although in rare cases you may need to specify an IP address for it.
USB Ethernet The Raspberry Pi Model A, A+, and Zero don’t include an Ethernet port. However, many USB-connected Ethernet ports can be plugged in and used in the same way as the standard Ethernet on other Pi models.
USB hubs with Ethernet These are a bit more interesting: USB hubs that also act as an Ethernet port, allowing you to connect all your USB devices to something like a Pi Zero, as well as keeping it hooked up to the wired network. 64
August 2016
raspberrypi.org/magpi
YOUR QUESTIONS ANSWERED
FROM THE RASPBERRY PI FAQ
RASPBERRYPI.ORG/HELP What displays can I use? Most Pi models have composite and HDMI outputs. So you can hook it up to an old analogue TV through the composite or a composite-to-SCART connector, or to a digital TV or to a DVI monitor (using a cheap, passive HDMIto-DVI cable for DVI). The Pi Zero uses a mini HDMI port. There’s no VGA support, but active adapters are available. Passive HDMI-to-VGA cables won’t work with the Raspberry Pi. When purchasing an active VGA adapter, make sure it comes with an external power supply. HDMI-to-VGA adapters without an external power supply often fail to work. Why is there no VGA support? The chip we use supports HDMI and composite outputs, but doesn’t support VGA. VGA is considered to be an endof-life technology, so supporting it doesn’t fit with our plans at the moment. However, if you really want to use a VGA monitor with a Raspberry Pi, then it is possible using an HDMI-to-VGA adapter.
Does the HDMI support CEC? Yes, the HDMI port on the Raspberry Pi supports the CEC standard. CEC may be called something else by your TV’s manufacturer; check the Wikipedia entry on CEC for more information. Can I add a touchscreen? The Foundation provides a 7″-inch capacitive touchscreen that utilises the Pi’s DSI port; the screen the Swag Store: MPEG-2, a very popular and widely used format to encode DVDs, video camera recordings, TV, and many others; and VC‑1, a Microsoft format found in Blu‑ray discs, Windows Media, Slingbox, and HD-DVDs.-download
August
August#48
August 2016
67
Feature
SCRATCH
OLYMPICS
2016
68
August 2016 xxxx 2016
raspberrypi.org/magpi
Feature
SCRATCH OLYMPICS 2016
Want to check your work? Go to: magpi.cc/ScratchOlympics
GO F O R GO LD WI TH T H ESE P R O J E C T S T H aT WIL L T U RN Y O U I NT O aN I NTE RNaT I ONaL C O D I NG a T H LE TE he Olympics are upon us! That sporting spectacle of incredible skill, where the world’s finest athletes come together to represent their country and show the world just how hard the human body can be pushed. If you’re not an Olympic athlete, you can always watch it on TV, but if you want to take part in some way, we have a solution. Using Scratch – the Pi version or online editor, depending on the project – we can create our own Olympic games in the comfort of our own homes and maybe, just maybe, it can prepare you for one day training up to become an Olympic athlete yourself.
T
raspberrypi.org/magpi
xxxxx 2016 August
69
Feature News Find this resource and more like it on the Code Club projects website: codeclubprojects.org
aRCHERY Let loose a few virtual arrows and fulfil your fantasy of being Robin Hood
et’s start by creating an arrow that moves around the screen for this fun archery event. Note that since this project uses a block type not present in Scratch 1.4, you’ll need to work on it in the online Scratch editor; to open the resources, go to magpi.cc/29KO9S3 and follow this tutorial there. When your game starts, broadcast a message to shoot a new arrow. Once this message has been received, set the arrow’s position and size (Fig 1). Click the green flag to test your game; you should see your arrow get bigger and move to the bottom-left of the stage.
L
.01
70
August 2016 xxxx 2016
raspberrypi.org/magpi
SCRATCH OLYMPICS 2016 .02
News Feature
Event Info name: variations:
Archery Individual, Team
introduction:
Add the code circled in Fig 2 to your arrow so that it glides randomly around the stage forever. Then test your game again, and you should see your arrow move randomly around the stage.
1900
description:
An ancient weapon-turned-sport of skill, the aim is to pierce the centre of a target for maximum points
Shooting arrows
Let’s code your arrow to shoot when you press the space bar (Fig 3). This will stop the other script, the one moving the arrow, when the key is pressed. Test
.04
When you press the space bar you should see your arrow get smaller, as if it’s moving your project again. This time, your arrow should stop moving when you press the space bar. Now we can animate your arrow by reducing its size, using a change size block inside a repeat loop. Test your game again. This time, when you press the space bar you should see your arrow get smaller, as if it’s moving towards the target.
.03
Once your arrow is at the target, you can tell the player how many points they’ve scored. For example, they could score 200 points for hitting the yellow. You can also play a sound if they hit that colour (Fig 4). Finally, you need to broadcast the new arrow message again to get a new arrow.
raspberrypi.org/magpi
xxxxx 2016 August
71
Feature News
Weightlifting Lift the heaviest thing you can to win big. Be careful with your keyboard!
I
n this activity, you will make a game in Scratch on the Pi that allows you to test your keyboard-mashing skills. By repeatedly hitting the keyboard, the character will lift their barbell high into the air. There will also be the opportunity to have a play with some physical computing, and introduce some arcade-style buttons into the game. For this project, you’re going to need a sprite and a background. You can download a zip file containing all the game assets by going here: magpi.cc/29AVkj5. Once the file has been downloaded, you can unzip the archive by right-clicking on it and choosing unzip. You should see two directories: one containing the weightlifter’s costumes and the other containing the background.
Importing the assets into Scratch
Open Scratch by clicking Menu > Programming > Scratch. Now click on the Stage icon and then drag and drop the Olympics background into the Backgrounds tab. You can delete the original background. Next, click on Sprite1 and then, one by one, drag and drop the Weightlifter costumes into the Costumes tab. You can delete the original costumes.
Testing the animation
You can check whether the animation will work using the simple script in Fig 1:
.01
Event Info name: variations:
Weightlifting Snatch, Clean and Jerk
introduction: description:
The ultimate test in strength, divided by weight class to create a more equal playing field
72
August 2016 xxxx 2016
1896
Click on the green flag and watch the weightlifter do his thing!
Capturing the speed of key presses
The progress the weightlifter makes is going to be controlled by the speed at which the player can hit the Z and X keys, so you need to create some scripts that will capture this data. You’re going to need two variables in this game. The first, called progress, will be used to record how far into the lift the weightlifter has managed to get. The second, called last_key, will be used to store the last key press the player made. Create these two variables by clicking in Variables and then clicking on Make a variable. Start your script by setting progress to be 1 and last_key to be x when the game starts. The player must switch between hitting the X key and the Z key for the progress to increase. So when the X key is pressed, your script needs to check that the last key press was Z. If it was, then progress can be increased and the last_key can be switched to x. This is called conditional selection. The action only occurs if a variable is at the correct value (Fig 2):
raspberrypi.org/magpi
News Feature
SCRATCH OLYMPICS 2016 .02
Of course, you want the same sort of thing to happen when you press the Z key. You can duplicate the script by right-clicking on it and selecting duplicate from the context menu, then change the duplicated script to switch x for z and vice versa. Test that your game works by clicking on the green flag, and then repeatedly hitting the X and Z keys on the keyboard. You should see the variable progress increasing. The faster you hit the keys, the faster it will increase.
Making the character lift
There are a total of 29 costumes in the game. The sprite’s costume can be continually set so that it’s the same as the progress variable. That way, as progress increases, the costume will change. When progress reaches 29, the game can end. You will need a forever loop for the main logic of the game. Find a forever loop in the Control section and add it to the bottom of the first/main script. Now place another conditional block within the forever loop; this time, you can use an if else block. If progress reaches 29 then the sprite will say "I win", the costume is set back to costume 1, and the script is stopped. If 29 has not yet been reached, then the costume can be set to the same value as progress. You can’t directly set a costume to a specific number in Scratch, but you can use the round Operator block to set the costume to its number (Fig 3):
.03
Have a go at testing your script. Click the green flag and then start hitting the X and Z keys alternately to watch the weightlifter go. It’s a good idea to reset the costume back to number 1 each time the script starts:
Making it a little trickier
If you stop hitting the X and Z keys, then the weightlifter just stops lifting. It would be good if he started to put the weight back down if the player’s speed on the keyboard decreases. This can be done by decreasing the value of progress every once in a while. To start with, create a new variable and call it difficulty. This can be set to -1 in the main script. Grab a new when green flag clicked block and place it into the Scripts area. You can now use a forever if loop, which will run as long as a variable is at a certain value. You want it to run as long as progress is greater than 1 (Fig 4). Inside the forever if loop, you’ll need to keep changing progress by the value of difficulty. As difficulty is a negative number, this will keep reducing progress until it reaches 1. You’ll need to use a little wait command as well; since computers are so fast, there’s no way a player could keep up with the computer otherwise. Waiting for 0.4 seconds will do to start with.
.04
raspberrypi.org/magpi
Find this resource and more like it on the Raspberry Pi resources website: magpi.cc/ 1qEg9Nh
xxxxx 2016 August
73
Feature News .05
Test your game to see the weightlifter pick up the weight as you hit the keys, but lower it again if you stop pressing them.
Looping the animation
Let’s make the game a little more interesting now. You can start off by using a little loop effect at the start of the game. Pay attention to this next part, as you’ll be using the same techniques later on. The first four costumes can be looped to make the sprite look like he’s getting ready to start. You’ll need to get a when I receive block. Click on the
See the weightlifter pick up the weight as you hit the keys, but lower it again if you stop
If you click on this block, it should have a halo of white around it, and the sprite should start looping. To finish this section off, the starting block needs to be triggered from within the main script (Fig 6) whenever progress is 1, 2, 3 or 4, or in other words,
starting. The first thing that happens in starting
<5 and >0. If it is, then starting can be triggered and the script can wait until progress is greater than 4.
is a switch to the first costume. This is followed by a
This all goes into the main script.
little black arrow and create a new broadcast called
repeat until block; you can make the code inside it
.06
run until the game begins. You’ll know that the game has begun because progress will increase above 1. Use a > operator from Operators to help you build the script (Fig 5). You can change the costume inside this loop. If the costume reaches number 4, then it needs to be reset back to costume 1 again. You’ll need a couple of wait blocks as well, so you can actually see the costume changing.
Run your script and the weightlifter should glance left and right. When you start hitting the X and Z keys, he should start to lift. If you stop hitting the keys, he’ll return the weight to the floor and then start glancing left and right again.
Adding in a strain stage
You can add a second difficulty level by playing around with the timings and how they affect the costume; go to magpi.cc/29MuJPv to find out how to complete it!
74
August 2016 xxxx 2016
raspberrypi.org/magpi
SCRATCH OLYMPICS 2016
News Feature
Hurdles What’s more exciting and challenging than running on a track? Adding some obstacles to jump over!
n this activity you will make a hurdles game using Scratch, where the speed of the runner is controlled by how fast you can hit the X and Z keys, and perfect timing is required to jump over the hurdles at exactly the right time. Download the zip file to your home directory and unzip its contents: magpi.cc/29NUFYk. Open Scratch by clicking on Menu > Programming > Scratch. Now, click on the background icon and import the new background from the assets directory. You can then delete the old background. Click on the icon to import a new sprite and then choose the run-1 image. Next, import run-2, run-3, and run-4 as additional costumes. You can then delete the old cat sprite.
I
Event Info name: variations:
Hurdles 100 metres, 110 metres, 400 metres
introduction:
1896
description:
Get to the end of the course as fast as you can, but remember to jump over the hurdles in your way
Capturing the key mashing
.01
The first step is to capture the X and Z key presses, and use the speed at which the player is pushing the keys to control the size of a variable. To do this you’ll need a variable that stores the last known key press. Create a variable called last_key and set it to z when the green flag is clicked. For the next script you’ll need a new variable called speed, so go ahead and create it now. It can be set to 0 when the game begins (Fig 1). When the X key is pressed, if the last_key is equal to z, then the speed variable can be increased and the last_key can be set to x. This will ensure that the player can’t cheat and keep hitting the X key to make the speed increase. The same can be done for the Z key. In combination, these two scripts force the player to hit the keys alternately in order to increase the speed variable. Now test your script. Click the green flag, then repeatedly press the X and Z keys and watch the speed variable increase.
Animating the hurdler
At the moment, the hurdler has four costumes as part of what’s called a walk cycle (or run cycle in this case). When these costumes are switched, the character appears to run on the spot. The time delay between costume switches should depend on the
raspberrypi.org/magpi
speed variable. The higher the speed, the quicker the costume change should be and therefore the smaller the delay. You can get this effect by dividing 1 by the speed variable to calculate a delay. If you run this script as it is, you’ll get an error, because speed starts off with a value of 0. This means
xxxxx 2016 August
75
Feature News the computer is trying to calculate 1 / 0, which it can’t do. It’s a very common error that programmers make in their code. To fix this, you can use a conditional to make sure that the calculation only occurs when speed is larger than 0 (Fig 2).
.03
.02
Test your script; it might surprise you to see that the character’s costume doesn’t change. This is because the walk cycle you set up previously is still working. You’ll need to stop this walk cycle when the character is jumping. To do this, you can use an and conditional operator to check that both speed > 0 and jumping = False for the walk cycle to work (Fig 4).
.04
Now you should be able to test your script and watch the hurdler running on the spot as you press the X and Z keys.
Get jumping
Hurdlers need to jump. You’ll need a few more costumes for this part, so look in the runner directory and import the jump-1 and jump-2 costumes for your hurdler. You’ll need a new variable for this part called jumping. This is because other scripts will need to know when the character is jumping. Create the new variable on the green flag block and set it to False. The character should jump when the space bar is pressed. The first thing that happens is the jumping variable should be set to True, then the costume can be changed to jump-1 and the character can glide upwards. Next, the costume can be changed to jump-2 and the character can glide back down again. Finally, the jumping variable can be returned to False to indicate that the jumping animation has finished (Fig 3).
76
August 2016 xxxx 2016
Now have a go and you should find your character jumps when the space bar is pressed.
Slowing down
At the moment, the more you press the X and Z keys, the faster the character runs. There needs to be a way of slowing the hurdler down, so she doesn’t win too easily. This can be done on your initial script that sets the starting variables. You just need to add an infinite loop that will check if the speed is greater than 1, and then lower it every few 100ths of a second (Fig 5).
raspberrypi.org/magpi
News Feature
SCRATCH OLYMPICS 2016
.07
.05
This can be achieved using two and logical operators, checking if jumping = False, that x position > x position of hurdle – 5, and x position < x position of hurdle + 5. If all those conditions are met, then she must have hit the hurdle and her speed can be dropped.
Making an end to the game
Adding in hurdles
For the final part of this tutorial, you can add in hurdles that the character will have to jump over. Import the hurdle.png sprite from the assets/items directory. This sprite needs to begin at the far right of the screen, then it should continually move left across the screen at a pace that’s proportional to the speed of the character. When it hits the far left of the screen, it should instantly appear on the right again (Fig 6).
.06
To finish off a completed game, you need to add in a finishing line. You can find one in the assets/ items directory. Import this as a new sprite into your Scratch game, and approximately position it into the runner’s lane. To start off, you need to use a variable to control how far the hurdler has to run. Create a new variable and call it distance. The first script to be added to the finish line will set distance to 0 when the game begins, position the finish line to the far right of the screen, and hide it. Next, distance has to be increased by the speed of the runner every second (Fig 8).
.08
Making the hurdles an obstacle
At the moment, the runner can just plough straight through the hurdles. She needs to be slowed down if she doesn’t jump. Back on the hurdler sprite, add in a new when green flag clicked block. This next part is a little complicated. The runner should be slowed down if she isn’t jumping, has an x position just before the hurdle, or has an x position just after the hurdle (Fig 7).
raspberrypi.org/magpi
Now that the finish line is ready to go, you can make it appear when the value of distance hits whatever value you desire (200 in this example). It can then begin to move across the screen towards the hurdler. When she touches the finish line, detected by touching color, all the game scripts should end – see magpi.cc/29WZV9f.
Find this resource and more like it on the Raspberry Pi resources website: magpi.cc/ 1qEg9Nh
xxxxx 2016 August
77
Feature News
Synchronised swimming Swim to the beat and wow the judges with your stylish water moves
L
Event Info name: variations: introduction:
et’s start by getting one cat swimming. Note that since this project uses a block type not present in Scratch 1.4, you’ll need to work on it in the online Scratch editor. First, let’s turn the stage blue, like a swimming pool. Click on Stage, then the Backdrops tab. Choose a blue colour from the palette; click the ‘Fill with color’ tool, then click the backdrop. You’re going to use a different cat sprite, so rightclick on the walking cat to delete it. Now click ‘Choose sprite from library’, select Animals, choose Cat1 Flying and click OK. Now let’s get the cat swimming. Click on the Cat1 flying sprite, then Scripts, and add code to make the cat rotate left and right when you press the left and right arrow keys (Fig 1). Test your code by pressing the left and right arrow keys on the keyboard, and add forward and backward movement. Test your code by swimming around the stage using the arrow keys.
Synchronised swimming Team, Duet 1984 Summer Olympics, Los Angeles, USA
description:
Originally known as water ballet, it involves team of swimmers performing a coordinated routine to music
.01
Changing costume
.02
This would look better if the cat sprite changed direction when it turns left. Click on Costumes and delete the cat1 flying-a costume. Rename the remaining costume from cat1 flying-b to right. Right‑click on the costume and choose duplicate to create a copy, then click the ‘Flip left-right’ icon to reverse the copy and name it left. Click Scripts to return to your code and add blocks to change the costume when the direction is changed (Fig 2). Test your code by swimming around the stage using the arrow keys.
Create the team
Synchronised swimming needs more than one cat! We can use create clone of to make copies of the cat sprite that behave in the same way. First, let’s add code to make sure the cat always starts in the same position when you click the green flag (Fig 3). Test your code by pressing some arrow keys and then clicking the green flag to return to the start position.
78
August 2016 xxxx 2016
raspberrypi.org/magpi
News Feature
SCRATCH OLYMPICS 2016
Find this resource and more like it on the Code Club projects website: magpi.cc/29KATgh
.03
Now we can use a repeat loop to create six clones (copies) of the cat. Loops are used to do the same thing multiple times. You don’t want all the cats to be in the same position, though, so add code to rotate 60 degrees before creating each clone. Test your code by using the arrow keys. You should be able to create some amazing synchronised swimming patterns!
.04
Putting the play sound block inside a forever loop means the music will keep repeating. You can now test your project again; remember, you can click on the red stop button to stop the music playing!
Programmed routines
Would you like to be able to perfect a routine and easily repeat it? Let’s add some moves to be performed when the space bar is pressed (Fig 5). Run your project and press the space bar to test the new routine. Try using the arrow keys to move to a different position before pressing space.
.05
Music!
A synchronised swimming routine needs music. (But if you can’t play sound then you can skip this step.) Click on the Sounds tab and then click ‘Choose new sound from library’. Select Music Loops, choose some music, and click OK. Then go back to Scripts and add the blocks to play your music (Fig 4).
raspberrypi.org/magpi
xxxxx 2016 August
79
Review
WILDLIFE CAM KIT
WILDLIFE CAM KIT
Capture wildlife photos with this weatherproof, Pi-powered camera trap
Related CAMDEN BOSS ENCLOSURE This mini, wallmountable acrylic case will protect just the Camera Module, not the Pi, and isn’t weatherproof.
£9 / $12 magpi.cc/29Hsnh0
80
ver wondered what kind of critters visit your garden whenever you’re not around to scare them off? With the Wildlife Cam Kit, you can find out. Its PIR sensor will sense any movement in the vicinity and trigger its Pi Camera Module to take a stealthy snap of whatever’s passing by. You may recall that we’ve followed the progress of this Kickstarter-funded project in previous issues of The MagPi, but now it’s finally out in the wild. Designed by Naturebytes, a trio of digital makers and wildlife enthusiasts, its aim is to give users a fascinating insight into the natural world while also enhancing their digital making skills. To this end, it comes in kit form, although no soldering is required. It takes an hour or so to put together, following the detailed online PDF instructions (magpi.cc/29HamiP).
August 2016
E
The latter are well-illustrated with plenty of photos, even if a couple were slightly misleading.
Constructing your camera A laser-cut plastic insert is provided to suit whichever Raspberry Pi model you’re using; the standard kit comes with an A+ due to its lower power usage, but it could even be used with a new Pi Zero v1.3. Screws and plastic spacers are supplied to fit most of the components – including PIR sensor, Camera Module, and Pi – to the insert, threading jumper wires through its strategically placed holes. It’s clear that much thought has gone into its design; you even get bendy ties to push through pairs of holes to secure the wires tidily. Still, it wasn’t so easy to fit the jumper wires to the Pi’s GPIO pins through the large cut-out, requiring a small
screwdriver to push them into place. A real-time clock module is also fitted to five of the pins (since the Pi doesn’t have one built-in), to enable accurate date/timestamping of photos without an internet connection. Finally, an Adafruit Powerboost is fitted to the insert and connected to a LiPo battery. This is to boost the latter’s 3.7V output to the 5.2V needed to power the Pi and other components. A power switch has been added to the Powerboost to make it easier to switch the camera on and off when outdoors. Before you can use it, however, you’ll need to charge up the battery, which takes around 17.5 hours; it might be advisable to start doing this before assembling the rest of the kit. Naturebytes claims it will provide around 30 hours of power; in our tests it lasted up to three days between charges. raspberrypi.org/magpi
Review
WILDLIFE CAM KIT magpi.cc/29O2evf
From £140/$185 Maker Says A camera that anyone can build to take stealthy high-definition images of wildlife Naturebytes
Testing it out
Before positioning the camera trap outdoors, it’s a good idea to hook it up to a TV to check everything’s working properly. The supplied SD card features a customised version of the Raspbian operating system, featuring a Naturebytes logo and desktop shortcuts for various tools, including PIR and camera tests. The main Python script for taking photos can also be tweaked. By default, it adds graphical overlays – a date/time stamp and Naturebytes logo – to each photo taken. However, we found that this was causing a lengthy delay (of up to two minutes) when writing to the supplied USB stick, possibly caused by a lack of RAM in the 256MB Pi A+ to handle the I/O process. Naturally, this is undesirable when capturing wildlife photos, as the critter may have vanished in the meantime. Fortunately, commenting out the code lines for the overlays solved the problem. Writing to an SD card is even quicker, although it’s far more convenient to simply remove the USB stick and plug it into another PC in the house to view the photos, without having to bring the whole device back inside. raspberrypi.org/magpi
Venturing outdoors, a Velcro strap is used to fasten the Wildlife Cam to a tree or fence post. The green acrylic case itself seems very tough and durable and snaps tightly shut, secured by two plastic clamps. Naturebytes has tested it in all weather conditions, and we experienced no leakage whatsoever following a downpour. On the front, a circle of clear plastic protects the Camera Module, while another translucent cover guards the PIR sensor.
Start snapping
Once in position, toggling the Powerboost switch turns on the camera and the picture-taking Python script will start running automatically. Our first attempt yielded a shot of a wood pigeon in flight, plus numerous photos of nothing: false positives triggered by the changing amount of sunlight. This effect can be reduced by repositioning the camera to avoid morning or afternoon glare, or by adjusting the sensitivity of the PIR. You can lure wildlife in front of the camera trap by placing it next to a bird table – we improvised by pouring some seeds onto an old garden table and managed to obtain a great shot of a magpie! There’s also an optional wooden arm to hang a bird-feeder in front of the camera.
Whatever tactic you use, viewing the photos at the end of the day is an exciting prospect, just to see what has been captured. While not ready at the time of writing, Naturebytes is also working on an online community hub where users can share their wildlife photos, which should add an extra dimension to the project. Ideal for educational use, the Cam Kit is also quite versatile and could be used for time-lapse photography, night-time shots (with a Pi NoIR camera and IR LED lighting), or even a live video feed.
Last word While more expensive than originally envisioned, the Wildlife Cam Kit is easy enough to assemble and fun to use. Once we’d sorted out the photo-taking delay issue, it worked well outdoors and the anticipation of seeing what it had captured was exciting. That lengthy battery recharge time is a little annoying, but it does power the unit for quite a while. Most importantly, the weatherproof case is very robust and may soon also be available as a standalone unit.
August 2016
81
Review
RASPIO ANALOG ZERO rasp.io/analogzero
£11 / $15 Maker Says Read up to eight analogue inputs at once RasPiO
RASPIO ANALOG ZERO It makes the reading of analogue sensors as easy as Pi hile its mini form factor makes the Analog Zero a perfect partner for the Pi Zero, it’s a great way to add easy-to-use analogue inputs to any Raspberry Pi model. Supplied as a kit, it’s based around the MCP3008 analogue-to-digital converter (ADC) chip, but avoids all the intricate wiring usually required when using an ADC. The great thing about using this particular chip is that it’s already supported by the GPIO Zero Python library with its own class, so it’s a doddle to start writing programs to read and compare up to eight analogue inputs at once. Just use jumper wires to hook up your analogue sensors – temperature probes, light-dependent resistors, humidity sensors, gas detectors, potentiometers etc. – to any of the eight inputs in a female header, then write a few lines of code to get instant readings. Voltages
W
Related AD/DA EXPANSION BOARD Based on the PCF8591T 8-bit ADC chip, it has four analogue inputs, WiringPi support, and can also do DAC conversion.
£9 / $12 magpi.cc/1syGYDs
82
August 2016
up to 3.3V can be read directly; if the input is higher, you’ll need to use a voltage divider made from resistors. Potential projects include a digital thermometer, voltmeter, and weather station, and kits for all of these were offered as part of the Analog Zero’s successful seven-day Kickstarter campaign. If you require greater accuracy than the MCP3008’s 10 bits, you always have the option of swapping it out for your own 12-bit MCP3208 ADC chip for extra precision, since it fits the same socket and is also supported by GPIO Zero. Even so, the MCP3008’s 1,024 steps should be enough for most projects. Although the Analog Zero makes things easier once assembled, note that you do have to do a bit of soldering beforehand, but everything’s well marked out on the board. As well as the chip socket, you’ll need to solder on the small female header for the
analogue inputs, a couple of capacitors, a jumper switch, plus a 40-way female header to connect to the Pi’s GPIO port. Handily, the board features through-holes for 25 GPIO pins, along with a mini 54-point prototyping area. There’s also the option to create a sleeker version by soldering the chip directly to the board and using surface-mount capacitors on the rear.
Last word We’d have appreciated a pre-assembled option, but once you’ve soldered the kit components onto the board, the Analog Zero really does make it much easier to use multiple analogue inputs for projects, particularly when using GPIO Zero.
raspberrypi.org/magpi
PICO-8 lexaloffle.com
Review Maker Says
£12 / $15
A fantasy console for making, sharing, and playing tiny games and other computer programs
PICO-8
Lexaloffle
Build, play, and share 8-bit games with this imaginary console with built-in code-, sprite-, and music-editing tools magine.
I
Related PYGAME
Pygame is a more powerful set of gaming modules that uses Python. It's not got the same holistic environment as PICO-8, though, but you can build some impressive games with it.
Free pygame.org
raspberrypi.org/magpi lowspec nature of the console helps newcomers get started. Code.
Community service.
Last word We had a huge amount of fun with PICO-8, and it's a natural fit for the Raspberry Pi.
August 2016
83
Review
BOOKS
RASPBERRY PI BESTSELLERS
MAKE: FUN!
Combines a static type system and functional first design with a type-inferred object system.
As the summer holidays stretch ahead, with the inevitable promise of rainy days and bored kids indoors, it’s time to turn to some quick and simple toy making. Independent toy inventor Bob Knetzger passes on his passion for making and inventing through a series of small projects to show just what’s possible when a little imagination is applied to a few everyday objects.
OCAML
OCAML FROM THE VERY BEGINNING Author: John Whitington Publisher: Coherent Press Price: £24.99 ISBN: 978-0957671102 ocaml-book.com John Whitington’s introduction to OCaml as a modern generalpurpose programming language will get any newbie up to speed, and its exercises will deepen your understanding.
THINK OCAML
Authors: Allen Downey and Nicholas Monje Publisher: Green Tea Press Price: Free download greenteapress.com
OCaml as a first programming language? Yes, with this rewrite of Think Python (see last month), for functional programming. A free download that’s well worth the time you’ll need to invest.
REAL WORLD OCAML Authors: Anil Madhavapeddy, Yaron Minsky and Jason Hickey Publisher: O’Reilly Price: £26.50 ISBN: 978-1449323912 oreil.ly/29UoLGY For confident beginners, or those with a little experience in OCaml or any other language, this is a deep dive into the possibilities of this functional programming language.
84
August 2016
Author: Bob Knetzger Publisher: Maker Media Price: £16.50 ISBN: 978-1457194122 oreil.ly/29UpdVL
Knetzger’s projects get you learning by making, giving the reader the skills and tools needed to finish a Raspberry Pi project’s casing and for physical
LEARNING PYTHON FOR FORENSICS Author: P reston Miller and Chapin Bryce Publisher: Packt Price: £38.99 ISBN: 978-1783285235 magpi.cc/29UoPX6
Continuing the theme of learning by doing, here’s an immersive coding experience for intermediate and not-quite-beginner Python programmers that teaches both forensic analysis and real-world Python scripting in tandem. Right from the opening Python intro, best practices for forensics are integrated into the learning. Python 2.7 is used, which is no longer satisfactory, and with the rise of Python 3 this is hopefully one of the last books to be written on Python 2. Nevertheless, even for relatively new learners, a
interactions. While some of the parts listed will take a little research for European readers to translate from American trade names and store references, almost everything is readily available, and no project here is beyond a keen beginner on a budget. Starting with kit-bashing and non-destructive modifications to existing toys, Knetzger ranges widely, taking in the mathematics of gnome hats and the myriad possibilities of polystyrene packaging foam. When a foundry is needed, or a strip heater to bend acrylic, Make: Fun! even shows you how to make these tools. From talking toys and 3D hacks, to edible lenses and a homemade yakitori grill, there’s something to inspire everyone’s imagination here.
Score familiarity with Python 2 is still useful for working on, understanding, and re-factoring old code. You’ll still want to supplement this book with one of the many Python 3 books we’ve reviewed, and move your forensic scripts on where possible. After chapters on working with serialised data, and with databases for dealing with large quantities of data, a framework is created for parsing embedded metadata – useful programming skills for other fields – and this is returned to for building a full forensic framework in the final chapter. Rounded off by a useful appendix on troubleshooting exceptions, this look at an unfortunately all too necessary subject is a good way of developing Python skills, albeit Python 2.
Score raspberrypi.org/magpi
Review
BOOKS
EXPLORING RASPBERRY PI Author: Derek Molloy Publisher: Wiley Price: £23.99 ISBN: 978-1119188681 exploringrpi.com
As the author notes in the introduction, the Pi’s integration of high-level Linux software with low-level electronics circuits represents a paradigm shift in embedded systems development. Molloy complements this statement with an introduction to embedded development for the Pi that takes in low-level hardware interfaces and high-level libraries, embracing every bus and interfacing option. The first section features hardware, software, and electronics tutorials to prime you with the basics you’ll need for the rest of the book, from using GitHub, through
EXPERT F# 4.0
FOUTH EDITION Authors: D on Syme, Adam Granicz and Antonio Cisternino Publisher: Apress Price: £41.50 ISBN: 978-1484207413 magpi.cc/29UpGXK
F# 4.0 is a “mature, opensource, cross-platform, functional first programming language”; if you still think of Microsoft as the ‘embrace and extend’ company driving everyone onto the Windows platform, it’s time to make a fresh assessment. Not only is F# all the things quoted and available on Raspbian (sudo apt-get install fsharp), it’s also a decent language, and well worth investigating if you’ve any work to get done on the Mono/.NET platform. Expert F# 4.0 gives you the wisdom of the language’s creator, Don Syme, aided by two maintainers raspberrypi.org/magpi
scripting language choice, to the role of pull-up resistors. Along the way you’ll read some great little intros to everything from how computers perform binary arithmetic, to crimping together custom cables for the GPIO, and plenty of useful tips to make your work easier. Weighing in at nearly 700 pages, the bulk of the text is in the second and third parts, detailing interfacing the Pi to the physical environment and the internet, rounded off with a short intro to kernel programming. From adding extra UARTs, to writing C and C++ code for the Pi’s communication buses, Molloy’s succinct and useful examples ensure that this reference will spend as much time on your workshop desk as your bookshelf.
Score of important F# projects, in a comprehensive introduction that embraces functional and imperative programming, working with data, and using the language’s strengths in diverse problem areas. Although F# is a fairly compact and expressive language, the authors’ emphasis on examples using popular libraries means the book’s 550 pages should be worked through with less skipping than occurs with some tutorials; the introduction emphasises where programmers from different backgrounds should put in the most effort, such as getting the hang of static types in early chapters if you’re a Python or Ruby coder. A little dry at times, but the best intro for .NET newbies.
Score
ESSENTIAL READING: COMMUNITY ESSENTIALS
Running a hackspace, Raspberry Jam, or user group demands soft skills and hard organisation.
Buzzing Communities Author: Richard Millington Publisher: FeverBee Price: £15.00 ISBN: 978-0988359901 feverbee.com Helps you create thriving online communities, with real-life examples, practical tips, and trusted communitybuilding methods.
The Art of Community
Author: Jono Bacon Publisher: O’Reilly Price: £26.50 ISBN: 978-1449312060 oreil.ly/2a0buA4
Comprehensive, open-source-focused guide to all aspects of community building from the Ubuntu community manager.
User Group Leadership Author: Michelle Malcher Publisher: Apress Price: £14.50 ISBN: 978-1484211168 magpi.cc/2a0bZue The essentials for those who have community leadership thrust upon them, or need to build a professional user group.
Social by Social Author: Andy Gibson et al. Publisher: Nesta Lab Price: Free Download ISBN: 978-1906496418 magpi.cc/2a0bro9 (PDF) Freely available guide to putting social technologies to use in community engagement, whose wisdom hasn’t dated.
I Hear You Author: Donny Ebenstein Publisher: Amacom Price: £18.99 ISBN: 978-0814432198 magpi.cc/2a0bZKF If it all goes wrong, skilled mediator and conflict negotiator Donny Ebenstein will get you back on track.
August 2016
85
Community
FEATURE
THE MONTH IN RASPBERRY PI Everything else that happened this month in the world of Raspberry Pi
APOLLO MISSIONS ON THE RASPBERRY PI ou may have seen in the news recently that the full source code for the Apollo space missions has been published to GitHub (magpi.cc/2abpPcb). We understand that the code has been available in some form or another to the public for a few years now (NASA has a mandate to be open in everything it does, including making its code open source where possible); however, at the time of writing, this its first official presence on GitHub. What is the code actually for? Well, it’s the Apollo Guidance Computer (AGC) source code for the Command Module and Lunar Module of Apollo 11; for those not experts on the details of the Apollo missions, the Lunar Module was the moon lander,
Y
Right The Virtual AGC running on the Raspberry Pi: it barely takes up any processing power! It’s designed to look like one of the original AGC control pads
86
August 2016
Above The code for the Apollo missions all on paper, next to the director of software engineering at MIT, Margaret Hamilton. MIT was tasked with developing the software for Apollo
which would detach from the Command Module when the craft got to the moon. The Command Module would then orbit the moon until the Lunar Module returned and redocked, then return to Earth.
Building on Pi
You may be wondering what this has to do with the Raspberry Pi? As you may be aware, the computers on the Apollo missions were not very sophisticated by today’s standards (they were similar to pocket calculators), so the Raspberry Pi should be more than capable of running the code. While still high on the success of the Astro Pi mission, Dave Honess of the Raspberry Pi Foundation found that Ron Burkey (github.com/rburkey2005) had already ported the code to Linux after its initial 2009 release and it was only a small step to get it going on the Raspberry Pi. “It takes about three minutes to build,” Dave told us as he proudly showed off the screenshot of the virtual AGC. This version not only runs Apollo 11’s code, but also Apollo 9, 13, 15, 16, and 17. Number 17 was the last Apollo mission. We’re hoping to bring you some articles about running the code on the Raspberry Pi in the next issue, giving you the chance to see how space technology worked in the Sixties and Seventies. raspberrypi.org/magpi
Community
THIS MONTH IN PI
CROWDFUND THIS! The best crowdfunding hits this month for you to check out…
RASPBERRY SHAKE SEISMOGRAPH kck.st/2a1bCPf
RASPIO GPIO ZERO RULER kck.st/2acCLB3
The RasPiO GPIO rulers have been a great little addition to the collection of Raspberry Pi accessories for a while now. As well as being a genuine 30cm ruler, the original had a layout of the GPIO pins on a Raspberry Pi and some quick bits of information on Rpi.GPIO, the original way to program GPIO pins on the Pi. Now there’s a new version you can Kickstart that will have GPIO Zero functions on there instead. This is the very easy-to-use module for Python that allows you to simply connect components to the Pi, and now you can own a quick reference guide that also explains the GPIO pins to you. It’s already hit its target, so get in there and bag yourself one!
We like this one: a seismograph add-on for your Raspberry Pi which allows you to detect seismic events (in other words: earthquakes). It’s sensitive enough to detect earthquakes of as little as magnitude 2 within a radius of 50 miles; if it’s a magnitude of 4 or higher, you can sense it from up to 300 miles away. Its maker says it will even be able to detect earthquakes further away, but you won’t get as much useful data. With the Raspberry Pi being used for more and more science experiments, we like the idea of a few geologists having one of these at home.
BEST OF THE REST Here are some other great things we saw this month creator.matrix.one
We’re going to look at this more closely in a future issue, but this is an interesting development kit for the Raspberry Pi. It has a lot of different sensors and optimisations that make it great for studying places and taking good photos and video of them.
ADVENTURE VENDING MACHINE magpi.cc/2aaY7PS
This is the first Pi-powered art project we’ve come across on a crowdfunding site while writing this section, but we’re happy it exists. When funded, this vending machine will be set up at Burning Man 2016, and will dispense a quest for you to go on. Once you complete the quest, you’ll get a coin which can be used to open a door and claim a prize. Apparently, it’s inspired by Leonardo da Vinci, and the creator’s aim is for it to inspire people to connect in meaningful ways. raspberrypi.org/magpi
MATRIX CREATOR
farmbot.io
FARMBOT
A Pi-powered robot which makes creating your own little vegetable farm a doddle thanks to superb components and code. It’s still in development and a little expensive, but it’s a really cool idea that may help out small communities.
August 2016
87
Community
INTERVIEW
WIMBLEDON
RASPBERRY JAM: AN INTERVIEW WITH CAT LAMIN Practical tips for setting up a Raspberry Jam from educator and community cheerleader Cat Lamin he MagPi caught up with Cat Lamin after her recent success at the Wimbledon Raspberry Jam to find out why she loves the Raspberry Pi and Raspberry Jams.
T
How did you get involved with the Raspberry Pi community?
CAT LAMIN Cat is a former primary school teacher, computing coordinator, maths teacher, and real-life geek girl. She’s enthusiastic about getting teachers and children interested in coding and computing, any way she can.
88
August 2016
“As a teacher, various people mentioned Raspberry Pi to me, including one of the technicians in school, but I wasn’t brave enough to get one. Eventually, the school bought me four to use with my Code Club and I sat them on a shelf admiring them, but didn’t really know where to start until I went to Picademy and had some training in how to use one! My biggest problem was understanding how to set up the Pi and what to do if something went wrong. I’m now an expert in debugging basic user errors; you’d be amazed at how many parents tell me that their Raspberry Pi isn’t working until I ask them to check the SD card is fully pushed in. “I went to Picademy in July 2014. While there, people were adding me on Twitter and Google+ and were chatting to me about projects and ideas. I got really excited about the teaching possibilities of Pi, but was very aware of my own limitations. So I started a blog to keep track of my many
errors, and to let other teachers know what they could do in their classroom. I then came up with the idea of Coding Evenings as a way to further allow sharing between teachers and community members, which meant that lots of amazing community people got involved.” What was your first Raspberry Jam? “I think the first actual Jam I attended was the third birthday party in Cambridge, when I accidentally volunteered to help out as a Jam maker, with no idea what I was letting myself in for. I went to Egham Jam when Albert Hickey asked Carrie Anne Philbin to put out a plea for a teacher to head over and do a talk about Picademy and using Pi in school. I’ve been to CamJam, Egham Jam, Peterborough Jam, and the 4th birthday party, before organising my own Jam in Wimbledon with help from Albert Hickey.” How did organising that first Jam come about? “Albert, who organises the Egham Jam, approached me to see if I was interested in helping him run the Jam in Wimbledon; he had been offered a venue and wanted me to be involved from the start. Wimbledon is close to the school
raspberrypi.org/magpi
Community
CAT LAMIN – WIMBLEDON RASPBERRY JAM Ozzy and Jasper with their home-made Astro Pi
Whack-A-Pi and its high scores
I teach in, and I knew this would be an excellent opportunity to give some of the children from school the opportunity to help develop their passions outside of school. What I really enjoyed about the Jam was seeing all of the families there, and several parents asked if we could let their children’s school know about the next one because they were keen to bring more families down! “I was really lucky with Wimbledon Jam as so many people offered their help almost straight
server, and people seemed to enjoy the content. Workshops included Crumble Bot, CEEDuniverse and micro:bit, as well as Minecraft and Sense HAT. The workshops had varying numbers of signups, but all of them ended up being full because all of the children who attended were really keen to get involved and try things out. For show-and-tell we had Brian Corteil’s Micro PiNoon, Carl Monk’s Whack-A-Pi, and then Albert’s Pi-controlled crane. The children (and adults) had so
I was really lucky with Wimbledon Jam as so many people offered their help away, and it was great having Ben [Nuttall] along as a representative from the Raspberry Pi Foundation: it added a sort of official stamp of approval to the day.” What’s your favourite thing about a Raspberry Jam? “I really like having workshops, talks, and show-and-tells going on and we were really lucky that loads of people were interested in doing everything. Our talks ranged from getting girls to code, to teaching, to using RPi as a print
raspberrypi.org/magpi
much fun playing with the games, especially as Redfern and 4Tronix kindly donated prizes for some of them! One of my highlights from the day was watching the mums creep over to Whack-A-Pi and sneak a go while their children were taking part in workshops: it was very funny! “We had 80 people sign up to the Jam and around 60 turned up, including some ‘walk ins’. Twelve of the children attending the Jam were from my own school and there were probably another twelve who came with their parents.”
SETTING UP YOUR OWN JAM: CAT’S TOP TIPS 01. ATTEND ANOTHER JAM Attend a couple of Jams somewhere else to get a feel for them – Jams are all slightly different and you need to decide what you’re interested in doing.
02. PEOPLE It’s really important that you have a reliable core of people to help out, so I was really lucky that volunteers from my Coding Evenings agreed to come along to help out.
03. ASK FOR SUPPORT Don’t be afraid to ask for support from other community members and even businesses. In the past, I’ve found that a lot of the companies involved in selling Pi-based products are really keen to support events, and very generous with their donations of stickers and prizes.
04. CHIN UP Don’t be disheartened if things don’t come together easily. I’ve been organising my second Jam in Truro in Cornwall and compared to Wimbledon, it’s felt like an uphill struggle. I’ve had support from the Foundation and from the Code Club coordinator in the South West, and I’m feeling much more happier about it now.
August 2016
89
Community
3
EVENTS
RASPBERRY PI DAY AT HOPEWORKS Camden, NJ, USA
RASPBERRY JAM EVENT CALENDAR Find out what community-organised, Raspberry Pi-themed events are happening near you…
PUT YOUR EVENT ON THE MAP Want to add your get-together? List it here:
raspberrypi.org/jam/add THE FIRST TAUNTON RASPBERRY JAM
RASPBERRY PI DAY AT HOPEWORKS
MINECRAFT PIBRARY JAM
Taunton, UK magpi.cc/2a8P41N Beginners and experts welcome at the first Taunton Jam, covering computing, robots, and more.
NJ, USA magpi.cc/2a6XvaS Learn how to set up and use the Raspberry Pi thanks to mentors from around the region.
Coventry, UK magpi.cc/2a6XWlS A Minecraft Library Jam where you’ll learn to code with Python and Java with Minecraft Pi.
EAGLE LABS RASPBERRY JAM
SOUTHEND RASPBERRY JAM 10
RASPBERRY JAM PRESTON
Campus, Birmingham, UK magpi.cc/2a8OPUo Raspberry Pi enthusiasts of all ages are welcome to explore show-andtell projects and talks about the Pi.
Southend-on-Sea, UK magpi.cc/2a8Q3PD A Jam for those who are interested in Raspberry Pi, Astro Pi, Sonic Pi, Code Club, and much more!
Preston, UK magpi.cc/2a6XXG7 Learn, create, and share the potential of the Raspberry Pi at a family-friendly event.
When: Saturday 6 August Where: Taunton Library,
When: Saturday 13 August Where: Innovation Birmingham
90
August 2016
When: Saturday 13 August Where: Camden Colab, Camden,
When: Saturday 20 August Where: Hive Enterprise Centre,
When: Thursday 25 August Where: Central Library,
When: Monday 5 September Where: Media Innovation Studio,
raspberrypi.org/magpi
Community
EVENTS
1
2
THE FIRST TAUNTON RASPBERRY JAM Taunton, UK
6
RASPBERRY JAM PRESTON Preston, UK
EAGLE LABS RASPBERRY JAM Birmingham, UK
RASPBERRY JAM LEEDS
When: Wednesday 7 September Where: Swallow Hill Community College, Leeds, UK magpi.cc/2a6Yhoz Everyone is invited for a couple of hours of computing fun, talks, demonstrations, and hands-on workshops.
CAMJAM – CAMBRIDGE RASPBERRY JAM
When: Saturday 17 September Where: Institute of Astronomy, Cambridge, UK magpi.cc/2a6Vr2W The famous CamJam is back this September, with plenty of people showing off projects and giving talks.
raspberrypi.org/magpi
7
RASPBERRY JAM LEEDS
4
SOUTHEND RASPBERRY JAM 10
8
CAMJAM – CAMBRIDGE RASPBERRY JAM
5
MINECRAFT PIBRARY JAM
Leeds, UK
Southend-on-Sea, UK
Cambridge, UK
Coventry, UK
DON’T CAMJAM – CAMBRIDGE MISS: RASPBERRY JAM When: Saturday 17 September Where: Institute of Astronomy, Cambridge, UK Tickets are now on sale for the insanely popular CamJam, the Raspberry Jam from the Raspberry Pi’s home town of Cambridge. The CamJam is back at the Institute of Astronomy and will have the usual selection of dealers’ tables, people with projects to show, wonderful workshops, and plenty of talks from excellent people within the Raspberry Pi community. Get your tickets fast before they sell out! You can find out more information on the event’s page here: magpi.cc/2a6Vr2W
August 2016
91
Community
YOUR LETTERS
YOUR LETTERS Being social
I’m a recent convert to the Raspberry Pi and I buy the magazine from the shop every month. I’ve never really had the chance to get into computing, but I’m having a lot of fun learning! Who said you couldn’t teach an old dog new tricks? I’ve also recently started to get into social media; you don’t find many people in their seventies on Twitter, I bet! I was wondering where I can follow you online?
Barry Harper
We’re on several social channels around the web! On Twitter you can find us with the user name TheMagP1 (twitter.com/TheMagP1), which is where we post a lot of updates about what’s going on with the mag and the Raspberry Pi community. We’re also on Facebook (facebook.com/MagPiMagazine) and Google+ (plus.google.com/+TheMagPi), if you prefer to use other forms of social media. We hope you enjoy the mag and get to learn much more! Above A lot of people like the mission patches that came with issue 47 – send us your pictures of you wearing them!
Astro Pi patch
I loved issue 47 of The MagPi as I’ve been following along with Astro Pi very closely since the beginning. To see some of the results (especially the pictures from the hatch window!) was very exciting, and I’m glad to hear there will be more Astro Pi in some capacity in the future. I was also delighted to see the poster and patch with the issue. I’m trying to find a space to put the poster, which I can do, but I must confess I don’t actually know how to sew the patch on. Do you have any tips for a sewing beginner to attach it to my bag?
Jake
Thank you for your kind words about the feature, Jake. We’re really looking forward to covering more Astro Pi missions in the future. As for the patch, it’s actually quite simple! You can do it with just a sewing needle and some thread. We like to pin the patch in place so it stays in position; you can easily use a safety pin to do this. We also thread our needle by getting a length of thread and folding it in half; it then goes through the eye a little better, and you can tie it around the needle by just looping it over the point. No knots required! A few stitches around the edge is all you really need to do before tying it in place. There’s plenty of videos on YouTube (like this one: youtu.be/WzL1AEXAd7Y) that can help you out.
92
August 2016
International Pi
Hi there! I know you’re a UK-based company, but I keep hearing about the great events that are put on over there and I was wondering if they’d ever come to the States? We have a budding Raspberry Pi community here and it would be great if we could attend some Jams, Picademies, and such in America. Maybe even an Astro Pi contest here as well – we do have NASA, after all!
James Alex
We do have a small but very dedicated American team working on promoting the Raspberry Pi in America, James. You can see them at Maker Faires, and recently Matt from the American team visited a big education conference called ISTE. At the time of writing, he’s just sorting out a Picademy in America and it’s not the first one he’s done either; keep an eye on the blogs and Twitter feeds in the future to find out when new ones are organised. As for Jams, these are all community-run! There are plenty that are held in America and if you take a look at our Events page in this issue, there may be some you can attend. Failing that, a full list of Jams can be found online here: magpi.cc/28Nxeff Raspberry Pi is definitely in America!
raspberrypi.org/magpi
Community
YOUR LETTERS
FROM THE FORUM: The Raspberry Pi Forum is a hotbed of conversations and problem-solving for the community - join in via raspberrypi.org/forums
I C THE PROBLEM
n the article An Introduction to C, Simon Long presents a Hello World program in C. Unfortunately, Simon presents the main function with a return type of void, which as far as I know has never been the correct signature for main in any of the C standards. While a number of compilers will accept this code and produce the expected result, I think it would be better to have an introduction to C article that presents the signature of main that conforms to the standard.
I
AndyD
We asked Simon Long about it and he had this to say:
One of Zero
Since November of last year, I’ve picked up a few Raspberry Pi Zeros when I’ve noticed them on sale and needed more. I know they’re still quite popular, but I was wondering when the one Zero per person rule might start to be lifted? I have a project coming up that could definitely use a few Raspberry Pi Zeros if they were around! Regards,
“In the example in the first episode, I have shown a simplified prototype for main. For the record, in 25 years of writing C professionally, I have yet to come across a compiler which considers main returning void to be an error, and for a lot of real-world C programming (embedded systems, standalone PC applications etc.), nothing is ever done with the returned value anyway. “Had I decided to show the strictly correct prototype for main in the first episode of an introduction to C, I would have had to explain what an int is, which would mean I would have had to explain variable types. I’d also have had to explain what an array of string pointers is (argv) and by the time I’d done that, I would a) have used up all the allotted words, and b) scared off every beginner who was reading it.” Apparently, there will be more on this in the future C tutorials in the magazine. We hope this clears up any problems!
WRITE TO US Have you got something you’d like to say? Get in touch via magpi@raspberrypi.org or on The MagPi section of the forum at: raspberrypi.org/forums
Terry P
We’ve seen a few stores that have actually begun to lift their one Zero per customer rule, but it seems it’s not everywhere yet. The Raspberry Pi Zero is still selling out basically as fast as they can be made, so it’s a tricky thing to predict – it’s always been more up to the markets than anything else! As more stores lift the restriction, though, others will follow suit and by then the supply will probably be enough to meet demand.
raspberrypi.org/magpi
August 2016
93
READ US ANYWHERE SAVE
25%
DO SCIENCE WITH THE
SENSE HAT
with a Newsstand subscription (limited time offer)
WITH OUR NEW
ESSENTIALS E-BOOK AVAILABLE ON THE MAGPI APP!
FREE: DOWNLOAD ALL 30 ORIGINAL ISSUES
Subscribe from
£2.29 £26.99 or
Magazine
Available now
for smartphones & tablets
rolling subscription
full year subscription
Download it today – it’s free! Get all 30 legacy issues free Instant downloads every month Fast rendering performance Live links & interactivity
94
August 2016
raspberrypi.org/magpi
TO CELEBRATE THEIR 4TH BIRTHDAY Pimoroni are giving away a raft of their best boards to one lucky MagPi reader
In association with
Review SHOP.PIMORONI.COM
QUESTION: WHO WAS CHINA’S MOST SUCCESSFUL PIRATE? First prize contains: 4th Birthday Pibow Display-O-Tron HAT Drum HAT Explorer HAT Pro Piano HAT Propeller HAT Skywriter HAT Unicorn HAT ESP8266 IoT pHAT Enviro pHAT Scroll pHAT pHAT DAC Zero LiPO Mini Black HAT Hack3r
Answer by 25 August for your chance to win! Simply email competition@raspberrypi.org with your name, address, and answer!
PLUS 10 RUNNER-UP PRIZES OF A BLINKT LED STRIP
Terms & Conditions Competition closes 25 August.
raspberrypi.org/magpi
August 2016
95
Column
THE FINAL WORD
MATT RICHARDSON
THE UNSUNG HERO
Matt is Raspberry Pi’s US-based product evangelist. Before that, he was co-author of Getting Started with Raspberry Pi and a contributing editor at Make: magazine.
Matt Richardson shares how software support is critical to Raspberry Pi’s success s Raspberry Pi enthusiasts, we tend to focus a lot on hardware. When a new or updated board is released, it garners a lot of attention and excitement. On one hand, that’s sensible because Raspberry Pi is a leader in pushing the boundaries of affordable hardware. On the other hand, it tends to overshadow the fact that strong software support makes an enormous contribution to Raspberry Pi’s success in education, hobby, and industrial markets. Because of that, I want to take the opportunity this month to highlight how important software is for Raspberry Pi. Whether you’re using our computer as a desktop replacement, a project platform, or a learning tool, you depend on an enormous amount of software built on top of the hardware. From the foundation of the Linux kernel, all the way up to the graphical user interface of the application you’re using, you rely on the work of many people who have spent countless hours designing, developing, and testing software. The look and feel of the desktop environment in Raspbian serves as a good signal of the progress being made to the software made specifically for Raspberry Pi. I encourage you to compare the early versions of Raspbian’s desktop environment to what you get when you download Raspbian today. Many little tweaks are made with each release, and they’ve really built up to make a huge difference in the user experience.
A
Skin deep
And keep in mind that’s only considering the desktop interface of Raspbian. The improvements to the operating system under the hood go well beyond what you might notice on screen. For Raspberry Pi, there’s been updates for firmware, more functionality, and improved hardware drivers. All of this is in addition to
96
August 2016
the ongoing improvements to the Linux kernel for all supported platforms. For those of us who are hobbyists, we have access to so many code libraries contributed by developers, so that we can create things easily with Raspberry Pi in a ton of different programming languages. As you probably know, the power of Raspberry Pi lies in its GPIO pins which make it perfect for physical computing projects, much like the ones you find in the pages of The MagPi. New Python libraries like GPIO Zero make it even easier than ever to explore physical computing. What used to take four lines of code is boiled down to just LED.blink(), for example. Not all software that helps us was made to run on Raspberry Pi directly. Take, for instance, Etcher, a wonderful program from the team at Resin.io. Etcher (etcher.io) is the easiest SD card flasher I have ever used, and takes a lot of guesswork out of flashing SD cards with Raspbian or any other operating system. Those of us who write tutorials are especially happy about this; since Etcher is cross-platform, you don’t need to have a separate set of instructions for people running Windows, Mac, and Linux. In addition, its well-designed graphical interface is a sight for sore eyes, especially for those of us who have been using command line tools for SD card flashing. The list of amazing software that supports Raspberry Pi could go on for pages, but I only have limited space here. So I’ll leave you with my favourite point about Raspberry Pi’s strong software support. When you get a Raspberry Pi today and download Raspbian, you can rest assured that, because of the rapidly improving software support, it will only get better with age. You certainly can’t say that about everything you buy.
raspberrypi.org/magpi
LEARN TO CODE WITH SCRATCH Get started today for
MAKE GAMES AND APPLICATIONS WITH YOUR
just £2.99 / $3.99
Raspberry Pi
From the makers of the official Raspberry Pi magazine
ESSENTIALS
Find it on [ LEARN TO CODE WITH SCRATCH ]
ESSENTIALS
Tabs: Click the tabs to choose between changing a sprite’s scripts, costumes, or sounds
ESSENTIALS
[ CHAPTER
digital app
ONE ]
The Blocks Palette: This is where you find the commands to control your sprites. Click the rounded buttons at the top to switch between the different types of blocks
ESSENTIALS
Scripts Area: Assemble your programs here by dragging blocks in from the Blocks Palette and joining them together
Fancy yourself as Disney or Miyamoto? Whether your inspiration is Mickey Mouse or Mario, Scratch helps you to bring your creations to life…
The Sprite List: Select your sprites here, so you can change their scripts or costumes. Click the Stage in the Sprite List to add scripts to it or change its background
The Stage: Watch your sprites move and interact here
G
LEARN TO
CODE
WITH
SCRATCH MAKE SIMPLE GAMES AND APPLICATIONS
ON YOUR
[ KEEP UP TO DATE ]
Written by
Get the latest version of Scratch by updating your operating system using: sudo aptget update && sudo apt-get upgrade
4
[ Chapter One ]
Raspberry Pi The MagPi Team
5
magpi.cc/Scratch-book 98
August 2016
raspberrypi.org/magpi | https://issuu.com/fransko/docs/magpi48 | CC-MAIN-2017-34 | refinedweb | 34,384 | 60.24 |
I'll start by saying this is an assignment for a class - so I'm not looking for a complete solution - just some guidance on a few questions. I am a C novice at best.
My assignment is to write a program using UDP/IP to send a file from one machine to another within our network. I've got the mechanics of sending a char message from one machine to the next so I'm working on figuring out how to read, parse, add the header and CRC information prior to sending.
What I have so far....
File: send.c.Code:
#include <sys/types.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <stdio.h>
#define MSG "This is a test message!"
#define BUFMAX 100
//Command Arguments:
// [0] - send
// [1] - destination machine
// [2] - destination port number
// [3] - file to be sent
int main(int argc, char* argv[]){
int sk;
char buf[BUFMAX];
FILE *fp;
fp = fopen(argv[3],"r");
fread(buf, BUFMAX, 1, fp);
// My question is in this area...how do I get binary values?
int k;
for(k=0;k<BUFMAX;k++){
printf("The buffer contains %d\n",(char *)buf[k]);
}
struct sockaddr_in remote;
struct hostent *hp;
sk = socket(AF_INET, SOCK_DGRAM, 0);
remote.sin_family = AF_INET;
hp = gethostbyname(argv[1]);
if(hp==NULL){
printf("Can't find hostname. %s\n", argv[1]);
exit(1);
}
bcopy(hp->h_addr, &remote.sin_addr.s_addr, hp->h_length);
remote.sin_port = ntohs(atoi(argv[2]));
dlsendto(sk, MSG, strlen(MSG)+1, 0, &remote,
sizeof(remote),0);
read(sk,buf,BUFMAX);
printf("%s\n", buf);
close(sk);
}
I can read the file, but when I output the values I'm getting characters instead of binary. Is there another way to display the data so I can validate what's being passed? I've spent a great deal of time looking through articles on Binary files, text to binary, and can't seem to find an example that explains what I'm doing wrong.
Any guidance would be great. | https://cboard.cprogramming.com/networking-device-communication/135947-file-transfer-using-udp-ip-printable-thread.html | CC-MAIN-2017-22 | refinedweb | 340 | 67.45 |
Red Hat Bugzilla – Full Text Bug Listing
Description of problem:
With the latest 3.12.6 and 3.12.8 kernels, some Bluetooth RFCOMM connections fail with "Transport endpoint not connected" after a Bluetooth connection is established. Rebooting into a 3.11.10 kernel makes the problem go away.
This is regression in the kernel Bluetooth support.
This occurs with avrdude and pyserial code that connects to an Arduino over Bluetooth HC-05 adapter. Cutecom and Minicom work OK, but the connection takes many (3-5) seconds to establish, when in the past it was nearly instantaneous. The same pyserial and avrdude code works on older kernels and on Windows.
Version-Release number of selected component (if applicable):
3.12.6, 3.12.8
How reproducible:
Always
Steps to Reproduce:
1. Pair an RFCOMM device
2. rfcomm bind 0
3. Run the following Python code (requires pyserial)
#!/usr/bin/env python
from serial import Serial, SerialException
import time
try:
# Do or do not; there is no try (here we "do not" under 3.12.8)
print "Connecting..."
s = Serial("/dev/rfcomm0", 115200, timeout=.1)
print "Writing..."
s.write("foo")
print "Reading..."
s.read()
except SerialException, ex:
print ex
try:
# Repeat Yoda's words of wisdom
print "Connecting..."
s = Serial("/dev/rfcomm0", 115200, timeout=.1)
time.sleep(5) # Magic kernel-3.12.8 RFCOMM incantation
print "Writing..."
s.write("foo")
print "Reading..."
s.read()
except SerialException, ex:
print ex
Actual results:
First one fails, resulting in "write failed: [Errno 107] Transport endpoint is not connected"; the second one succeeds.
Expected results:
Both succeed. The need for the 5 second sleep is bogus. As soon as a file descriptor is returned, it should be immediately usable.
Additional info:
Please try the patch proposed in this thread:
Patch.13.5-100.fc19. Please test this kernel update and let us know if you issue has been resolved or if it is still present with the newer kernel.
If you experience different issues, please open a new bug report for those.
This is not a stale bug. This problem still exists in the 3.13.5 kernel. I still must use a 3.11 kernel if I wish to use Bluetooth SPP.
Still exists in kernel-3.13.6-100.fc19.x86_64
Anyone else paying attention to this problem except for me? Broken Bluetooth SPP/RFCOMM support for 2 months seems like it should be a bigger deal.
You cannot send ENOTCONN for a non-blocking call. The proper response is EAGAIN or EWOULDBLOCK.
***********.14.4-100.
This appears to be fixed in 3.14.4-100.fc19. Can someone point to a Kernel commit where this was fixed? I've been following the Bluetooth kernel commits and do not see one explicitly for. | https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=1060457 | CC-MAIN-2017-09 | refinedweb | 460 | 70.39 |
Created on 2003-08-13 23:21 by gregory.p.smith, last changed 2004-03-16 07:59 by gregory.p.smith. This issue is now closed.
In the old bsddb module a bsddb.btopen(..) database
would return the next available key+value on a
set_location(key) call when key did not exist in the
database. In python 2.3 (pybsddb) it raises an
exception and leaves the cursor at an unknown position
in the database.
[reported by Anthony McDonaly on comp.lang.python]
>>> import os
>>> import bsddb
>>> os.chdir('/tmp')
>>> my_data = bsddb.btopen('testing', 'c')
>>> for i in range(10):
... if i == 5:
... pass
... else:
... my_data['%d'%i] = '%d'%(i*i)
...
>>> my_data.keys()
['0', '1', '2', '3', '4', '6', '7', '8', '9']
>>> my_data.sync()
>>> my_data.set_location('5')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File
"/space/python-2.3/lib/python2.3/bsddb/__init__.py",
line 117, in set_location
return self.dbc.set(key)
_bsddb.DBNotFoundError: (-30991, 'DB_NOTFOUND: No
matching key/data pair found')
Correct behaviour would have been to return ('6', '36')
Logged In: YES
user_id=413
Yes this is a bug. The set_location() method should have been calling set_range() rather than set() internally.
Fixing that exposed another bug: set_range() would crash when looking up a key that exists in hash or rn databases.
Both bugs have been fixed to fix this one in python 2.4 & pybsddb CVS.
This bugfix should go into python 2.3.4.
Logged In: YES
user_id=413
committed fix to head and release23-maint along with an associated fix for set_range where it could free() memory that it doesn't own on non B-Tree databases. | http://bugs.python.org/issue788421 | crawl-003 | refinedweb | 277 | 69.48 |
@lewtun Many Thanks for your prompt response. Works fine now
Quick!
hi @hennesseeee, i think you need to add the git repo via the
dependency_links argument of setup tools:
There is no script that does that directly, no (though having a config file is a very minimal requirement).
Hi @lewtun, thanks for your response and suggestion. I tried adding the following line
dependency_links=[‘’],
in the
setup.py along with
gnnbench in requirements (I also tried
gnnbench==0.1.0) but both give me the following
ERROR: Could not find a version that satisfies the requirement gnnbench (from grapht==0.0.1) (from versions: none)
ERROR: No matching distribution found for gnnbench (from grapht==0.0.1)
As a workaround I’ve changed
requirements = cfg.get(‘requirements’,’’).split()
to
requirements = cfg.get(‘requirements’,’’).split() + [‘gnnbench @’,]
which seems to work. It looks like with later versions of pip you should be able to have a single string in the requirements which is what my workaround is based (). The parser which parses
settings.ini seems to have issues if I put
gnnbench @ directly in requirements
Update: this fix doesn’t quite work… its install the base gnnbench package but not the submodules…
Thanks
I figured it out:
create_config('nbdev', user='muellerzr', path='.', cfg_name='settings.ini') cfg = Config(cfg_name='settings.ini') from nbdev.export import reset_nbdev_module, notebook2script reset_nbdev_module() notebook2script('testingNBDev.ipynb')
Hello!?
Hello!
hey @hennesseeee, i’m not sure whether this will solve your problem but have you tried upgrading setuptools as follows:
pip install setuptools -U
I’ve sometimes found that this was the main source of errors with pip install
@Andreas_Daiminger Well yes, you need to save the notebooks for the changes to take effect, the same way you would need to save your python files. Note that in command mode, you just need to hit
S to save.
@tinoka You can use the patch decorator from fastcore to achieve this.
This is remarkable. As an extension we should be able to export few selected cells from a single notebook into a script using
#export? It would be wonderful if we could put in
notebook2script() as the last cell of a notebook with necessary imports (but without all the other bells and whistles and repo cloning) and we get a .py with only those export cells as output.
That’s essentially what happens, we need a configuration file to note where everything is being done to but otherwise that’s exactly what happens
See my solution a few posts down
Tried with your approach but could not get it to work. However, here is a similar effort. I now have the code listed in the link in my project directory and then I call the
notebook2script function from the
export_nb_cells.py file using
from export_nb_cells import notebook2script.
This is a rather neat solution. Will explore this approach a bit more; conceptually, I find the ability to experiment using a notebook and once I figure out how to do things, export the selected cells from the notebook to a script very powerful.
I made sure my setuptools and pip were up to date. I think it may actually be a issue with the package because I can’t seem to install it from command line as I normally do (I have to git clone and then install from the downloaded repo).
Regarding other packages installed from github such as pygsp (which is on pypi but the github version is more up to date) I’ve managed to get it to install with the nbdev package (and pass CI) if I edit
setup.py and change the line
requirements = cfg.get(‘requirements’,’’).split()
to
requirements = cfg.get(‘requirements’,’’).split() + [‘pygsp @’]
I’m not sure if there is a cleaner was to do this by just editing
settings.ini.
Thanks for your help!
I’m getting a similar error to Marco. I’m trying to run tests after installing fastai2. Before installing, I delete my fastai2 miniconda directory, restart the terminal, and check
pip list and
conda list in my base conda environment to confirm nbdev, fastai2, and fastcore aren’t there. I tried a number of combinations (for nbdev, fastai2, and fastcore) of editable and packaged (conda/pip) installs. All tests fail because
nbdev_test_nbs can’t find
fastai2:
ModuleNotFoundError: No module named 'fastai2'
What I did to check this as cleanly as possible was:
git clone cd fastai2 conda env create -f environment.yml source activate fastai2 pip install -e '.[dev]' nbdev_test_nbs
Looks like because of MacOS’s zsh, the
.[dev] has to be in quotes.
@sgugger does this seem familiar? I’ll update if I get it working.
update:
nbdev_test_nbs works inside the nbdev root folder, after running a dev-install, via:
pip install packaging git clone pip install -e nbdev
«cd to /nbdev»
nbdev_test_nbs
So…
nbdev_test_nbs didn’t work in /fastai2 after the above, and I wasn’t able to run it again in the /nbdev root folder afterwards, though my terminal history seems to point there. It did successfull run in /nbdev/nbdev. Are the tests supposed to only work there?
update: I tried running
nbdev_test_nbs in /fastai2 with a fresh git-clone and package-install of everything in a new conda environment – no luck. Maybe… the tests aren’t supposed to be working yet? But that’d mean the nbdev I have isn’t working at all if it can’t find fastai2 … guess I’ll find out next.
No the tests are supposed to work everywhere. It looks like you have a problem in your enviromnent so I’d suggest creating a new one and reinstalling.
So… I got it working, but the way to do it was very weird. I’ll explain it here in case anyone else with a long-term conda system on MacOS has similar problems.
I’ve been reinstalling fresh conda environments before each attempt, before my last post. I later tried deleting and reinstalling Miniconda itself twice. I finally got the tests working (it’s able to import fastai2) by deleting every hidden file/folder I could find relating to jupyter, conda, or ipython; ontop of a fresh conda reinstall.
Now all
nbdev_test_nbs tests pass in /nbdev and /fastcore. 17 notebooks fail in /fastai2 → these seem more ’normal’: the result of CUDA tests on a cpu-only system, or maybe import failures from wrong versions (“
No module named 'wandb’”, “
cannot import name 'PILLOW_VERSION' from 'PIL’”, and “
Torch not compiled with CUDA enabled” for example).
This is with a total clean install of miniconda — my base env doesn’t even have Jupyter on it yet. To get it working (along with a bunch of restarts to be on the safe side) what I did was:
- delete my Miniconda root directory
- delete every hidden folder and file I could find in my Home directory that seemed relevant:
- .jupyter; anything with ‘ipython’, ‘conda’, etc.
I already commented out any conda-related lines in my .zshrc (MacOS equivalent of .bashrc or .bash_profile). Then redownload Miniconda, verify, install, and (since it isn’t configured to write to Zsh) copy its conda-initialization lines from ~/.bash_profile to ~/.zshrc. Checking
pip list and
conda list after restarting Terminal, the base conda env only has a handful of packages.
Then I installed fastai2 to its own env, along with fastcore. I don’t remember if I used packaged or editable, I think I tested both.
nbdev_test_nbs worked, with the new errors mentioned above; I dev-installed nbdev and its own tests passed.
This method deletes all Jupyter customizations and conda environments.
Where I think the issue was:
I still had errors even after deleting and reinstalling conda on my system. I noticed in Jupyter there were old ‘named’ kernels available in the dropdown menu. Kernels for deleted environments. This likely meant ipython/ipykernel had a config file pointing there, and was being used by new Jupyter & etc installations. I only had one for “fastai” and a couple unrelated kernels like Scala … so I don’t know how this tripped up
nbdev_test_nbs into being unable to import fastai2.
What contributed confusion early in this process was not knowing how “developer”/“editable” pip/conda/python installs work. I’m used to the pre-0.7 course-v1 workflow of freely editing files in a fastai folder; and I wasn’t sure if dev-installs went to the base/system python ‘env’ or to the active env (it’s the active conda env). Also, MacOS’s switch to ZShell as its default from Bash means that the
pip install -e .[dev] line doesn’t work → the brackets must be in quotes (double or single):
pip install -e '.[dev]', and I didn’t know the significance of “.[dev]” since
pip install -e . appaeared to run just fine — so I thought “.[dev]” could’ve been some informal programmer shorthand. Not the case.
Hopefully this helps anyone facing a similar problem.
Note: looks like the new PIL (7.0.0) doesn’t work with Torchvision, as of 3 January, → but the PyTorch devs are aware and plan to update (PyTorch and Torchvision) to fix it this week (the PIL & PILLOW_VERSION error). Until then this:
pip install "pillow<7" seems to work.
edit: is fastai2 currently not meant to work on CPU-only systems? After installing a few dependencies (seemingly not taken care of by the environment.yml fastai2 env repo install):
pip install wandb conda install -c fastai -c pytorch fastai pip install tensorboard
the only tests that fail, seem to all be CUDA related. Only 6 fail now:
- 00_torch_core.ipynb
- 13_learner.ipynb
- 18_callback.fp16.ipynb
- 22_tutorial.imagenette.ipynb
- 32_text.models.awdlstm.ipynb
- 45_collab.ipynb
Interesting, since Apple has dropped all CUDA support… I wonder if it’s possible to install a cuda-gpu version of PyTorch on MacOS just for code compatibility. I’ll update here if I try it out.
Hi, this seems great so far! I was just wondering:
- How do you get the [source] link to show up beside your classes and functions in the docs? I tried comparing my notebook file with the tutorial one but could not find how it was done.
- Is there a way to customize the style the way that the nbdev docs are done (the blue lines between headers etc.)
Thanks
I’ve been using nbdev since the release on my laptop, and so far it’s been great.
I’ve just pulled my repo on another computer after installing nbdev as well, and it seems the behaviour of nbdev_build_lib has changed.
in my notebooks i have a bunch of
import mylib.foo as foo,
mylib being the lib_name from settings.ini
on my laptop, (with nbdev installed very early at the release, and not updated since), the generated code is stayed untouch as
import mylib.foo as foo and it works great
on the fresh install, the code is modified to
import .foo as foo and my python (3.7) is complaining with a
SyntaxError: invalid syntax error message. It doesn’t like the .foo part.
Did you guys change how the imports are processed? How can I solve this syntax error?
This problem seems to have been introduced with fix #13 | https://forums.fast.ai/t/nbdev-discussion/59145/163 | CC-MAIN-2020-34 | refinedweb | 1,866 | 64.41 |
25 June 2012 05:00 [Source: ICIS news]
SINGAPORE (ICIS)--Here is Monday’s midday ?xml:namespace>
CRUDE: Aug WTI $80.25/bbl, up 49 cents/bbl; Aug BRENT $91.35/bbl, up 37 cents/bbl
Crude futures strengthened in Asian morning trade, supported by reduced output in the US Gulf amid concerns over an approaching storm. Interest was focussed on an upcoming summit of eurozone leaders amid expectations of moves towards a closer monetary union, and new measures to stimulate growth.
NAPHTHA: $706.00-708.00/tonne CFR Japan, up $9.50/tonne
Open-spec naphtha prices partially recovered from last week, after landing at a 21-month low on 22 June, buoyed by overnight firmer crude futures.
BENZENE: $1,015-1,035/tonne FOB
Prices firmed in tandem with firmer crude futures. Bids for August loading were at $1,000/tonne FOB
TOLUENE: $980-990/tonne FOB
Market activity was limited. Bids for August and September were heard at $970/tonne FOB
ETHYLENE: $920-950/tonne CFR NE Asia, up $20/tonne at the low end
Selling ideas were as high as $1,000/tonne CFR NE Asia for July arrival due to the ongoing
PROPYLENE: $1,240-1,260/tonne CFR NE Asia, stable
Market players stayed on the sidelines amid limited July supply and following a flurry of deals last week. No firm offers and bids were | http://www.icis.com/Articles/2012/06/25/9572224/noon-snapshot-asia-markets-summary.html | CC-MAIN-2015-11 | refinedweb | 231 | 61.97 |
This article aims to introduce container and Iterator of C++ Standard Template Library (STL) with the help of the std::vector. While covering container, this article covers some functions usage of the std::vector and while covering Iterator, this article covers predicates and function pointers.
Containers are objects that hold other objects. There are two types of containers:
1. Sequence container: It stores and allows data retrieval sequentially. E.g. the vector class defines a dynamic array, deque creates a double-ended queue, and list provides a linear list.
2. Associative container: It allows efficient retrieval of values based on keys. For example, a map provides access to values with unique keys. Thus, a map stores a key/value pair and allows a value to be retrieved given its key.
Container uses allocator and predicates.
Allocators manage memory. Each container has defined for it an allocator. Allocators manage memory allocation for a container. The default allocator is an object of class allocator, but you can define your own allocators if needed by specialized applications. For most uses, the default allocator is sufficient.
Some containers use a special type of function called a predicate. There are two variations of predicates: unary and binary. A unary predicate takes one argument, while a binary predicate has two. These functions return true/false results. But the precise conditions that make them return true or false are defined by you.
Bitset <?xml:namespace prefix = o /><o:p>
A set of bits. <o:p>
<bitset> <o:p>
Deque <o:p>
A double-ended queue. <o:p>
<deque> <o:p>
List <o:p>
A linear list. <o:p>
<list> <o:p>
Map <o:p>
Stores key/value pairs in which each key is associated with only one value. <o:p>
<map> <o:p>
Multimap <o:p>
Stores key/value pairs in which one key may be associated with two or more values. <o:p>
Multiset <o:p>
A set in which each element is not necessarily unique. <o:p>
<set> <o:p>
priority_queue <o:p>
A priority queue. <o:p>
<queue> <o:p>
Queue <o:p>
A normal queue. <o:p>
Set <o:p>
A set in which each element is unique. <o:p>
Stack <o:p>
A stack. <o:p>
<stack> <o:p>
Vector <o:p>
A dynamic array. <o:p>
<vector> <o:p>
Iterator gives ability to cycle through the elements stored in the containers. They are more or less pointers. There are five types of Iterators:
1. Random Access : Store and retrieve values. Elements may be accessed randomly.
2. Bidirectional : Store and retrieve values. Forward and backward moving.
3. Forward : Store and retrieve values. Forward moving only.
4. Input : Only retrieve, but not store values. Forward moving only.
5. Output : Only store, but not retrieve values. Forward moving only.
Apart from this STL also supports Reverse iterator. They are either bidirectional or random access. They move in the reverse direction. E.g. if it is pointing to last element then incrementing the pointer would point to the second last element.
Let’s take an example of vector template class to understand container, iterator and predicate:
Vector is a container class and it implements dynamic array which grows as per need. The template specification for vector is shown here:
template <class T, class Aallocator = Allocator <T>>class vector
In order to use vector, you need to include <vector> header file.The vector is part of the std namespace, so you need to qualify the name. This can be accomplished as shown here:
using std::vector;
vector<int> vInts;
or you can fully qualify the name like this: std::vector<int> vInts;
assign <o:p>
Erases a vector and copies the specified elements to the empty vector. <o:p>
at <o:p>
Returns a reference to the element at a specified location in the vector
back <o:p>
Returns a reference to the last element of the vector
begin <o:p>
Returns a random-access iterator to the first element in the container. <o:p>
capacity <o:p>
Returns the number of elements that the vector could contain without allocating more storage. <o:p>
clear <o:p>
Erases the elements of the vector
empty <o:p>
Tests if the vector container is empty. <o:p>
end <o:p>
Returns a random-access iterator that points just beyond the end of the vector
erase <o:p>
Removes an element or a range of elements in a vector from specified positions.
front <o:p>
Returns a reference to the first element in a vector
get_allocator <o:p>
Returns an object to the allocator class used by a vector
insert <o:p>
Inserts an element or a number of elements into the vector at a specified position.
max_size <o:p>
Returns the maximum length of the vector
pop_back <o:p>
Deletes the element at the end of the vector
push_back <o:p>
Adds an element to the end of the vector
rbegin <o:p>
Returns an iterator to the first element in a reversed vector
rend <o:p>
Returns an iterator to the end of a reversed vector
resize <o:p>
Specifies a new size for a vector
reserve <o:p>
Reserves a minimum length of storage for a vector object.
size <o:p>
Returns the number of elements in the vector
swap <o:p>
Exchanges the elements of two vectors. <o:p>
vector <o:p>
Constructs a vector of a specific size or with elements of a specific value or with a specific allocator or as a copy of some other
operator [] <o:p>
Returns a reference to the <SPAN style="FONT-FAMILY: 'Courier New'">vector element at a specified position. <o:p>
<SPAN style="FONT-FAMILY: 'Courier New'">vector
There are several constructors for the vector container. Following are most commonly used constructors
vector<int> vInt;
vector<int> vInt(10);
vector<int> vInt(10, int(0));
vector<int> vIntA(vIntB);
std::vector<CItem> m_vItem;
m_vItem.push_back(Item);
Above code constructs m_vItem vector, which would hold objects of type CItem. puch_back would add element at the end of the vector.
std::vector<CItem>::iterator m_iItem;
for (m_iItem = m_vItem.begin();m_iItem != m_vItem.end(); m_iItem++)
{
index = m_List.InsertItem(0,m_iItem->GetNumber());
m_List.SetItemText(index,1,m_iItem->GetName());
}
Above code constructs m_iItem Iterator. m_vItem.begin() would initialize Iterator at the start of the vector and looping it untill it reaches m_vItem.end(). The code also add item into the List.
std::vector<CItem>::iterator end;
end = std::remove_if(m_vItem.begin(),m_vItem.end(),IsSelected);
m_vItem.erase(end,m_vItem.end());
Above code define end iterator. remove_if removes all elements which are in the range of m_vItem.begin() to m_vItem.end() and for which predicate IsSelected returns true. erase() function erase all the removed objects from vector.
IsSelected is a unary predicate. It takes one argument (object of type CItem) and returns true or false.
This article and sample project attached with it, would be good reference for all the developers who are just putting feet in the STL. It would also solve some basic queries of even experience STL developers.
Herbert Schildt (Complete Reference C++)
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
List - A linear list. - <list>
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/16627/A-study-of-STL-container-Iterator-and-Predicates | CC-MAIN-2018-43 | refinedweb | 1,263 | 57.67 |
Start coding something amazing with our library of open source Cloud code patterns. Content provided by IBM.
Do you need to run a process every day at the exact same time like an alarm? Then Spring’s scheduled tasks are for you. Allowing you to annotate a method with
@Scheduled causes it to run at the specific time or interval that is denoted inside it. In this post, we will look at setting up a project that can use scheduled tasks as well as how to use the different methods for defining when they execute.
I will be using Spring Boot for this post, making the dependencies nice and simple due to scheduling being available to the
spring-boot-starter dependency that will be included in pretty much every Spring Boot project in some way. This allows you to use any of the other starter dependencies, as they will pull in
spring-boot-starter and all its relationships. If you want to include the exact dependency itself, use
spring-context.
You could use
spring-boot-starter.
!"); } }
There is quite a lot of code here that has no importance to running a scheduled task. As I said a minute ago, we need to use
@Scheduled on a method, and it will start running automatically. So in the above example, the
create method will start running every 1000ms (1 second) as denoted by the
fixedRate property of the annotation. If we wanted to change how often it ran, we could increase or decrease the
fixedRate time, or we could consider using the different scheduling methods available to us.
So you probably want to know what these other ways are, right? Well, here they are (I will include
fixedRate here as well):
fixedRateexecutes the method with a fixed period of milliseconds between invocations.
fixedRateStringis the same as
fixedRatebut with a string value instead.
fixedDelayexecutes the method with a fixed period of milliseconds between the end of one invocation and the start of the next.
fixedDelayStringis the same as
fixedDelaybut with a string value instead.
cronuses cron-like expressions to determine when to execute the method (we will look at this more in depth later).
There are a few other utility properties available to the
@Scheduled annotation.
zoneindicates the time zone that the cron expression will be resolved for. If no time zone is included, it will use the server’s default time zone. So if you needed it to run for a specific time zone, say Hong Kong, you could use
zone = "GMT+8:00".
initialDelay us define the seconds, minutes ,and hours the task runs at but can go even further and specify even the years that a task will run in.
Below is a breakdown of the components that build a cron expression.
Secondscan have values
0-59or the special characters
, - * /.
Minutescan have values
0-59or the special characters
, - * /.
Hourscan have values
0-59or the special characters
, - * /.
Day of monthcan have values
1-31or the special characters
, - * ? / L W C.
Monthcan have values
1-12,
JAN-DECor the special characters
, - * /.
Day of weekcan have values
1-7,
SUN-SATor the special characters
, - * ? / L C #.
Yearcan be empty, have values
1970-2099or the special characters
, - * /.
Just for some extra clarity, I have combined the breakdown into an expression consisting of the field labels.
@Scheduled(cron = "[Seconds] [Minutes] [Hours] [Day of month] [Month] [Day of week] [Year]")
Please do not include the braces in your expressions (I used them to make the expression clearer).
Before we can and 45).
Lrepresents the last day of the week or month. Remember that Saturday is the end of the week in this context, so using
Lin the day of week field will trigger on a Saturday. This can be used in conjunction with a number in the day of month field, such as
6Lto represent the last Friday of the month or an expression like
L-3denoting the third from the last day of the month. If we specify a value in the day of week field, we must use
?in the day of month field, and vice versa.
Wrepresents the nearest weekday of the month. For example,
15Wwill trigger on the 15th day of the month if it is a weekday. Otherwise, it will run on the closest weekday. This value cannot be used in a list of day values.
#specifies both the day of the week and the week that the task should trigger. For example,
5#2means the second Thursday of the month. If the day and week you specified overflows into the next month, then it will not trigger.
A helpful resource with slightly longer explanations can be found here, which helped me write this post.); } }
Here, we have a class that is querying Cassandra every 20 seconds for the average value of events in the same time period. Again, most of the code here is noise from the
@Scheduled annotation, but it can be helpful to see it in the wild. Furthermore, if you have been observant, for this use-case of running every 20 seconds, using the
fixedRate and possibly the
fixedDelay properties instead of
cron would be suitable here as we are running the task so frequently:
@Scheduled(fixedRate = 20000)
Is the
fixedRate equivalent of the cron expression used above.
The final requirement, which I alluded to earlier, is to add the
@EnableScheduling annotation to a configuration class:
@SpringBootApplication @EnableScheduling public class Application { public static void main(final String args[]) { SpringApplication.run(Application.class); } }
Being that this is a small Spring Boot application, I have attached the
@EnableScheduling annotation to the main
@SpringBootApplication class.
In conclusion, we can schedule tasks to trigger using the
@Scheduled annotation along with either a millisecond rate between executions or a cron expression for finer timings that cannot be expressed with the former. For tasks that need to run very often, using the
fixedRate or
fixedDelay properties will suffice, but once the time between executions becomes larger, it will become harder to quickly determine the defined time. When this occurs, the
cron property should be used for better clarity of the scheduled timing.
The little amount of code used in this post can be found on my GitHub.
If you found this post helpful and wish to keep up to date with my new tutorials as I write them, follow me on Twitter at @LankyDanDev.
Code something amazing with the IBM library of open source blockchain patterns. Content provided by IBM.
Published at DZone with permission of Dan Newton , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/running-on-time-with-springs-scheduled-tasks | CC-MAIN-2018-47 | refinedweb | 1,112 | 63.19 |
On Mon, 27 Mar 2017 16:26:33 +0200Peter Zijlstra <peterz@infradead.org> wrote:> On Fri, Mar 24, 2017 at 04:53:01AM +0100, luca abeni wrote:> > From: Luca Abeni <luca.abeni@santannapisa.it>> > > > Instead of decreasing the runtime as "dq = -Uact dt" (eventually> > divided by the maximum utilization available for deadline tasks),> > decrease it as "dq = -(1 - Uinact) dt", where Uinact is the> > "inactive utilization". > > > In this way, the maximum fraction of CPU time that can be reclaimed> > is given by the total utilization of deadline tasks.> > This approach solves some fairness issues that have been noticed> > with "traditional" global GRUB reclaiming. > > I think the Changelog could do with explicit enumeration of what> "some" is.Sorry, when writing the changelog I've been lazy; I'll add a link toDaniel's email showing the problem in action.> > Signed-off-by: Luca Abeni <luca.abeni@santannapisa.it>> > Tested-by: Daniel Bristot de Oliveira <bristot@redhat.com>> > ---> > kernel/sched/deadline.c | 23 ++++++++++++++++-------> > 1 file changed, 16 insertions(+), 7 deletions(-)> > > > diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c> > index d70a7b9..c393c3d 100644> > --- a/kernel/sched/deadline.c> > +++ b/kernel/sched/deadline.c> > @@ -900,14 +900,23 @@ extern bool sched_rt_bandwidth_account(struct> > rt_rq *rt_rq); /*> > * This function implements the GRUB accounting rule:> > * according to the GRUB reclaiming algorithm, the runtime is> > + * not decreased as "dq = -dt", but as "dq = (1 - Uinact) dt",> > where > > Changelog had it right I think: dq = -(1 - Uinact) dtSorry about the typo... I'll fix it> > + * Uinact is the (per-runqueue) inactive utilization, computed as> > the> > + * difference between the "total runqueue utilization" and the> > runqueue> > + * active utilization.> > + * Since rq->dl.running_bw and rq->dl.this_bw contain utilizations> > + * multiplied by 2^20, the result has to be shifted right by 20.> > */> > -u64 grub_reclaim(u64 delta, struct rq *rq)> > +u64 grub_reclaim(u64 delta, struct rq *rq, u64 u)> > {> > + u64 u_act;> > +> > + if (rq->dl.this_bw - rq->dl.running_bw > (1 << 20) - u)> > + u_act = u;> > + else> > + u_act = (1 << 20) - rq->dl.this_bw +> > rq->dl.running_bw; +> > + return (delta * u_act) >> 20; > > But that's not what is done here I think, something like this instead:> > Uinact = Utot - Uact> > -t_u dt ; Uinact > (1 - t_u)> dq = {> -(1 - Uinact) dt> > > And nowhere do we have an explanation for that.Sorry about this confusion... The accounting should be dq = -(1 - Uinact)dtbut if (1 - Uinact) is too large (larger than the task's utilization)then we use the task's utilization instead (otherwise, we end upreclaiming other runqueues' time). I realized that this check wasneeded after writing the comments, and I forgot to update the commentswhen I fixed the code :(> Now, I suspect we can write that like: dq = -max{ t_u, (1 - Uinact) }> dt, which would suggest this is a sanity check on Utot, which I> suspect can be over 1. Is this what is happening?Right... I'll fix the code and comments according to your suggestion. Thanks, Luca> ;> > /*> * What we want to write is:> *> * max(BW_UNIT - u_inact, dl_se->dl_bw)> *> * but we cannot do that since Utot can be larger than 1,> * which means u_inact can be larger than 1, which would> * have the above result in negative values.> */> if (u_inact > (BW_UNIT - dl_se->dl_bw))> u_act = dl_se->dl_bw;> else> u_act = BW_UNIT - u_inact;> > return (delta * u_act) >> BW_SHIFT;> }> > Hmm? | https://lkml.org/lkml/2017/3/27/510 | CC-MAIN-2021-17 | refinedweb | 539 | 61.77 |
Getting Started with Couchbase and Spring Data Couchbase
Getting Started with Couchbase and Spring Data Couchbase
Join the DZone community and get the full member experience.Join For Free
Written by Josh Long on the Spring blog.
This blog was inspired by a talk that Laurent Doguin, a developer advocate over at Couchbase, and I gave at Couchbase Connect last year. Merci Laurent!
This is a demo of the Spring Data Couchbase integration. From the project page, Spring Data Couchbase is:
The Spring Data Couchbase project provides integration with the Couchbase Server database. Key functional areas of Spring Data Couchbase are a POJO centric model for interacting with Couchbase Buckets and easily writing a Repository style data access layer.
What is Couchbase?
Couchbase is a distributed data-store that enjoys true horizontal scaling. I like to think of it as a mix of Redis and MongoDB: you work with documents that are accessed through their keys. There are numerous client APIs for all languages. If you’re using Couchbase for your backend and using the JVM, you’ll love Spring Data Couchbase. The bullets on the project home page best enumerate its many features:
- Spring configuration support using Java based
@Configurationclasses or an XML namespace for the Couchbase driver. interfaces including support for custom finder methods (backed by Couchbase Views).
- JMX administration and monitoring
- Transparent
@Cacheablesupport to cache any objects you need for high performance access.
Running Couchbase
Use Vagrant to Run Couchbase Locally
You will need to have Couchbase installed if you don’t already (naturally). Michael Nitschinger (@daschl, also lead of the Spring Data Couchbase project), blogged about how to get a simple4-node Vagrant cluster up and running here. I’ve reproduced his example here in the
vagrantdirectory. To use it, you’ll need to install Virtual Box and Vagrant, of course, but then simply run
vagrant up in the
vagrant directory. To get the most up-to-date version of this configuration script, I went to Michael’s GitHub
vagrants project and found that, beyond this example, there are numerous other Vagrant scripts available. I have a submodule in this code’s project directory that points to that, but be sure to consult that for the latest-and-greatest. To get everything running on my machine, I chose the Ubuntu 12 installation of Couchbase 3.0.2. You can change how many nodes are started by configuring the
VAGRANT_NODES environment variable before startup:
VAGRANT_NODES=2 vagrant up
You’ll need to administer and configure Couchbase on initial setup. Point your browser to the right IP for each node. The rules for determining that IP are well described in the
README. The admin interface, in my case, was available at
192.168.105.101:8091 and
192.168.105.102:8091. For more on this process, I recommend that you follow theguidelines here for the details.
Here’s how I did it. I hit the admin interface on the first node and created a new cluster. I used
admin for the username and
password for the password. On all subsequent management pages, I simply joined the existing cluster by pointing the nodes to
192.168.105.101 and using the aforementioned
admin credential. Once you’ve joined all nodes, look for the
Rebalance button in the Server Nodes panel and trigger a cluster rebalance.
If you are done with your Vagrant cluster, you can use the
vagrant halt command to shut it down cleanly. Very handy is also
vagrant suspend, which will save the state of the nodes instead of shutting them down completely.
If you want to administer the Couchbase cluster from the command line there is the handy
couchbase-cli. You can simply use the
vagrant ssh command to get into each of the nodes (by their node-names:
node1,
node2, etc..). Once there, you can run cluster configuration commands. For example the
server-list command will enumerate cluster nodes.
/opt/couchbase/bin/couchbase-cli server-list -c 192.168.56.101-u admin -p password
It’s easy to trigger a rebalance using:
/opt/couchbase/bin/couchbase-cli rebalance -c 192.168.56.101-u admin -p password
Couchbase In the Cloud and on Cloud Foundry
Couchbase lends itself to use in the cloud. It’s horizontally scalable (like Gemfire or Cassandra) in that there’s no single point of failure. It does not employ a master-slave or active/passive system. There are a few ways to get it up and running where your applications are running. If you’re running a Cloud Foundry installation, then you can install the the Cumulogic Service Broker which then lets your Cloud Foundry installation talk to the Cumulogic platform which itself can manage Couchbase instances. Service brokers are the bit of integration code that teach Cloud Foundry how to provision, destroy and generally interact with a managed service, like Couchbase, in this case.
Using Spring Data Couchbase to Store Facebook Places
Let’s look at a simple example that reads data (in this case from the Facebook Places API using Spring Social Facebook’s
FacebookTemplate API) and then loads it into the Couchbase server.
Get a Facebook Access Token
You’ll also need a Facebook access token. The easiest way to do this is to go to the Facebook Developer Portal and create a new application and then get an application ID and an application secret. Take these two values and concatenate them with a pike character (
|). Thus, you’ll have something of the form:
appID|appSecret. The sample application uses Spring’s
Environment mechanism to resolve the
facebook.accessToken key. You can provide a value for it in the
src/main/resources/application.properties file or using any of the other supported Spring Boot property resolution mechanisms. You could even provide the value as a
-D argument:
-Dfacebook.accessToken=...|...
Telling Spring Data Couchbase About our Cluster
Data in Couchbase is stored in buckets. It’s logically the same as a database in a SQL RDBMS. It is typically replicated across nodes and has its own configuration. We’ll be using the defaultbucket, but it’s a snap to create more buckets.
Let’s look at the basic configuration required to use Spring Data Couchbase (in this case, in terms of a Spring Boot application):
@SpringBootApplication @EnableScheduling @EnableCaching public class Application { @EnableCouchbaseRepositories @Configuration static class CouchbaseConfiguration extends AbstractCouchbaseConfiguration { @Value("${couchbase.cluster.bucket}") private String bucketName; @Value("${couchbase.cluster.password}") private String password; @Value("${couchbase.cluster.ip}") private String ip; @Override protected List<String> bootstrapHosts() { return Arrays.asList(this.ip); } @Override protected String getBucketName() { return this.bucketName; } @Override protected String getBucketPassword() { return this.password; } } // more beans }
A Spring Data Couchbase Repository
Spring Data provides the notion of repositories - objects that handle typical data-access logic and provide convention-based queries. They can be used to map POJOs to data in the backing data store.
Our example simply stores the information on businesses it reads from Facebook’s Places API. To acheive this we’ve created a simple
Place entity that Spring Data Couchbase repositories will know how to persist:
@Document(expiry = 0) class Place { @Id private String id; @Field private Location location; @Field @NotNull private String name; @Field private String affilitation, category, description, about; @Field private Date insertionDate; // .. getters, constructors, toString, etc }
The
Place entity references another entity,
Location, which is basically the same.
In the case of Spring Data Couchbase, repository finder methods map to views - queries written in JavaScript - in a Couchbase server. You’ll need to setup views on the Couchbase servers. Go to any Couchbase server’s admin console and visit the Views screen, then clickCreate Development View and name it
place, as our entity will be
demo.Place (the development view name is adapted from the entity’s class name by default).
We’ll create two views, the generic
all, which is required for any Spring Data Couchbase POJO, and the
byName view, which will be used to drive the repository’s
findByName finder method. This mapping is by convention, though you can override which view is employed with the
@View annotation on the finder method’s declaration.
First,
all:
Now,
byName:
When you’re done, be sure to Publish each view!
Now you can use Spring Data repositories as you’d expect. The only thing that’s a bit different about these repositories is that we’re declaring a Spring Data Couchbase
Query type for the argument to the
findByName finder method, not a String. Using the
@Query is straightforward:
Query query = new Query(); query.setKey("Philz Coffee"); Collection<Place> places = placeRepository.findByName(query); places.forEach(System.out::println);
Where to go from Here
We’ve only covered some of the basics here. Spring Data Couchbase supports the Java bean validation API, and can be configured to honor validation constraints on its entities. Spring Data Couchbase also provides lower-level access to the
CouchbaseClient API, if you want it. Spring Data Couchbase also implements the Spring
CacheManager abstraction - you can use
@Cacheable and friends with data on service methods and it’ll be transparently persisted to Couchbase for you.
The code for this example is in my Github repository, co-developed with my pal Laurent Doguin (@ldoguin) over at Couchbase. }} | https://dzone.com/articles/getting-started-couchbase-and | CC-MAIN-2020-24 | refinedweb | 1,539 | 54.93 |
Hi,
I'm having problems with server cleanup callbacks
(on windows, with apache 2.0.49 and mod_python 3.1.3):
For instance, with
def endChild(data):
f = file("test.log","a")
try:
print >> f, "endChild", data
finally:
f.close()
firstTime = 1
def handler(req):
if firstTime:
global firstTime
req.server.register_cleanup(req,endChild,("endChild data",)
firstTime = 0
Sometimes, the endChild function runs normally,
but sometimes, I get this in apache error.log:
[notice] Child 1284: Exit event signaled. Child process is ending.
[notice] Child 1284: Released the start mutex
[notice] Child 1284: Waiting for 250 worker threads to exit.
[notice] Child 1284: All worker threads have exited.
[error] python_cleanup: Error calling cleanup object <function endChild at
0x005FFC30>
[error] exceptions.IOError: file() constructor not accessible in
restricted mode
[notice] Child 1284: Child process is exiting
[notice] Parent: Child process exited successfully.
Why is python thinking it is in restricted mode?
I'm prepared to launch my debugger, but I've no clue
where to search first, so any hint is most welcome.
Thanks in advance.
-sbi | https://modpython.org/pipermail/mod_python/2004-April/015475.html | CC-MAIN-2022-33 | refinedweb | 176 | 60.92 |
Code-Behind and XAML
Code-behind is a term used to describe the code that is joined with the code that is created by a XAML loader when a XAML page is compiled into an application. This topic describes requirements for code-behind as well as an alternative inline code mechanism for code in XAML.
This topic contains the following sections.
- Prerequisites
- Code-behind, Event Handler, and Partial Class Requirements
- x:Code
- Inline Code Limitations
- Related Topics
Prerequisites
This topic assumes that you have read the XAML Overview and have some basic knowledge of the CLR and object-oriented programming.
Code-behind, Event Handler, and Partial Class Requirements namespace identified by x:Class. You cannot qualify the name of an event handler to instruct a XAML loader cannot support all of the specific features of the WPF event system,such as certain routed event scenarios or attached events. For details, see Visual Basic and WPF Event Handling.
x:Code
x:Code is a directive element defined in XAML that can contain inline programming code. The code that is defined inline can interact with the XAML on the same page. The following example illustrates inline C# code. Notice that the code is inside the x:Code element and that the code must be surrounded by <CDATA[...]]> to escape it for XML, so that a XAML reader (interpreting either the XAML schema or the WPF schema) will not try to interpret the code literally.
<Page xmlns="" xmlns: <Button Name="button1" Click="Clicked">Click Me!</Button> <x:Code><![CDATA[ void Clicked(object sender, RoutedEventArgs e) { button1.Content = "Hello World"; } ]]></x:Code> </Page>
Inline Code Limitations
You should consider avoiding or limiting the use of inline code for a XAML based application. In terms of architecture and coding philosophy, keeping markup and code-behind separated also keeps the designer and developer roles much more distinct. On a more technical level, the code that you put inline can be awkward to write, because you are always writing into the XAML page's generated partial class, and can only use the default APIs contained within the other namespaces. You also cannot define multiple classes in the inline code, and all code entities must exist as a member or variable within the generated partial class. Other language specific programming features, such as #ifdef against global variables or build variables, are also not available. For more information, see x:Code XAML Directive Element. | http://msdn.microsoft.com/en-us/library/vstudio/aa970568(v=vs.85).aspx | CC-MAIN-2014-52 | refinedweb | 405 | 52.09 |
25 March 2010 13:56 [Source: ICIS news]
TORONTO (ICIS news)--Canada’s Greenfield Ethanol and US firm DNP Green Technology plan to build a $50m (€37m) biobased succinic acid refinery to produce de-icing products - the first commercial venture of its kind in North America, they said on Thursday.
The facility would use technology licensed from Bioamber. The preferred location was ?xml:namespace>
Biorenewable de-icer had a “negative carbon footprint” and was less corrosive than traditional de-icers, they said.
DNP Green president Jean-Francois Huc said the plant would be
Capacity details or timelines for construction and completion were not disclosed.
Under a letter of intent,
( | http://www.icis.com/Articles/2010/03/25/9345908/canadas-greenfield-to-build-50m-succinic-acid-bio.html | CC-MAIN-2014-15 | refinedweb | 109 | 54.42 |
table of contents
NAME¶
getgrent, setgrent, endgrent - get group file entry
SYNOPSIS¶
#include <sys/types.h> #include <grp.h>
struct group *getgrent(void);
void setgrent(void);
void endgrent(void);
setgrent():
|| /* Glibc since 2.19: */ _DEFAULT_SOURCE
|| /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE
getgrent(), endgrent():
_XOPEN_SOURCE >= 500 ||
_DEFAULT_SOURCE
Glibc 2.21 and earlier
_XOPEN_SOURCE >= 500
|| /* Since glibc 2.12: */ _POSIX_C_SOURCE >= 200809L
|| /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE
DESCRIPTION¶ VALUE¶¶
-¶
- /etc/group
- local group database file
ATTRIBUTES¶.
CONFORMING TO¶
POSIX.1-2001, POSIX.1-2008, SVr4, 4.3BSD.
SEE ALSO¶
fgetgrent(3), getgrent_r(3), getgrgid(3), getgrnam(3), getgrouplist(3), putgrent(3), group(5)
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at | https://dyn.manpages.debian.org/unstable/manpages-dev/getgrent.3.en.html | CC-MAIN-2022-21 | refinedweb | 136 | 54.39 |
How to.make a simple app trabalhos
We are working on our eCommerce site and we need help to create mock ups for every single product we offer, Such as business cards, banners, invitations and much more.
I have my Website already built on wordpress i just want to make it live on my [fazer login para ver a URL] server. I have multiple domain and website hosted on [fazer login para ver a URL] Thanks, Decode1!
Make a website like this one: [fazer login para ver, ch...
I need my own smtp that will replace sendgrid use in my website for sending our alert, etc
I give you a website and you test 20 pages in Gmetrix and put in an excel file. Total of 48 websites. More details to qualified bidders. Cheers! Matthew D. Golden
wordpress website data enter and make the template Arabic and English languages
i want someone to create a video ad i will send example
Fix pagination issue for WordPress and ads clicks not redirect each link.... My website www,[fazer login para ver a URL]
I want to draw a logo for my future business. I have the idea and want to see more variants and choose the one I like. I am practicing FENG SHUI and according to the symbols I want integrated in the drawing I want to make my drawing. ...)
Hello guys, i'm looking for few usa based freelancers for some virtual assistant tasks. Please apply... lowest rate preferred. suitable for new freelancers looking for good feedback. Thanks
Need heather charcoal texture put on to existing t shirt
I need a new website. I need you to design and build a website for my small business.
"Kamali Paragliding" Its about Paragliding. We fly tandem, we teach the first steps of flying but mainly we going for traveling to fly with students all over the earth. Kamali its a name of the old American indian culture and means protection/ spirit. We looking for a logo for kamali paragliding. Its should be about flying ( bird, paraglider, feder), nature can be included (mountains...
i need to build a simple gaming website
i need a logo for my company. a cover page for facebook also
This is a custom-built PHP website currently on a Virtual Private Server that is super old. The code is old, so is the server. Given how old the site is I am not sure if it requires PHP4 or even older or what and can be done - it might be such old code that a new host won't support it, I am not sure. A client using the host already is concerned a client has said they get spam email now and WANT MAKE WEBSITE FOR MY BUSNIESS IT SHOULD BE OF 8-10 PAGES PLEASE GIVE ME A THE YOU CAN DO SO WE CAN DO WORK TOGETHER THANK YOU
Create 5 simple graphics: You see the blue button with star, favorite. I need it the same with other 3, not grey, white with blue background. Additionally a white X in blue background and a white confirm button with the blue background. [fazer login para ver a URL]
i have about 60-70 questions in excel sheets i want to to be made into a quiz app, with level and everything
i need to make a face book page to a medical company to increase the marking and the interackting with coustomers and relase posts on it
hi i need to make my website responsive , its PHP website , i want someone who is expert in PHP , and start right away and make it asap. if you are serious about work , i will give you Task after Task, as permanent work. if you are ready lets start right away. " needed best offer for long terms work " thanks.
I’m starting a YouTube Channel next month and my first video will be of Disneyland Paris! I need an intro video, something mature, white and pale pink themed if possible, using my name (Laura Anne)!
Hello, i need new business cards design and price menu design. I need someone creative with experience, who can create something elegant and good quality.
We need Laravel developer to make some pages responsive with mobile and tablet. The developer has to be very accurate and attentive to details and to be able to make the page exactly as per provided design, following the same proportions, sizes, etc.
Hi, I would like to hire a programmer that can convert existing Joomla Template to have the following features : 1. Autofit on all desktop size 2. Sample website that are autofit are as the following link [fazer login para ver a URL] Autofit means when view using desktop pc, it will show only single page and will not able to scroll. Link of the existing Joomla Website that need to be conver...
To make a 5 mins video on awareness about Renewable energy & Solar power. This includes Videography & Video Editing in Mumbai to apply to this scholarship for university ASAP. I don't mind where you live but preferably in America because you have to mail the poster to them before the 14th. Info here: [fazer login para ver a URL] [fazer login para ver a URL] [fazer login para ver a URL] I am happy to pay for the mailing, but please include the fees in your bid. Thank you.
I need an Android app. I would like it designed and [fazer login para ver a URL] in the play store for the app details
Build my wordpress website and make it live on my hosting
Make 2 videos for my channel. you have to make same video as this - [fazer login para ver a URL] use same notification as shown in the video, use same starting words, use same ending words. But you have to write the content on your own and take idea from playstore and which game to put in which position, i will tell you that. Check out move videos of on same top, means top 10 gaming series, The ga...
2 Simple CSS Changes Required on core php website. Interested candidates please bid your lowest rate per hour.
Title says it all. At this point, I just need to be able to do that. Video and audio should play at correct speed when imported and exported.
from qiskit import * qr = QuantumRegister(3) cr = ClassicalRegister(3) circ = QuantumCircuit(qr,cr) circ.h(qr[0]) circ.h(qr[1]) circ.h(qr[2]) [fazer login para ver a URL](qr,cr) #============================# # # now cr[0] maybe 0 or 1 # # I want to know or copy its value to another normal variable to use it in my calculation, # something like: if(cr[0]==0) cr_val=0 else cr_val=1
///////what you should do is port the following python code to matlab import sys import base64 import requests import json # put desired file path here file_path = '[fazer login para ver a URL]' image_uri = "data:image/jpg;base64," + [fazer login para ver a URL](open(file_path, "rb").read()).decode() r = [fazer login para ver a URL]("[fazer login para ver a URL]...
I want a simple single webpage, to be made responsive using bootstrap. I need it ASAP.
Oculus App Developer Need Oculus Rift apps for the oculus store. Must have previously made a virtual reality app.
How to send data in html form and how to retreive it. I'm using a .html file to send and a .php file to receive the data, but my code must be wrong...
Transfer the data from “share with me” folders which is download locked to “My Drive” in another account
I need your expert help to make operational a previously working Exchange 2010 Server installed on a Server 2008 R2 VM. The original DC VM had problems and no longer boots. On the Server 2008 R2 VM which has the Exchange 2010 Server, I ran DCPROMO and removed it from the first Domain and removed it's AD GC Roles. I then rebooted and ran DCPROMO again and created a new domain in a new for...
I need some help with selling Leather jackets in international market place. As we are an genuine leather jacket manufacturer from India and need people who can help us in making deal with people who deals in leather jacket we do deals only in bulk quantity. Company name Lajacket, Website- [fazer login para ver a URL] I treated people so contact and wholesale price is much lesser then website pric.... T.... Short sound alert and Long ...
Hello, I am needing help from someone with a keen eye for detail with a particular psd file. I would like help improving the screen that is shown included in the attached psd (spacing, size etc) and to also create a version with only one circle (representing one way flight) and the appropriate information related to that screen. Price for this task approx $50 as they are small changes.
(Hello) so I will send you some photos of the girl i want you to make on honey select unlimited with NO MODS. Its importand to make the face amd body with good detail. | https://www.br.freelancer.com/job-search/how-to.make-a-simple-app/5/ | CC-MAIN-2019-30 | refinedweb | 1,539 | 69.41 |
21600
I think it's a bug in Tensorflow and you should open a bug thread about it.
In any case, to workaround this issue, you can use tf.identity to create a float64_ref instead of the float64 x and pass this value as the inputs parameter.
import tensorflow as tf
import numpy as np
with tf.Session() as sess:
x = tf.Variable(np.ones((2, 3)))
sess.run(tf.initialize_all_variables())
out, state = tf.nn.rnn_cell.BasicRNNCell(4)(tf.identity(x), x)
Some of the modules have changed in ...READ MORE
Some of the modules have changed in ...READ MORE
You need to download and install the ...READ MORE
Seems like the pycrypto package is not ...READ MORE
Hi @Vasgi, for the code you're trying ...READ MORE
Use the following command to install tkinter ...READ MORE
sudo apt-get install python3-tk
Then,
>> import tkinter # ...READ MORE
Hi @Hannah, replace
import SpeechRecognition as sr
with
import speech_recognition ...READ MORE
It's easy to do, but hard to ...READ MORE
Here's the short answer:
disable any antivirus or ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/21600/tensorflow-importerror-module-named-tensorflow-python-client | CC-MAIN-2020-16 | refinedweb | 188 | 70.6 |
Hi, Silly question from a new C programmer... I get a segmentation fault in the following code:
#include <stdio.h>
int main(void)
{
double YRaw[4000000]={0};
return 0;
}
Using GDB, I get the following comment:
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_PROTECTION_FAILURE at address: 0x00007fff5dd7b148
0x0000000100000f24 in main () at talk2me.c:18
18 double YRaw[4000000]={0}; // set YRaw[memdepth] so index is 0 to memdepth-1
Everything works find if I reduce the size of the YRaw array by a factor of 10. I have 6GB of RAM in the system, so why do I get an error? Thanks, Gkk | http://cboard.cprogramming.com/c-programming/129519-problem-initializing-double-array-large-array-printable-thread.html | CC-MAIN-2014-10 | refinedweb | 105 | 72.36 |
Click on the x in upper right corner? Not sure what you are looking for but if you remove the Cancel button, clicking on the x will cancel the form.
The struggle is to be able to hit Escape and not have Cancel button.
Ok, I’m dumb. I just don’t populate the form with Abort button…
Are you saying you want to hit the Esc key on the keyboard to cancel the Eto form? Not really understanding what you are after.
You say Abort button but the form shows a Cancel button. I am assuming these are the same.
As you point out, you can just omit putting the Cancel button on your form.
Yep, I just called it Cancel.
I get really good results with my Eto forms after a really long learning curve. But then I heavily customized the functionality of the fields on my forms because I did not like some of their behaviors. It took awhile to sort thru the options and pick those that best matched my preferences.
Try handling the form’s
KeyDown event.
self.KeyDown += self.OnKeyDown
and
def OnKeyDown(self, e): if e.Key == forms.Keys.Escape: self.Close(False)
TestDialog.py (1.4 KB)
– Dale
This is exactly what I was looking for. Too noob. | https://discourse.mcneel.com/t/possible-to-escape-out-of-eto-form-without-creating-displaying-abort-button/126575 | CC-MAIN-2021-43 | refinedweb | 216 | 75.5 |
Cross-compiling to ARM with QtCreator 5.x on Windows
I have been using QtCreator 5.x to develop MS-Windows applications, but now need to develop desktop applications on Raspberry Pi2 and BeagleBone Black. I understand that I need ARM libraries and a cross-compiling tool chain.
In researching how to do this I have encountered lots of articles written about failure or articles that relate to old and deprecated versions and utilities. No success stories apart from using Linux as the development host platform.
I have managed to get QtCreataor 3.2.1 (Qt 5.3.2) running and developing on the Raspberry Pi-2 (Raspbian/Jessie) but it is too slow for any real work.
Is it possible to use QtCreator on Windows to develop, compile, deploy and debug desktop applications on Embedded Linux ARM devices? I am currently running Windows 7 64-bit.
Any tips or suggestions on how to do this are welcome.
Thanks.
Yes, it is. The best approach I could find was to compile Qt using the msys2 environment in association with the right cross compiler for the target platform. You should use the configure script not the bat file. After compiling, just add this new build manually to Qt Creator and it's done.
Here's an example of a configuration I have for an ARM Linux:
./configure -platform win32-g++ -xplatform linux-arm-gnueabi-g++ -prefix C:/Qt/5.5/arm -no-icu -plugin-sql-sqlite -nomake examples -no-compile-examples -nomake tests -openssl -release -v -qreal float -skip qtwebkit -qt-zlib -skip translations -eglfs -shared -force-debug-info -opengl
Some libraries might be missing on the host machine. In that case, you should copy them from the target system to your computer and add their paths to the command line above. For example:
-L C:/sysroot/usr/lib -L C:/sysroot/usr/local/lib
@Leonardo Thank you for your reply Leonardo. Sorry, but I'm afraid that a lot of it went over my head.
If I understand correctly, the only way is to compile QtCreator from source on the host (Windows 7) system?
Since my Pi-2 had Qt 5.3.2 working, I downloaded "qt-opensource-windows-x86-android-5.3.2.exe" from Qt. I thought that since this Qt/Windows install already had support for ARM-7, it would only require a cross-compiler installing and configuring, such as Yagarto or GCC ARM. Is this completely naive of me?
If I attempt your approach, then what compiler do you think I should I use on the host? Visual Studio Express? MIN GW?
Thank you.
Qt and QtCreator are different. QtCreator is the IDE. Qt is the set of tools and libraries, the framework. Qt has components that should run on the host machine (qmake, moc, etc) and components that should run on the target machine (libraries mostly).
You can see on the configuration line above that we have two parameters: -platform and -xplaform. The former specifies the host machine, and the latter the target machine. You can't use the android build for raspberry. Both are ARM, but that's all. Other than that, they have few things in common.
So you already have QtCreator for Windows. It's the same whatever system you're targeting. You only need to compile Qt. First of all, you need a cross compiler. I have no experience with Raspberry Pi, but I would say this one should do it:
Add it to your PATH. For the host you can use either mingw or visual studio. It's up to you. After that try to compile Qt using the msys2 environment.
./configure -platform win32-g++ -xplatform linux-arm-gnueabihf-g++ -device linux-rasp-pi2-g++ ...
-device should account for board specific settings. Qt has no mkspec for "linux-arm-gnueabihf-g++". Just duplicate the "linux-arm-gnueabi-g++" one and add the "hf" everywhere.
Indeed there are very few resources out there about Qt and cross compiling for Windows. It took me quite some time to understand it. Maybe you should get more familiar with the usual Qt compilation process first and only then try cross compiling.
@Leonardo When you explained that I cannot use Qt ARM for Raspberry Pi a light was turned on at the end of the tunnel. Of course it isn't and I see that now.
I have done some background reading and now believe that I have a better understanding, although there are still some gaps.
So, I need Qt creator on the development host (Windows). This can be downloaded.
I still don't quit understand where the MSYS2 comes in. I understand that this is a "Linux" type environment that runs on top of Windows. I understand that this will provide the minimum directory structure for compiling Qt.
Assuming I compile the Qt development libraries, does this remain in MSYS2 or is it moved into the C:\Qt directory?
Are you saying that I have to cross-compile Qt for the Raspberry Pi-2?
I just don't understand what Qt I need to compile where.
Sorry to be a pain. I promise to share this information once I get there.
Hi. I recommend the msys2 environment because the "configure.bat" that is used on Windows doesn't know how to compile to any target other than Windows. Therefore we need to use the "configure" shell script, that's used on Linux. msys2 was the best tool capable of running it that I could find for this purpose. It's also integrated with the mingw-w64 compiler, so you just need to get a cross compiler and you have all you need.
You need to cross-compile Qt's source code. See here:
No success. I have managed to overcome a number of issues, but I am stuck with the current problem and I am not sure if my overall approach is correct?
I started with a virgin install of Windows 7 (service pack 1) 64-bit, with all updates performed. This is hosted on a 4GHz Intel i7 system with 16GB RAM and 1TB 7200RPM hard drive.
Qt Creator
Qt Creator 5.5.1 was downloaded, installed and a trivial project completed to test the install. It is running 32-bit because I opted for the MINGW version.
qt-opensource-windows-x86-mingw492-5.5.1.exe
MSYS2
MSYS2 was downloaded and installed. I chose 64-bit since I am using Windows 7 64-bit. installation/
msys2-x86_64-20150916.exe
Raspberry Pi Toolchain
Just as you recommended I downloaded and installed the toolchain for Raspberry Pi.
raspberry-gcc-4.9.2.exe
I made sure that the PATH environment variable was set and checked that “arm-linux-gnueabihf-g++” was executable within MSYS2.
Qt 5.5.1
Then I downloaded the Qt 5.5.1 source. Since I was using Windows 7 I downloaded the ZIP version and used 7Zip to extract the files. These were extracted to C:\qtsource being a name consisting of simple ASCII characters.
qt-everywhere-opensource-src-5.5.1.zip
I then navigated to C:\qtsource\qtbase\mkspecs and copied the \linux-arm-gnueabi-g++ directory to \linux-arm-gnueabihf-g++. I then edited the qmake.conf file to add “hf” (hard float) to all the named files. On checking with the Raspberry Pi toolchain I noticed that the executables within C:\SysGCC\Raspberry\bin used the name format “arm-linux-gnueabihf-” so I renamed the named files to this name format. Eg:
QMAKE_CC = arm-linux-gnueabihf-gcc
Configure
So now time to configure and make. I launched a MSYS2 Shell and copied a modified version of your example ./configure command line in. At first I encountered an (e=2) file not found error. After a lot of scratching around I realised that the g++ was not being found. In the Raspberry Pi toolchain bin directory I copied arm-linux-gnueabihf-g++.exe to g++.exe and tried again.
This time there was credible activity, but terminated with:
You're almost there! It's just that the last step is wrong. You should not have copied arm-linux-gnueabihf-g++.exe to g++.exe . As I said before, some tools are build for Windows and the libraries are build for Linux. Therefore, we need two compilers. If g++ is missing, it means that you're missing MinGW-w64 in your msys2 environment. First of all, check whether you have a shortcut named "MinGW-w64 Win64 Shell" in your start menu. If you do, that's the one you should be using. If you don't, you should install the package:
pacman -S mingw-w64-x86_64-gcc
Ah!
So the Raspberry Pi toolchain is used by QtCreator to build my projects for deployment on the Raspberry Pi device, whereas MinGW is used to build the Qt library for use within QtCreator on Windows?
In which case don't I need the MinGW-32 compiler environment, since I am using QtCreator MinGW 32-bit?
I deleted the g++.exe as you suggested and installed the MinGW-64 package into MSYS2. There was encouraging extivity but again it failed with:
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/5.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: project.o: Relocations in generic ELF (EM: 40)
project.o: error adding symbols: File in wrong format
collect2.exe: error: ld returned 1 exit status
make: *** [../bin/qmake.exe] Error 1
I don't get what you said about the compilers, but forget it for now.
It seems like you have a dirty source tree. That's why you get that error. Delete your "qtsource" directory and extract the zip file again.
Thank you.
So, I deleted the existing C:\qtsource and unpacked a new one. I then duplicated the \linux-arm-gnueabi-g++ directory to \linux-arm-gnueabihf-g++. Then I edited the qmake.conf to add "hf" to the file names.
I then removed MSYS2 and reinstalled. I used the command you suggested to install MinGW64. I also used the command pacman -S diffutils as I had been getting the error "cmp: Command not found".
I then run the configure statement:
./configure -platform win32-g++ -xplatform linux-arm-gnueabihf-g++ -device linux-rasp-pi2-g++
There was a lot of activity this time ending in the following error:
Running configuration tests...
Failed to process makespec for platform 'devices/linux-rasp-pi2-g++'
Project ERROR: CROSS_COMPILE needs to be set via -device-option CROSS_COMPILE=<p ath>
Could not read qmake configuration file C:/qtsource/qtbase/mkspecs/devices/linux -rasp-pi2-g++/qmake.conf.
Error processing project file: C:\msys64\tmp\empty-file
It's getting better. I've never used the "-device" parameter before, actually. Try something like this:
./configure -platform win32-g++ -device linux-rasp-pi-g++ -device-option CROSS_COMPILE=arm-linux-gnueabihf-
By the way, you can build it outside the source tree, so you don't need to delete it every time. Create a folder like C:\qtbuild and call the configure script from it.
/c/qtbuild> ../qtsource/configure -platform.....
Thanks for the tip building outside the source tree.
Things seem to be going backwards now. I have this tchar.h: No such file or directory error back. I am using a clean C:\qtsource and C:\qtbuild. I have been struggling with this for about an hour.
I checked and there is a device definition for "linux-rasp-pi2-g++" in C:\qtsource\qtbase\mkspecs\devices.
The configure statement is this:
../qtsource/configure -platform win32-g++ -xplatform linux-arm-gnueabihf
This is the resulting error:
C:/qtsource/qtbase/mkspecs/win32-g++/qplatformdefs.h:47:19: fatal error: tchar.h: No such file or directory
#include <tchar.h>
^
compilation terminated.
make: *** [makefiledeps.o] Error 1
Oh, I'm sorry. My mistake. You need the whole toolchain. The command line I've posted before was only for gcc. Here:
pacman -S mingw-w64-x86_64-toolchain
By the way, now you should remove the -xplatform parameter, as you're using CROSS_COMPILE.
Don't forget to clear your qtbuild folder, just to be sure.
Hi Leonardo. Sorry, I am still getting the same error:
I am using this configure statement with -xcompile omitted:
../qtsource/configure -platform win32
I installed "all" packages using the pacman command that you suggested. I also deleted and create a new C:\qtbuild directory.
Edit #1
I did some digging around and can confirm that tchar.h is located in:
C:\msys64\mingw64\x86_64-w64-mingw32\include
Edit #2
After reading more Qt documentation (why don't they date their documentation?), I suspect that this latest error is arising from not having and specifying a SYSROOT. I could be wrong of course. Am I required to have a Raspberry Pi image available on the Windows 7 platform to provide this sysroot? It seems to be a step taken in the instructions for cross-compiling Qt for RaspberryPi2 from Ubuntu.
I also struggle with is the role of the two toolchains here. There is the MinGW64 hosted within MSYS2. There is also the Win64 toolchain for Raspberry Pi downloaded from SysProgs.
Hi JaffaMicroBrain.
I have the same problem.
I try to croos compile from windows and it seems that the file tchar.h is missing.
I've followed this tuto
but still having the same problem.
Have you found any solutions to this ?
Regards,
- MikePelton
After quite some hours of "fun" I managed to get the (excellent) set of instructions at:
to work on Windows 10. The trick was to start with a completely clean version of Raspbian, and then to install QT 5.5:
sudo apt-get install qt5-default
Note that I didn't also install Qt Creator on the PI.
If you then follow the instructions to the letter, all will be well.
I found the UpdateSysroot step painful as my network was having a bad day - rather than synching /opt it will be faster to synch just /opt/vc.
The make step has taken about 6 hours.
I had originally started out with more dev stuff installed on the PI and ran into issues with missing header files as you guys have done, but this way worked. Hope it works for you too! Regards, Mike
@MikePelton I am struggling to get the cross-compiling working for the past three days. while syncing my PI with the windows PC, I have seen all the paths in the above link you have mentioned, but i could not find the /opt. Could you please suggest me, where can be the problem. I have followed exactly the same steps as in except that my debian jessie was old. | https://forum.qt.io/topic/61094/cross-compiling-to-arm-with-qtcreator-5-x-on-windows | CC-MAIN-2018-09 | refinedweb | 2,451 | 67.86 |
Hi Vince,
I had the same problems but with the help of this list and my own
perserverance I found the answers.
To get souper95 to exit cleanly set up a batch file along the lines
as suggested in the souper readme.txt and call it say getmail.bat
Once created open explorer and right click on getmail.bat and
create a shortcut on the desktop. Right click and go into properties
and click on "close on exit" . You will find that when you double
click on the shortcut it will automatically close on exit.
With regard to aborting halfway through an email download and getting
souper to automatically import it, this caused me some trouble but
I have finally got it licked :)
Firstly manually download an email message (send yourself one) by
editing getmail.bat and REMming out the import commands. Run the
getmail.bat file. This will download your email message(s) as well
as create an AREAS. file. Copy the areas. file to a safe
location on your hard drive (I have mine in the \online directory).
Once you have done this manaully run import -u from a dos prompt to
import your mail.
Your next step is to edit your getmail.bat so that it niow resembles
something like mine (shown below)
GETMAIL.BAT
@ECHO OFF
E:
CD \YARN\TEMP
SET HOME=E:\HOME
SET YARN=E:\YARN
SET NNTPSERVER=news.brisnet.org.au
souper95 -n mail.dyson.brisnet.org.au mraiteri password
SET COPYCMD=/Y
copy e:\online\areas*.* e:\yarn\temp
import -u
You will now be set. all future aborted email downloads as well as
normal downloads will automatically be imported. Works like a charm
here.
>2. Scripting - this isn't really a Souper question, I guess it's more
>of a Win95 question - under OS/2, I had some rather nice PPPDial
>scripts written that would automagically dial up my ISP, start a
>background Souper session to pull the mail/news in, import, close,
>cleanly kill the net connection, and restart BinkleyTerm.
Sorry I know absolutly nothing about scripting.
>
>Has anybody done anything like this in Win95? Are there any
>PPPDial-type or other Dialup Networking scripting utils I can get to
>accomplish the same thing? Anybody feel like posting theirs? <grin>
>
>Thanks in advance everybody.
>
>Vince
>vincew@sprintmail.com
>
Hope this partly helps.
Cheers
Mike
-- Internet: mraiteri@dyson.brisnet.org.au <Michael Raiteri> Brisbane, Queensland, Australia | http://www.vex.net/yarn/list/199705/0055.html | crawl-001 | refinedweb | 405 | 67.86 |
.
There are hundreds of thousands of mobile applications for nearly every purpose in the iOS or Android app stores. Usually they are created with Objective-C toolstacks for iOS devices and Java based for Android handsets. In this article we would like to show you two not so common ways to build native apps with Java and Xtend which help to share code between both worlds and simplify development.
Developing native iOS apps with Java and RoboVM
Mobile app developers targeting both Android and iOS face many challenges. When comparing the native development environments of these two platforms, i.e. the toolchains provided by Google and Apple respectively, one quickly finds that they differ substantially. Android development, as defined by Google, is based on the Eclipse IDE and the Java programming language. iOS development, according to Apple, on the other hand is based on the Xcode IDE and the Objective-C programming language.
These differences rule out any code reuse between the platforms. Also, not many developers are proficient in both environments. In the end almost every multi-platform app is developed using separate development teams and separate codebases for each platform.
RoboVM is a new open-source project with the ambition to solve this problem without compromising on neither developer nor app-user experience. The goal of the RoboVM project is to bring Java and other JVM languages, such as Scala, Clojure and Kotlin, to iOS devices. Unlike other similar tools, RoboVM doesn’t impose any restrictions on the Java platform features accessible to the developer, such as reflection or file I/O, and lets the developer reuse the vast ecosystem of Java 3rd party libraries. It is also unique in allowing the developer to access the full native iOS APIs through a Java to Objective-C bridge. This enables the development of apps with truly native UIs and with full hardware access, all from Java using tools familiar to Java developers such as Eclipse and Maven.
With RoboVM developing for both iOS and Android becomes less challenging; the same Java developers can build both versions of the app and a large part of the codebase can be shared.
How to get started
While RoboVM can be used in many ways, e.g. from the command line, or using Maven or Gradle, the quickest way to get started is probably using the RoboVM for Eclipse plugin.
Requirements
Before you install the RoboVM for Eclipse plugin make sure you have all the prerequisites:
- A Mac running Mac OS X 10.9.
- Oracle’s Java SE 7 JDK.
- Xcode 5.x from the Mac App Store.
Note that Eclipse MUST be run using Oracle’s Java SE 7 JDK or later. Apple’s Java 6 JVM will not work.
Install the RoboVM for Eclipse plugin
Once your system meets all the requirements installing the plugin is simply a matter of opening the Eclipse Marketplace from the Eclipse Help menu, searching for RoboVM and clicking Install Now.
Alternatively you can use the following update site:
Running a simple iOS app
We will now create a very simple iOS app. First of all create a new project by clicking File => New => Project.... Select the RoboVM iOS Project wizard in the list.
Enter IOSDemo as Project name, Main class and App name and enter org.robovm.IOSDemo as App id. Leave all other values at their defaults and click Finish.
Now let’s create a new class and call it IOSDemo with no package name. Copy and paste the code below into the newly created file replacing whatever Eclipse auto generated for you.
import org.robovm.apple.coregraphics.*; import org.robovm.apple.foundation.*; import org.robovm.apple.uikit.*; public class IOSDemo extends UIApplicationDelegateAdapter { private UIWindow window = null; private int clickCount = 0; @Override public boolean didFinishLaunching(UIApplication application, NSDictionary launchOptions) { final UIButton button = UIButton.create.colorLightGray()); window.addSubview(button); window.makeKeyAndVisible(); return true; } public static void main(String[] args) { NSAutoreleasePool pool = new NSAutoreleasePool(); UIApplication.main(args, null, IOSDemo.class); pool.close(); } }
Finally, launch the app in the iOS simulator by right clicking the project you created and select Run As... ? iOS Simulator App (iPhone). This will run the app on a simulated iPhone.
To run the app on an actual device you would use the Run As... ? iOS Device App choice instead. Please note that this requires that the device has been set up for development. The process for doing so is out of scope for this article. Please refer to Apple’s documentation for further information.
Creating an IPA for App Store distribution
Provided that you already have your App Store distribution certificate and provisioning profile all set up creating an IPA package for submittal to the App Store is as simple as right clicking the RoboVM iOS project in Eclipse, selecting RoboVM Tools ? Package for App Store/Ad-Hoc distribution… and filling in the dialog that appears.
This will create a .IPA file in the selected destination folder which can then be verified and submitted to the App Store using the Application Loader application. The Application Loader application can easily be located using Spotlight.
Apple has some great resources that describe how to enroll in the iOS developers program and create the certificates and provisioning profiles required for App Store distribution.
Under the hood
The bytecode compiler
At the heart of RoboVM is its ahead-of-time compiler. This is a tool that can be invoked either from the command line, from build tools such as Maven or Gradle or from an IDE. It takes Java bytecode and translates it into machine code for a specific operating system and CPU type. Usually this means iOS and the ARM processor type but RoboVM is also capable of generating code for Mac OS X and Linux running on x86 CPUs (32-bit).
The ahead-of-time approach is very different from how traditional JVMs, like Oracle’s Hotspot, usually work. Such JVMs typically read in Java bytecode at runtime and somehow execute the virtual machine instructions contained in the bytecode. To speed up this process the JVM employs a technique called just-in-time compilation. In simple terms this process translates the virtual machine instructions of a method to native machine code for the current physical CPU the first time the method is invoked by the program.
Due to technical restrictions that Apple has built into iOS just-in-time compilation of any kind is impossible in an iOS app. The only alternatives are to use an interpreter, which is too slow and power consuming, or use ahead-of-time compilation like in RoboVM. The ahead-of-time compilation process takes place at compile time on the developer machine so at runtime, on an iOS device, the generated machine code runs at full speed, comparable to or even faster than code compiled from Objective-C.
By consuming Java bytecode rather than Java source code the RoboVM ahead-of-time compiler can, at least in theory, be used with any JVM language that compiles down to bytecode. Scala, Clojure and Kotlin are JVM languages already known to work. Another benefit with this approach is that RoboVM can be used with 3rd party libraries in standard JAR files without any need for the original source code enabling the use of proprietary and closed-source libraries.
Incremental compilation
The first launch of a RoboVM app, even an app as simple as the IOSDemo app, takes some time. When compiling an app the RoboVM compiler starts with the app’s main class. It will then compile all classes needed by the main class and then the classes needed by those classes and so on until all classes needed by the app have been compiled. This process also compiles the standard runtime classes such as java.lang.Object and java.lang.String. This is a one-time thing only. RoboVM keeps a cache of compiled classes and only recompiles a class when it or any of its direct dependencies have changed.
The benefit of incremental compilation and caching of the object files is that it keeps down compile times. By only including the classes reachable from the main class it also keeps down the size of the produced executable. In some situations (e.g. when loading classes using reflection) the RoboVM compiler is unable to determine that a class should be compiled. Fortunately the compiler can be instructed to always include a particular class or even all classes matching a pattern.
Android-based runtime class library
Any JVM needs a runtime class library. This is the library which provides the standard packages and classes needed by any Java program such as java.lang.Object and java.lang.String. RoboVM takes its runtime class library from the Android open-source project and all non-Android specific packages have been ported over to RoboVM. This means that any Java or JVM language code that only uses classes in the standard packages provided on Android should work the same under RoboVM.
Current status
RoboVM is still a work in progress but already quite usable. Version 1.0 is scheduled to be released before the end of 2014.
There are already at least 50 apps in the App Store based on RoboVM. For an up to date list of known apps see this.
Around 50% of the iOS APIs are covered so far and can be used from within a RoboVM iOS app. The RoboVM wiki has a list of the current status of these bindings.
RoboVM is at this time known to be capable of running code written in Scala, Clojure and Kotlin.
RoboVM is currently poorly documented. The 1.0 release which is due later in 2014 will be fully documented.
There is currently no support for debugging RoboVM apps. This will be addressed later this year.
Limitations
RoboVM is restricted to loading classes which have been ahead-of-time compiled into the app. This means that bytecode cannot be created dynamically at runtime and loaded into the app using a custom classloader. This rules out using technologies which creates or modifies classes at runtime.
Further information
- The RoboVM web site has some more information about the project.
- The source code is located on GitHub.
- Apple’s Provisioning Your iOS Device for Development document describes how to setup a test device.
- Apple’s App Distribution Guide has more information about how to submit apps to the App Store.
Creating Android apps with Xtend
What is Xtend?
Xtend 1 is a statically typed programming language that compiles to readable Java source code. The language itself is a best of breed design with special focus on readability and powerful extendability but makes Java interoperability a no brainer. It advocates a functional programming style and features things like multiple dispatch, extension methods, lambda expressions and even compile-time macros. Unlike other Java-alternatives Xtend doesn’t come with its own huge standard library, but only adds a couple of extension methods to the standard JDK. Xtend also guarantees the absence of Java-interoperability problems and comes with excellent IDE support.
Why Java on Android sucks
Java code tends to be very verbose, especially on Android. The Android APIs are very low level and often insufficiently typed (int everywhere). Another annoyance is the ubiquitous use of XML files and their bindings. Java 8 doesn’t work on Android, so we have to read through anonymous classes all over the place. Unfortunately, Java just doesn’t allow to trim the code down to a readable essence, but forces us to clutter it with superfluous symbols, type information and boilerplate idioms.
Minimum Requirements for JVM Languages on Android
A Java alternative on Android must not add any runtime overhead, which basically excludes all kinds of dynamic languages. In addition you don’t want to have any kind of unnecessary indirections for converting types. E.g. the code should only use stock Java and Android types and shouldn’t require back and forth conversion to gap interoperability issues. This is not only a performance problem but also very annoying during debugging. Finally Android limits the number of methods per application to 65536. So if you think about using an alternative to Java you don’t want to add a large standard SDK to your apps, as this will decrease the number of useful methods significantly. The Groovy SDK for instance would add 8000 methods already.
Xtend - A Perfect Fit For Android?
Xtend translates to idiomatic Java source code and relies almost totally on JDK and Android classes. There is no indirection, conversion or whatsoever going on at runtime. This means that Xtend code runs as fast as Java source code. A reduced runtime lib for Android is available, that is only 275kb small and includes everything you need. The Eclipse Plug-in integrates nicely with the ADT (Android Development Tools) and there is even a Gradle plugin 2 that works well with Android’s new build system 3. So let’s see how Xtend can improve a typical Android code base.
Hello Android!
As always we start by looking at a simple example program - Hello World:
class HelloWorldActivity extends Activity { override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState) val button = new Button(this) button.text = "Say Hello!" button.onClickListener = [ Toast.makeText(context, "Hello Android from Xtend!", Toast.LENGTH_LONG).show ] val layout = new LinearLayout(this) layout.gravity = Gravity.CENTER layout.addView(button) contentView = layout } }
On a first glance it should look pretty familiar to Java developers, as the example uses a javaish programming style. Also note that we use 100% Android SDK and JDK APIs only.
The main differences are:
- no semicolons (they are optional)
- property access to call setters and getters
- proper default visibilities (e.g. classes are public by default)
- lambda expressions instead of anonymous classes.
There’s much more to discover but before digging deeper into language features, lets see how to integrate the Xtend compiler in a proper Android build.
Building With Gradle
Xtend comes with plug-ins for all three commonly used build systems: Maven, Gradle and Ant. Google has recently introduced the new build system for Android projects, which is based on Gradle, so let’s see what is needed to build our “Hello World” project with it.
It is assumed that you have installed a recent version of Gradle and the Android SDK on your system and that you have properly set the ANDROID_HOME environment variable. You should also have added Gradle’s /bin to your PATH variable.
The build script ‘build.gradle’ should be put into the root folder of your Eclipse Android project. It looks like this:
buildscript { repositories { mavenCentral() } dependencies { classpath 'com.android.tools.build:gradle:0.8.+' classpath 'org.xtend:xtend-gradle-plugin:0.1.+' } } apply plugin: 'android' apply plugin: 'xtend-android' repositories { mavenCentral() } dependencies { compile ('org.eclipse.xtend:org.eclipse.xtend.lib:2.6.+') } android { compileSdkVersion 19 buildToolsVersion "19.1.0" sourceSets { main { manifest { srcFile 'AndroidManifest.xml' } java { srcDir 'src' } res { srcDir 'res' } assets { srcDir 'assets' } resources { srcDir 'src' } aidl { srcDir 'src' } } } }
It mainly imports the Maven and Xtend build plug-in and calls them. In addition, we add the runtime library and tell the android plug-in that we are using an Eclipse project-layout. With that in place go to the root folder of your project on the command line and run ‘gradle build’. It will do the rest.
More On Xtend
Despite the syntactic sugar, Xtend comes with some very useful language features such as operator overloading, template expressions and a powerful switch expression. By combining different features one can create new features. If you for instance need to build a dynamic UI you cannot use static XML but need to write the UI declaratively. Xtend allows you to use a builder syntax. This is what the simple “Hello World” UI could look like:
import static extension com.example.helloworld.UiBuilder.* class HelloWorldActivity extends Activity { override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState) contentView = linearLayout [ gravity = Gravity.CENTER addButton("Say Hello!") [ onClickListener = [ Toast.makeText(context, "Hello Android from Xtend!", Toast.LENGTH_LONG).show ] ] ] } }
The two methods linearLayout(Context ctx, (LinearLayout)=>void initializer) and button(ViewGroup group, String name, (Button)=>void initializer) are imported as extensions on Activity. They take a lambda (block of code in squared brackets) as an argument. The single parameter passed into those lambdas is called the implicit it, which similarly to this doesn’t need to be dereferenced explicitly. As you see a combination of lambdas, extension methods and the use of the implicit it makes for a very nice builder syntax. Many other nice looking APIs can be built with Xtend that allow for expressing your code in a readable and declarative way.
Say Hello From XML Hell!
The day to day job of an Android developer involves a lot of configuration and development in several XML files, being it resources for internationalized strings or simply the declaration of views. On Android it is advised to use XML, as there are proper solutions for overcoming the large device and SDK fragmentation that this platform has to deal with. However at the end of the day an application is not only made up of static views and data. We developers need to bind all of that stuff together and put some life into it. On Android you do that by using the R-class. It is a generated class containing int-constants for the various elements declared in XML files. Imagine the following two elements declared in a view XML, where the Button should be clicked in order to update the message in the TextView:
<TextView android: <Button android: </Button>
The typical Android-way would be to use the generated constants from the R class to get a handle of the TextView and implement a method for the onClick callback called “sayHello”:
class HelloWorldActivity extends Activity { TextView messageView override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState) // set the view using the int constant contentView = R.layout.main // get a handle on the TextView messageView = findViewById(R.id.message_view) as TextView } /** * Callback automagically called by Android */ def void sayHello(View v) { messageView.text = "Hello Android from Xtend!" } }
This is typical Android code including unsafe casts and naming conventions as well as a lot of boilerplate. With Xtend we can do much better.
Hello Xtendroid!
Xtendroid 4 is a small project that adds some library and so called active annotations specifically for Android development. An active annotation is basically a compile-time macro, that allows to participate in the compilation from Xtend to Java. You can freely mutate the annotated classes, generate additional types or read and write plain text files using this hook.
So what if we had an annotation that knows which view we want to bind to and that can generate the boilerplate for us? Moreover it could provide us type safe accessors to elements and callback methods. Here’s above’s activity using the @AndroidActivity annotation from Xtendroid.
@AndroidActivity(R.layout.main) class HelloWorldActivity { /** * Type safe callback */ override void sayHello(View v) { messageView.text = "Hello Android from Xtend!" } }
Now the activity only contains the behavior we wanted to add. All the plumbing for binding and the other boilerplate like setting the content view or extending Activity is done automatically. Also note that everything is now type safe and the IDE knows what’s going on, i.e. you get proper completion proposals etc..
Xtendroid has additional goodies for working with JSON objects, resource files or SQLite databases. But note that active annotations are just library, so you can easily build your own or customize existing ones as it seems fit.
If you want to try it out yourself, just download an Eclipse from 1 and install ADT using the update site 5. The Xtendroid project contains many examples including the ones we saw in this article. Have fun!
1 Eclipse Xtend
2 Xtend Gradle Plug-in
3 Android Gradle Plug-in
4 Xtendroid
5 ADT update site
About the Authors
Niklas Therning is the founder of the RoboVM open-source project and co-founder of Trillian Mobile AB - the main contributor to the RoboVM project. He has made it his mission to bring Java to iOS and do it properly. Before starting RoboVM Niklas co-founded the SpamDrain anti-spam service and worked as a contractor, mostly doing Java EE and web app development. Niklas holds a Master of Science degree in Computer Science from Chalmers University of Technology, Gothenburg, Sweden. Twitter @robovm.
Sven Efftinge is a passionate software developer who loves kite surfing, music and good food. He's the project lead of Xtext, a framework for developing programming languages and domain-specific languages and Eclipse Xtend, a statically-typed programming language for the JVM. Sven leads a research department for itemis in Kiel..
Community comments | https://www.infoq.com/articles/unusual-ways-to-create-a-mobile-app/ | CC-MAIN-2019-30 | refinedweb | 3,470 | 54.73 |
install 1.5.2 at work, where we have Solaris 7.
I'd like to play with Tkinter. After building it the first time, I
discovered that I had built it without Tk support (from Tkinter import
*; Button().pack() explained this to me.)
So I read README and edited Modules/Setup. The Tkinter section of my
Modules/Setup is as follows:
# *** Always uncomment this (leave the leading underscore in!):
_tkinter _tkinter.c tkappinit.c -DWITH_APPINIT \
# *** Uncomment and edit to reflect where your Tcl/Tk headers are:
-I/var/tk/v8.0p2/include -I/var/tcl/v8.0p24.1.8.0 \
# *** Uncomment and edit for BLT extension only:
# -DWITH_BLT -I/usr/local/blt/blt8.0-unoff/include -lBLT8.0 \
# *** Uncomment and edit for PIL (TkImaging) extension only:
# -DWITH_PIL -I../Extensions/Imaging/libImaging tkImaging.c \
# *** Uncomment and edit for TOGL extension only:
# -DWITH_TOGL togl.c \
# *** Uncomment and edit to reflect where your Tcl/Tk libraries are:
-L/var/tk/v8.0p2/lib -L/var/tcl/v8.0p2/lib \
# *** Uncomment and edit to reflect your Tcl/Tk versions:
-ltk8.0 -ltcl8
The compile and link went successfully, but python -c 'from Tkinter
import *; Button().pack()' gives the following traceback:
Traceback (innermost last):
File "<string>", line 1, in ?
File "/var/u/sittler/lib/python1.5/lib-tk/Tkinter.py", line 1123, in __init__
Widget.__init__(self, master, 'button', cnf, kw)
File "/var/u/sittler/lib/python1.5/lib-tk/Tkinter.py", line 1078, in __init__
BaseWidget._setup(self, master, cnf)
File "/var/u/sittler/lib/python1.5/lib-tk/Tkinter.py", line 1055, in _setup
_default_root = Tk()
File "/var/u/sittler/lib/python1.5/lib-tk/Tkinter.py", line 886, in __init__
self.tk = _tkinter.create(screenName, baseName, className)
TclError: Can't find a usable init.tcl in the following directories:
/usr/local/lib/tcl8.0 ./lib/tcl8.0 ./tcl8.0/library ./library
This probably means that Tcl wasn't installed properly.
Now, tclsh8.0 and wish8.0 from /var/tk/v8.0p2/bin and
/var/tcl/v8.0p2/bin work fine (albeit quite slowly due to running from
NFS over a fractional T1).
gdb tells me Tcl_AppInit in tkappinit.c is calling Tcl_Init, which is
returning TCL_ERROR. Tcl_Init is defined in tclUnixInit.c inside of
libtcl.
Is there anything I can do, other than building my own libtcl? Do I
need to rebuild libtk too?
--
<kragen at pobox.com> Kragen Sitaker <>
The Internet stock bubble didn't burst on 1999-11-08. Hurrah!
<URL:>
The power didn't go out on 2000-01-01 either. :)
<kragen at pobox.com> Kragen Sitaker <>
The Internet stock bubble didn't burst on 1999-11-08. Hurrah!
<URL:>
The power didn't go out on 2000-01-01 either. :) | https://grokbase.com/t/python/python-list/0039jmbtmr/building-python-with-tkinter | CC-MAIN-2022-33 | refinedweb | 452 | 54.59 |
Introduction
This article describes a simplified approach to allowing communication between forms without the use of events and delegates. The approach demonstrated is adequate if the application uses a single instance of a form and allows the creation of a single instance of a dialog used to pass data to the main calling form. If you need to broadcast the data generated by a dialog to multiple form listeners, you will still need to work with events and delegates; this approach is only valid if there is a one on one relationship between the two interactive forms.Figure 1. Data entered in the Right Form immediately appears in the Left Form
Getting Started.
There is a single Win Forms application included with this download. You may open the project and examine the files and code if you wish; however, the code is quite simple and will be described in the following sections of this document. All examples were written using Visual Studio 2005 and are in C#; the same code could also be used in earlier versions of Visual Studio.
In general the application contains only two forms. Form 1 opens with the application and it contains five label controls which act as targets for data entered from Form 2. Form 1 is titled, "Originator" and Form 2 is labeled, "Data Entry".
Code: Form 1.
Form 1 contains five labels and a button. The five labels carry default text whenever an instance for Form 1 initially created. The labels are set to display "Name", "Street", "City", "State", and "Zip Code". There is also a single button contained on the form. Whenever this button is clicked, a new instance of Form 2 is created. Form 2 has an overloaded constructor and it accepts an object of type Form 1 as an argument. By passing the current Form 1 to instances of Form 2 through the constructor, it is possible to communication directly between the forms without adding events and delegates.
The entire body of code contained in Form 1 is as follows:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
namespace LimitedDataXfer
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void btnOpenForm_Click(object sender, EventArgs e)
Form2 f = new Form2(this);
f.Show();
}
}
Looking over the code you will not that nothing but the default imports are included in the definition of the form class. The class definition is also in the default configuration. The only code added to the class is that used to handle the single buttons click event. In that click event handler, a new instance of Form 2 is created and it is pass "this" as an argument. After the current Form 1 is passed to the form 2 constructor as an argument, the Form 2 instance is displayed to the user.
By passing the current Form 1 object to Form 2 in this manner, Form 2 may directly access any property in Form 1 instance to include each of the labels that will be populated through Form 2. This will allow communication between the two forms without the definition an delegates or events used to handle that communication.
Code: Form 2.
The code behind Form 2 is also very simple. This form also contains only the default imports and the class declaration is also per the default configuration. The form does contain an overloaded constructor which is configured to accept an object of type Form 1 as an argument. If a Form 1 object is passed to the constructor, a local object of type Form 1 will be set to point to this passed in form.
The entire body of code contained in this class is as follows:
public partial class Form2 : Form
Form1 f;
public Form2()
InitializeComponent();
public Form2(Form1 fr1)
f = new Form1();
f = fr1;
private void Form2_Load(object sender, EventArgs e)
private void textBox1_TextChanged(object sender, EventArgs e)
f.lblName.Text = textBox1.Text;
private void textBox2_TextChanged(object sender, EventArgs e)
f.lblStreet.Text = textBox2.Text;
private void textBox3_TextChanged(object sender, EventArgs e)
f.lblCity.Text = textBox3.Text;
private void textBox4_TextChanged(object sender, EventArgs e)
f.lblState.Text = textBox4.Text;
private void textBox5_TextChanged(object sender, EventArgs e)
f.lblZip.Text = textBox5.Text;
In the code above, you will note that the overloaded constructor is configured to accept the Form 1 object as an argument and you can see that the local Form 1 object is set to this passed in Form 1 object. The remainder of the code is merely used to handle the text changed event for each of the text boxes contained in the Form 2 object. Whenever the text in any of the text boxes is changed, the current value of that text box is immediately passed to the appropriate label control in the Form 1 object.
When the application is run and the Form 2 object is created, typing into any of the text boxes will immediately update the Form 1 object's applicable label control.
Summary.
View All | http://www.c-sharpcorner.com/article/passing-data-between-forms-without-events-and-delegates/ | CC-MAIN-2017-22 | refinedweb | 844 | 53.61 |
Here the challenge of Java End-of-file HackerRank Problem is to read n lines of input until you reach EOF, then number and print all lines of content.
EOF represents — end-of-file.
To solve Java End-of-file HackerRank Problem we take a hint like this-
Hint: Java’s Scanner.hasNext() method is helpful for this problem.
HERE– hasNext() checks if there is a string remaining in the Scanner; and returns false only when nothing is left there, which is equivalent to EOF.
Input Format for Java End-of-file HackerRank Problem –
Read some unknown lines of input from stdin(System.in) until you reach EOF; each line of input contains a non-empty String.
System.in is an InputStream which is typically connected to keyboard input of console programs.
Output Format
For each line, print the line number, followed by a single space, and then the line content received as input.
Input Example
Hello world I am a file Read me until end-of-file.
Output Example
1 Hello world 2 I am a file 3 Read me until end-of-file.
How To Solve Java End-of-file HackerRank Problem –
- for count the line number use line_num as integer type.
- for EOF use hasNext() method.
- system.in is used for input operation.
THE CODE :
import java.io.*; import java.util.*; import java.text.*; import java.math.*; import java.util.regex.*; public class Solution { public static void main(String[] args) { Scanner sc = new Scanner(System.in); int line_num = 0; while(sc.hasNext()) System.out.println(++line_num + " " + sc.nextLine()); sc.close(); } } | https://coderinme.com/java-end-of-file-hackerrank-problem-coder-in-me/ | CC-MAIN-2018-39 | refinedweb | 263 | 57.98 |
CollectionDifferenceWritten by Jordan Morgan • Jan 29th, 2020
In the not so distant past, it was a foregone conclusion that developers would eventually fall back to the jackhammer when it came to table or collection views:
reloadData.
The reasons why were simple:
1) Getting a diff of what’s changed in your data was hard, and
2) Mapping that with the right index paths was even harder.
But the payoff was always worth it, a buttery smooth batch reload in your interface. And hey - you can’t make an omelet without crackin’ a few eggs.
Fast forward to today, and we can thankfully say that WWDC 2019 mercifully addressed both pain points. Today, let’s take a look at
CollectionDifference, a lightweight way to calculate the once elusive diff mentioned in reason #1 above.
The Little Struct That Could
CollectionDifference arrived in Swift 5.1 by way of SE-0240. Authors Scott Perry and Kyle Macomber wanted a way to “provide an interchange format for diffs as well as diffing/patching functionality for appropriate collection types.”
Perhaps the most telling part of their proposal, though, is where they state the following:
“Representing, manufacturing, and applying transactions between states today requires writing a lot of error-prone code.”
You don’t say.
Thankfully, they took the problem to task and what we arrive at is
CollectionDifference - a struct that houses insertions and removals that describe the delta between two ordered collections:
struct CollectionDifference<ChangeElement>
Perhaps the highest compliment I can extend it is that the API is easy on the eyes (uncommon for diffing libraries). As we’ll see, it’s typically a one or two line affair to get a diff and apply it, context depending.
Keep in mind this diffing capability is for ordered collections only. In Swift, this is any collection conforming
BidirectionalCollection.
Performance-wise, the worst you can expect is O(n * m) - where n represents the count of the first collection, and m the other. You do have some influence here. If your elements conform to
Hashable (and why the heck wouldn’t they - we got diffable data source this year which requires it) or the collection share many common elements, expect the diff to perform better.
Either way, since Swift is an ever-mutating project, the diffing performance has already been improved from its first incarnation by utilizing the Myers algorithm.
Diffin’
As an API consumer, if one simply needs to diff something and move about their day, then there are two essential functions to know about which are invoked from the collections themselves:
difference(from:)
and
applying(_)
One to generate a diff (giving us a
CollectionDifference) and one to get the result of the diff by passing it in as a parameter:
let firstDraft = "It was the best of times..." let secondDraft = "It was the worst of times..." let diff = secondDraft.difference(from:firstDraft) let finalDraft = firstDraft.applying(diff) // "It was the worst of times..." // Or, reverse that let diff = firstDraft.difference(from:secondDraft) let finalDraft = secondDraft.applying(diff) // "It was the best of times..."
Also note that if you need to finely tune the diff, you can also supply a closure to return a boolean based on your own equality standards:
let foo = [1,2,3] let bar = [1,2,3,4,5,6] let diff = bar.difference(from: foo) { oldNum, newNum in return (oldNum + newNum) % 2 == 0 }
The flow is identifying what you want to compare, and then getting the results of the diff into a data structure to operate on. If that’s all you need from
CollectionDifference, then you can hang it up and call it a day. For the curious among us, let’s look a little deeper.
Change Enum
A
CollectionDifference houses changes as represented by the
Change enum. And, since Swift’s enums are drunk with power, they house three important parts of the diff:
1) An
offset Int.
2) The
element itself.
3) An optional Int,
associatedWith, that helps you track moves.
The last one is both interesting and important. In the diff, if it moved an existing element - that’s actually a two-step dance. It’s first a removal, and then an insertion. What
associatedWith does it track the relationship between the two. This opens up some very nice UIKit-y scenarios.
This, however, requires a bit more work from a performance standpoint - thus the optional Int. We don’t get very many free lunches in programming, and doubly so when it comes to diffing. So, if we want the associations, we ask for them by invoking
inferringMoves.
For example, notice the association (represented by
move) is nil in the following print statements:
let foo = ["A", "B", "D"] let bar = ["B", "A", "D"] let diff = bar.difference(from: foo) for update in diff { switch update { case .remove(let offset, let letter, let move): print("Removed \(letter) at idx \(offset) and moved to \(String(describing: move))") case .insert(let offset, let letter, let move): print("Inserted \(letter) at idx \(offset) from \(String(describing: move))") } } /* Prints Removed A at idx 0 and moved to nil Inserted A at idx 1 from nil */ let baz = foo.applying(diff) // ["Z", "A", "C"]
The diff simply tells us that “A” at index 0 was removed, and “A” was inserted at index 1. But it doesn’t tell us about any potential moves, just the end result. This makes sense because we’re left with the true, and accurate, diff - so from an API perspective we shouldn’t opt in to that extra work if it’s not needed.
If we do need it, notice how we get the associations by way of
inferringMoves. Consider the exact code above, just with one changes in the for-loop:
for update in diff.inferringMoves { /* code */ } /* Now prints Moved A at idx 0 and moved to 1 Inserted A at idx 1 from 0 */
Now, we can safely program against the moves.
Applications
While playing around with diffing, I toyed with a few applications for UIKit.
Batch Updates
If you’re unable to move to diffable data source, or you’re just a complete glutton for pain - you can reasonably backport a diffing function with a little legwork for table and collection views. Since we know a non-nil association represents a move, we can map these over to index paths.
For a single section table view, something like this works to produce a batch update:
var deletes:[IndexPath] = [] var inserts:[IndexPath] = [] var moves:[(from:IndexPath, to:IndexPath)] = [] for update in diff.inferringMoves() { switch update { case .remove(let offset, let element, let move): if let m = move { moves.append((IndexPath(row: offset, section: 0), IndexPath(row: m, section: 0))) } else { deletes.append(IndexPath(row: offset, section: 0)) } case .insert(let offset, let element, let move): // If there's no move, it's a true insertion and not the result of a move. if move == nil { inserts.append(IndexPath(row: offset, section: 0)) } } } self.tableView.performBatchUpdates({ self.myData = self.myData.applying(diff) ?? [] self.tableView.deleteRows(at: deletes, with: .left) self.tableView.insertRows(at: inserts, with: .right) moves.forEach { move in self.tableView.moveRow(at: move.from, to: move.to) } }, completion: nil)
Trying that out on a little demo app, sure enough - I was treated to batch reloads. This process was painless compared to the hoops you had to ceremoniously jump through before, and then crash on edges cases while devolving back into our burn-it-all-down ways of
reloadData:
Fresh Interfaces
Another way to give your interface a dash of that je ne sais quoi is to accurately represent the changes occuring with interface data. Think of an inbox type scenario where the user has seen X items, but Y items just came in from a network hit:
let currentItems = [1,2,3] let newItems = [1,2,3,4,5,6] let diff = newItems.difference(from: currentItems) let newCount = diff.insertions.count print("\(newCount) new items.") // 3 new items label.text = "\(newCount) new items to view."
Final Thoughts
Swift continues to benefit from a lot of talented engineers lending their handy work to the language. There is no denying that Cupertino & Friends’© open-source initiative has led to brilliant work from engineers outside their walls to be enjoyed by the masses.
CollectionDifference is a textbook example.
Now, go forth and serve up diffs with a newfound level of equanimity as you do so.
Until next time ✌️. | https://www.swiftjectivec.com/collectiondifference/ | CC-MAIN-2020-10 | refinedweb | 1,403 | 63.59 |
Next: Local Namespace Details, Up: Local Namespace
In the local namespace socket addresses are file names. You can specify any file name you want as the address of the socket, but you must have write permission on the directory containing it. It's common to put these files in the /tmp directory.
One peculiarity of the local namespace is that the name is only used when opening the connection; once open local namespace, you should delete the
file name from the file system. Use
unlink or
remove to
do this; see Deleting Files.
The local namespace supports just one protocol for any communication
style; it is protocol number
0. | http://www.gnu.org/savannah-checkouts/gnu/libc/manual/html_node/Local-Namespace-Concepts.html#Local-Namespace-Concepts | crawl-003 | refinedweb | 109 | 61.67 |
Java has Two types of streams- Byte & Characters. For reading and writing binary data, byte stream is incorporated. The InputStream abstract class is used to read data from a file. The FileInputStream is the subclass of the InputStream abstract class. The FileInputStream is used to read data from a file.
The read() method of an InputStream returns an int which contains the byte value of the byte read. If there is no more data left in stream to read, the read() method returns -1 and it can be closed after this. One thing to note here, the value -1 is an int not a byte value.
Given below example will give you a clear idea :
The content in the Devfile.txt :
The text shown here will write to a file after run
FileInStreamDemo.java
import java.io.*; public class FileInStreamDemo { public static void main(String[] args) { // create file object File file = new File("DevFile.txt"); int ch; StringBuffer strContent = new StringBuffer(""); FileInputStream fin = null; try { fin = new FileInputStream(file); while ((ch = fin.read()) != -1) strContent.append((char) ch); fin.close(); } catch (FileNotFoundException e) { System.out.println("File " + file.getAbsolutePath() + " could not be found on filesystem"); } catch (IOException ioe) { System.out.println("Exception while reading the file" + ioe); } System.out.println("File contents :"); System.out.println(strContent); } }
The command prompt will show you the following data :
If you enjoyed this post then why not add us on Google+? Add us to your Circles
Liked it! Share this Tutorial
Discuss: Java FileInputStream Example
Post your Comment | http://www.roseindia.net/java/example/java/io/FileInputStream.shtml | CC-MAIN-2015-11 | refinedweb | 254 | 60.31 |
I have two queries..
1. i want to add onClick event in if else condition.. can u provide a sample code.
2. If i do some click event in page 1 of wix website.. i want to make changes in a container box which is in different page .. how to achieve this functionality. please guide me.
1.
@chris..
i tried the above code and its not working..i also need to make changes in my container box which is in different page..is there anyway that we can handle an element from different page..
Pushkin,
What exactly wasn't working?
For making changes that affect other pages, you will have to persist those changes somewhere, like a database, file or main memory. The most accessible location would be your website cookies. You can make a change like this.
Then, on page B where you're checking for the new change, you can place the code in the onReady function.
@chris
Thanks for your reply. Actually i am having a container box in page 2. I have not stored this container box in any database because i could not find any way to store container box in wix database.
Now in my page 1 i have a button. if i click that button the color of the container box in page 2 should change. here the container box is available only in page 2 and not in page 1. I want to control my container box properties which is available in page 2 from a button in page 1.
I will try out your above code.
Thanks.
@chris..
please see my below code..let me know if i am missing something.
code on my page A to click the button and initiate color change.
import wixUsers from 'wix-users'; import wixData from 'wix-data'; import wixLocation from 'wix-location';
import {session} from 'wix-storage';
$w.onReady(function () {
let box3 = { colour:"red", }
let submit = $w('#submit');
if (userEmail === 'abc@bridgestone.eu' || userEmail === 'def@live.com' ) {
submit.onClick ( (event, $w) =>{ $w('#box122').hide(); session.setItem("colour","green"); })
}
});
now in page B i have a container box where i m trying to push the changes stored in session.
import {session} from 'wix-storage';
$w.onReady(function () { //TODO: write your page related code here... $w("#box3").style.backgroundColor= session.setItem("colour","green");
});
now when i am clicking the button submit in page A, it is nit changing the color of container box in page B..
Please guide me if i am missing something..
Thanks.
only one small bug that I see here,
In the second page you should write
$w("#box3").style.backgroundColor = session.getItem("color")
@Moshe,
I tried with your correction still no effect.. If possible can u provide me a sample code for container box for above requirement
Thanks!.
@Chris Derrell
I was able to achieve my requirement. Really thank you for your sample code.
I am facing one small issue though!!.. If any user is changing the color then when another users are logging in they can not see the colour change. I guess since i am storing the value temporarily in wix-session so it is not showing for other log in users.. can u let me know that if any way is there where the changes made by one user can be seen to all the other users.
Thank you.
Aha! @ Pushkin, now you go into the realm of using the wix Database for persisted storage.
Before I answer, are you thinking immediate updates? Meaning as soon as I change the colour in my session here in Jamaica, it changes for you where you are in realtime?
Or is it that when you refresh the page, you'd see the change?
@Chris_Derrell
Let me refresh my requirement again. For my wix site i have given login functionality. Suppose we both are users for my site. Suppose You have made change in colour of a container box. when i will open the same site with my logging credentials i should see the changes made by you. Similarly if i make any changes to another container box you should see the change in colour. There is no refresh functionality for now. If any changes made then i have to close the site and open it again to see the latest changes made by any user.
now since you have mentioned about immediate updates, is it possible to do?
Please if you can provide any solution for my requirement.
Thanks!!.
@Pushkin,
Do you have a set list of items which are supposed to be tracked? Or is it everything?
@chris
No i dont have any data set or item list..this is straight container box .
Ok Great, I'm working on a little example for you. I'll try to post by tomorrow this time
Haven't forgotten you, having an issue with my Wix forums not loading.
Hi Chris. No problem buddy i faced the same something like bad gateway.. if you got any solution for my issue you can mail me at pushkinsngh@live.com
Thanks.!
@Chris Derrell
Just wanted to know if you have got any solution for my issue. If you can help me by today it will be of great help.
Thanks!.
Working on something here, check out your page here.
It's not ready yet though, need to persist the changes.
Hey @Pushkin Singh , the working page is live here. I hope it's still in time for your original timeline.
The Code (~97 lines including comments)
Make sure to set your database read, write and update permissions to anyone.
@Chris Derrell
Sorry was out of town for my project implementation. BTW thank you so much for your guidance and your support. I tried your logic and it works perfectly.
One word to explain... you are awesome!!!!!
I will post my final code and the requirement which I achieved by using your logic in the same forum post shortly.
Thanks again buddy!!!! | https://www.wix.com/corvid/forum/community-discussion/custom-functions | CC-MAIN-2019-47 | refinedweb | 995 | 84.78 |
fcntl(2) BSD System Calls Manual fcntl(2)
NAME
fcntl -- file control
SYNOPSIS
#include <fcntl.h> int fcntl(int fildes, int cmd, ...);
DESCRIPTION
Fcntl() provides for control over descriptors. The argument fildes is a descriptor to be operated on by cmd as follows: F_DUPFD Return a new descriptor as follows: o Lowest numbered available descriptor greater than or equal to arg. so that the descriptor remains open across an execv(2) system call. F_DUPFD_CLOEXEC Like F_DUPFD, except that the close-on-exec flag asso-_GETPATH Get the path of the file descriptor Fildes. The argu- ment must be a buffer of size MAXPATHLEN or greater. F_PREALLOCATE Preallocate file storage space. Note: upon success, the space that is allocated can be the same size or larger than the space requested. F_SETSIZE Truncate a file without zeroing space. The calling process must have root privileges. F_RDADVISE Issue an advisory read async with no copy to user. F_RDAHEAD Turn read ahead off/on. A zero value in arg disables read ahead. A non-zero value in arg turns read ahead on. F_READBOOTSTRAP Read bootstrap from disk. F_WRITEBOOTSTRAP Write bootstrap on disk. The calling process must have root privileges. F_NOCACHE Turns data caching off/on. A non-zero value in arg turns data caching off. A value of zero in arg turns data caching on. F_LOG2PHYS Get disk device information. Currently this only includes the disk device address that corresponds to the current file offset.- matically closed in the successor process image when one of the execv(2) or posix_spawn(2) family of system calls is invoked. to the O_APPEND flag of open(2). speci- fiedame(3) to retrieve a record, the lock will be lost because getpwname(3) opens, reads, and closes the password database. The data- base fcntl(2) locks may be safely used con- currently. F_PREALLOCATE command operates on the following structure: typedef struct fstore { u_int32_t fst_flags; /* IN: flags word */ int fst_posmode; /* IN: indicates offset field */ off_t fst_offset; /* IN: start of the region */ off_t fst_length; /* IN: size of the region */- mands are directly analogous, and fully interoperate with the SO_NOSIGPIPE option of setsockopt(2) and getsockopt(2) respectively.
RETURN VALUES
Upon successful completion, the value returned depends on cmd as follows: F_DUPFD.
ERRORSACCESS] The argument cmd is either F_SETSIZE or F_WRITEBOOTSTRAP and the calling process does not have root privileges. writ- ing. The argument cmd is F_PREALLOCATE and the calling process does not have file write permission. The argument cmd is F_LOG2PHYS or F_LOG2PHYS_EXT and fildes is not a valid file descriptor open for read-(2)). The argument cmd is F_GETLK, F_SETLK, or F_SETLKW and the data to which arg points is not valid, or f] Cmd is F_DUPFD and the maximum allowed number of file descriptors are currently open. [EMFILE] The argument cmd is F_DUPEDOVERFLOW] A return value would overflow its representation. For example, cmd is F_GETLK, F_SETLK, or F_SETLKW and the smallest (or, if l_len is non-zero, the largest) off- set of a byte in the requested segment will not fit in an object of type off_t. [ESRCH] Cmd is F_SETOWN and the process ID given as argument is not in use.
SEE ALSO
close(2), execve(2), flock(2), getdtablesize(2), open(2), pipe(2), socket(2), setsockopt(2), sigaction(3)
HISTORY
The fcntl() function call appeared in 4.2BSD. 4.2 Berkeley Distribution February 17, 2011 4.2 Berkeley Distribution
Mac OS X 10.9.1 - Generated Sun Jan 5 20:33:10 CST 2014 | http://www.manpagez.com/man/2/fcntl/ | CC-MAIN-2014-15 | refinedweb | 579 | 66.23 |
Using the jQuery Transform plug-in(XSLT)
Using the jQuery Transform plug-in(XSLT)
Using the jQuery Transform plug-in(XSLT....
The AJAXSLT gives the power for transforming XML/XSL from JavaScript
Downloading and installing jQuery Transform plug-in
Downloading and installing jQuery Transform plug... jQuery Transform plug-in
You can download it from the below link... with the distribution.
To download plugin + xml + xsl click here
Learn
Java Xml Transform
Java Xml Transform
... information required to transform XML into
other
forms like HTML, WML, XML.
Package
Description
javax.xml.transform
Defines
Error-
Error- Hello, I would like to know about XSD file.
I try to print XML file but I am getting error SAXException-- says
Content is not allowed in prolog.
Please help me
error
HelloWorld
Deployment Error for module: HelloWorld: Error occurred during... [HelloWorld]. TargetNamespace.1 : Espace de noms "'.. Please see server.log for more details.
please help me
Transforming XML with SAXFilters
Transforming XML with SAXFilters
This Example shows you how to Transform XML with
SAXFilters. JAXP (Java API for XML Processing) is an interface which provides
parsing of xml
deployment error - XML
getting the following error after i deployed my war file on tomcat.I suspecting my... web.xml entries to support spring and site mesh?? here is the error on my console and my web.xml .........
--------------------------------------
SEVERE: Error
XML error message: The reference to entity
XML error message: The reference to entity XML error message: The reference to entity "ai" must end with the ';' delimiter.
Im getting this error when i gotta edit my blogger template
Please advice
manjunath
xml - XML
xml hi
convert xml file to xml string in java ,after getting the xml string result and how to load xml string result?
this my problem
pls help...");
FileOutputStream output = new FileOutputStream("new.xml");
ReadXML xml = new
Transforming XML with XSLT
Transforming XML with XSLT
This Example shows you how to Transform XML with the XSLT in a DOM document. JAXP (Java
API for XML Processing) is an interface which provides
XML DOM error - Java Beginners
XML DOM error import org.w3c.dom.*;
import javax.xml.parsers....("xml Document Contains " + nodes.getLength() + " elements.");
}
else...);
}
}
}
the above program "To Count XML Element" was copied from this website (roseindia.net
Transforming an XML File with XSL
Transforming an XML File with XSL
This Example gives you a way to transform an XML File with XSL in a DOM document. JAXP (Java
API for XML Processing) is an interface which
XML into HTML
XML into HTML Hi....
please anyone tell me
What are the steps to transform XML into HTML using XSL?
Thanks
Error after restart server... - XML
Error after restart server... Hi,
Im using Oracle JDeveloper 10g and Oracle SQL database..
Yesterday, the server team said they want to restart... app run perfectly...
i got this error when i try to run my app:-
10/05/14
XML Error Checking and Locating
XML Error Checking and Locating
... in the XML document and error position.
Description of program:
The following program helps you in checking and
locating an error in XML document
XML load problem - XML
XML load problem I have load the xml document in javascript.getting error like ActiveXObject is undefined.Can you help me? Hi friend... to :
Thanks
New Document
XML validation
XML validation Hi,
I want to validate the XML file. If there is any blank or white space in the XML file, It should throw error message
xml and xsd - XML
xml and xsd
50007812
2005-03-09T17:05:59... to use.i want to use in my local system and validate xml..kindly reply soon.if...);
factory.setAttribute("
XML Error checker and locater (DOM)
XML Error checker and locater (DOM)
... and locate
(line and column number) an error in your XML document using the DOM
APIs. The XML document follows some rules to check its syntax.
XML in database - XML
XML in database Hi Deepak,
i m again facing problem with single element multiple tag in xml.
i m trying to read the tag values into my... this xml into my db(which is in mysql) i write the following code.
import
How to create XML file - XML
How to create XML file Creating a XML file, need an example. Thanks!! To create XML file you need plain text editor like note pad. After creating your file save it with .xml extension. Hi,Java programming
error log
error log hi
my requirement is validate xml and xsd in java.If there is an errors then i will log error and
store into error table. so plz if any one knows send code urgent.
error table details--------------
Table
XSLT
Transformations
Language and it is used to transform the XML data into HTML, XHTML, WML... website you
can use the same XML data and the generate different version
JSTL- xml tag library - XML
JSTL- xml tag library i was trying to try jstl-xml tags examples... an error on browser :" type Exception report
descriptionThe server encountered an internal error () that prevented it from fulfilling this request
web.xml - XML
to create like this in editplus.. and enter ctrl + B
It shows some error..
----
The XML page cannot be displayed
Cannot view XML input using style sheet. Please correct the error and then click the Refresh button, or try again later
readXML - XML
readXML hiii i face problems in reading a xml file... the path1,path2,.. from it.. i use the DOM parser and get error messages like... u parse.
Thanks''
Rajanikant
just check ur xml file it is wrong
XML- read Text Mode - Java Error in Windows - reg. - Java Beginners
XML- read Text Mode - Java Error in Windows - reg. Dear All
I'm creating the code as read the XML file in BufferedReader method. Here I pasted the code below:.
(*) First read the XML file using FileInputStream
java - XML
java how can i validate my xml file using java code plz send me de... kumar Hi friend,
Step to validate a xml file against a DTD (Document Type Definition) using the DOM APIs.
Here is the xml file "User.xml
readXML - XML
readXML hiii i face problems in reading a xml file.. pixvalue1path1... and get error messages like... enter XML file name: ");
String xmlFile = buff.readLine();
File
tag name in xml
tag name in xml How to get tag name in xml?
<?xml version="1.0" encoding="UTF-8" ?>
<xsl:stylesheet
<xsl:variable name
XML,XML Tutorials,Online XML Tutorial,XML Help Tutorials
XML Tutorials
Transforming XML with SAX Filters
This Example shows you the way to Transform XML with SAXFilters. JAXP (Java
API for XML Processing) is an interface which provides
XML Parsers
the parser encounters a new XML
tag or encounters an error, or wants...
XML Parsers
XML parser is used to read, update, create and manipulate
an XML document.
Parsing
error log and send Database
error log and send Database hi
my requirement is validate xml and xsd in java.If there is an errors then i will log error and
store into error table. so plz if any one knows send code urgent.
error table details
Binding Error in Spring - Spring
Binding Error in Spring Error:
Neither BindingResult nor plain...;
My Maping in xml...
What could be the cause of this error?
Thanks
parsing XML file to get java object - XML
parsing XML file to get java object Hello,
I'm facing a problem in parsing XML file to get the java object.
I've tried to retrieve data from XML file using SAX parser.
my XML file structure is the following
XML Related Technologies: An overview
for formatting XML documents.
XSLT
(XSL Transformations) is used to transform XML...
XML Related Technologies: An overview
Below is a list of XML-related technologies.
DTD
Jboss related linkage error
Jboss related linkage error Please check below error when i run... "org.apache.axis.message.MessageElement.getChildElements
(Ljavax/xml/namespace/QName;)Ljava/util/Iterator;"
the class... (instance of )
for interface javax/xml/soap/SOAPElement have different Class objects
Use of <x:transform> tag of JSTL
;x:parse>
tag of Xml tag library of Jstl. This tag is used to transform the specified
xml document. With the use of this tag we can display data from...:import
<x:transform xml
j2me with xml - Java Beginners
and i want to read and display data using xml file. In j2me application KXML PARSER... Toolkit 2.5.2 ", it gave me following error.
1- Error preverifying class bin.EventList
2- Class loading error: Wrong name.... Build failed
i follow all
string manipulation in xml
files present in a folder.
while executing that .sql file in xml its giving me error as ""\" unexpected character"
solution is I have to chage every
XML Tutorials
;
Transforming XML with SAXFilters
This Example shows you how to Transform... to transform an XML File with XSL in a DOM document. JAXP (Java
API for XML Processing... shows you how to Transform XML with the XSLT in a DOM document. JAXP (Java
API
Error running webservice
Error running webservice Hi,
I am getting following error:
05/10 12:45:46 ERROR org.springframework.web.context.ContextLoader - Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error
XML Transformation in JSP
file to XSLT stylesheet. XML
transform tag performs transformation from XML file to XSLT file. Syntax for XML
transform tag is as below:
..., lname and age. These information is shown into
XML-XSL Transform JSP page
Java XML parser
Java XML parser Hi friends,
i am new to java XML parsing.
i need to parse the following xml response which i got from a url hit:
<response>..._THRESHOLD>100</LOW_SCORE_THRESHOLD> <MAX_ERROR_COUNT>4</MAX
XML
XML How i remove a tag from xml and update it in my xml
error
error while iam compiling iam getting expected error ignore http 500 error?
how to ignore http 500 error? In XML data parsing while connecting to many server through URL if one does not connect gives the error. Is there any process to ignore that URL connection and get data from other connection which
error
error i have 404 error in my program plz tell me yhe solution about
XML Interviews Question page18,xml Interviews Guide,xml Interviews
XML Interviews Question page18
How do I override a default XML namespace declaration?
To override the current default XML namespace, you simply declare another
Error in jdeveloper 10G
Error in jdeveloper 10G 500 Internal Server Error
javax.servlet.jsp.JspException: Exception creating bean of class view.AddressForm: {1...
<?xml version = '1.0' encoding = 'windows-1252'?>
<!DOCTYPE struts-config
XML Interviews Question page27,xml Interviews Guide,xml Interviews
XML Interviews Question page27
... technologies today. For example, in the following XML Schemas definition, the attribute... an XML document in canonical form and changes namespace prefixes in the process
error
error java.lang.unsupportedclassversionerror:bad major version at offset 6
how to solve this????
Hi,
Please check the version of framework used and also the JDK version.
This type error also comes when java file
error
error
import org.hibernate.Session error
proxool-0.8.3.jar
swarmcache-1.0rc2.jar
versioncheck.jar
xerces-2.6.2.jar
xml-apis.jar
Read
read xml using java
read xml using java <p>to read multiple attributes and elements from xml in an order....
ex :component name="csl"\layerinterfacefile="poo.c... element to b printed...
here is the xml code......................</p>
<
error!!!!!!!!!
error!!!!!!!!! st=con.createStatement();
int a=Integer.parseInt(txttrno.getText());
String b=txttname.getText();
String c=txtfrom.getText();
String d=txtto.getText
Use of Core XML tags in JSP
, forEach, if
x
Transformation
transform, param
x
Core XML...
Use of Core XML tags in JSP
... use of Core XML tag in JSP
JSTL (JSP standard tag library) XML tag can
Retrieving attribute value from XML
Retrieving attribute value from XML I have an XML as below to parse... was able to find the error. I was treating attribute as an element.
No more response...://
XML Interviews Question page26,xml Interviews Guide,xml Interviews
this as an error, or it can treat the document as one that does not use XML... application do when it encounters an error?
The XML namespaces...
XML Interviews Question page26
error
XMl
xml
XML Interviews Question page17,xml Interviews Guide,xml Interviews
error because the google prefix was never declared:
<?xml version="1.0...
XML Interviews Question page17
Where can I declare an XML namespace?
You can
XML Interviews Question page22,xml Interviews Guide,xml Interviews
XML namespaces are confusing to read and thus prone to error. They also allow...
XML Interviews Question page22
From other XML namespaces?
A: Yes and no.The
XML - XML
XML What is specific definatio of XML?
Tell us about marits and demarits of XML?
use of XML?
How we can use of XML? Hi,
XML... language much like HTML used to describe data. In XML, tags are not predefined
Produces XML file but format not correct for storing data using JSP and XML
Produces XML file but format not correct for storing data using JSP and XML hii
I have created a project using JSP and XML as database to store data entered by user in XML file ,It stores data entered in XML file
Java XML modification
Java XML modification
This section explain you how to modify the xml file..., DocumentBuilder and Document classes to get the default DOM parser and parse the xml... to generate the result. The transform() method of Transformer class takes
Validating XML document with a DTD
Validating XML document with a DTD
If an xml document is well formed i.e.... an XML document valid it must be validated,
or verified, against a DTD.
DTD..., attributes for elements etc. So your xml file must follow the rules
defined | http://www.roseindia.net/tutorialhelp/comment/88427 | CC-MAIN-2014-10 | refinedweb | 2,293 | 66.54 |
LiveCycle Mosaic ES2.5 has some very useful features for allowing users to customize their environment. Some of the most useful revolve around the view object and its related view context. With a bit of creative programming we can take advantage of the view object and allow users to create their own, customized design.
In this post I’ll look at a fairly common request for a customizable Mosaic app and see how it can be done while using many of the view features.
The source code used in this post can be found here.
First we need to build the four tiles that make up the bulk of the application. I won’t go too much into the general tile development, but instead I’ll concentrate on the “special” things I need to do to make the other requirements work.
Basically these are small Flex apps that happen to use the Mosaic API. In my case I built them as ModuleTile files since they are all built on the same version of the Flex SDK. Since this is a sample, I’m just going to read the data from a couple of local XML files using an HTTP Service. No need to get too fancy here.
- The
Requirement: When a user selects an item from the request list the other tiles will update with relevant request/customer information.
Mosaic has several methods to exchange data between tiles (see video for more), but there are a few requirements that lend themselves to using the view context for
The Request List tile will be broadcasting data to the other tiles. When a user clicks on an item from the data grid (id=”grid”) two data attributes will be set – the caseNumber and the customerName. These will be picked up by the other tiles.
To send view data to the other tiles in the current view you would use the parentView.context.setAttribute method. The first parameter is the attribute name, the second is its value. Here is the code from the Request List tile:
private function selectRecord():void{
var caseNumber:String = grid.selectedItem.caseNumber;
var customerName:String = grid.selectedItem.customer;
parentView.context.setAttribute(“caseNumber”,caseNumber);
parentView.context.setAttribute(“customerName”,customerName);
}
Receiving Data
The other three tiles need to receive the data and then do something with it. In this case they will get related XML data and put it on the screen.
To meet the requirements the tile must check for the data when it loads (thus meeting the requirement to open the last request). It must also watch for changes to the data so the tile is updated when the user selects a different request.
To get the value of a view data attribute I will use the parentView.context.getAttribute method. The only parameter is the attribute name and the result is the contents of the data.
Mosaic also has a listener that can watch for changes in a view context attribute. The parentView.context.addAttributeWatcher function works using the standard Flex event model in that it will fire a function when the attribute data changes. You can then use the getAttribute function to get the data.
The following code is from the Request Details tile, but the other two tiles use similar techniques for getting the view context data. The init function is called on the creationComplete event of the tile:
import mx.rpc.events.FaultEvent;
import mx.rpc.events.ResultEvent;
import mx.events.PropertyChangeEvent;
[Bindable] private var currentRequest:XML;
[Bindable] private var caseNum:String;
private function init():void{
parentView.context.addAttributeWatcher(“caseNumber”,onChange); //watch for changes to the data
onChange(); //check for data on load
}
private function onChange(event:PropertyChangeEvent=null):void{
caseNum = parentView.context.getAttribute(“caseNumber”); //get the data
if (caseNum != null) //if there is data, then do something with it
HttpService.send();
}
Adding Tiles to a View
Requirements: User’s can customize their app to include any or all of the tiles
User’s can open multiple copies of the work space to work on different requests.
This requirement implies that the application needs a way to allow the user to create a new view and to add available tile to that view as they want. Creating a new view is easy, as Mosaic has a built in button on the view skin to add a new view (the “+” button in the default skin).
Adding tiles to the view takes a bit more effort. Mosaic 9.5 doesn’t have an out of the box way to do this, but it does provide the APIs so it can be coded. I need somewhere to put this code and the user controls. It doesn’t make much sense to put it in any of the four tiles, as the user may not have add that tile to their view. For this example I’ll add another tile called Header (Header.mxml).
I’ll need something to hold a list of the available tiles. To be more flexible, I will also add the catalog name to that list as well. That will allow me to add tiles from other catalogs into the view later on. I created a simple class TileInfo to hold the tile and catalog name for each tile. Then I created an array collection of the TileInfo objects and I populate that array collection when the Header tile is initialized (creation complete):
private function init():void{
//set the list of tiles (and their catalogs) that a user can add
tileCollection.addItem(new TileInfo(“RequestList”,”ViewDemo_Catalog”));
tileMenuCollection.addItem(“Request List”);
tileCollection.addItem(new TileInfo(“RequestDetails”,”ViewDemo_Catalog”));
tileMenuCollection.addItem(“Request Details” );
tileCollection.addItem(new TileInfo(“CustomerDetails”,”ViewDemo_Catalog”));
tileMenuCollection.addItem(“Customer Details”);
tileCollection.addItem(new TileInfo( “CustomerHistory”,”ViewDemo_Catalog”));
tileMenuCollection.addItem(“Customer History”)
}
One of Header’s jobs will be to have a control that a user can use to add tiles. I used a simple drop down (bound to the array collection) with the tile names and a button for the user to add the selected tile.
When the user clicks on the Add Tile button a function executes that will add the tile. When the function fires the code must first do the following:
- Locate the proper catalog
- Locate the proper tile in the catalog
- Find the view and panel that the user is looking at
- Add the tile to the view
As part of step 3 we should make sure that there is both a view and panel into which we can put the tile. If either of these are missing, we can pull a pre-configured one out of the catalog. By creating a pre-configured view and panel template, the layout can be defined ahead of time. This will save a bit of coding. In this case I created a view template called addView and a panel template called addPanel in the catalog.
The button handler in the Header tile looks like:
protected function addTile_clickHandler(event:MouseEvent):void {
if (tileDropDown.selectedIndex != -1){
var tileInfo:TileInfo = tileCollection.getItemAt(tileDropDown.selectedIndex) as TileInfo ;
//get the tile
var cat:ICatalog = mosaicApp.getCatalog(tileInfo.catalogNm);
var tileToAdd:ITile = cat.getTile(tileInfo.tileNm);
var view:IView = this.currentView();
var panel:IPanel = this.currentPanel(view);
panel.addTile(tileToAdd);
}
}
This calls two functions to find (or add) the current view (currentView) and panel (currentPanel). To determine if a view or panel is the one that the user is using, I check each view/panel for the displayed flag (true means that it is the current one).
The following is the function for determining the current view. The panel function is similar (you can see it in the source code).
/**
* find the current view. If not found, then load one from the catalog
**/
protected function currentView():IView{
var currentView:IView;
//find the current view by looking at the displayed flag in each view
var viewArray:Array = mosaicApp.views;
for each (var searchView:IView in viewArray){
if (searchView.displayed){
//found view
currentView = searchView;
break;
}
}
//not found, so go into the catalog and get a default view
if (currentView == null){
var catalog:ICatalog = mosaicApp.getCatalog(“ViewDemo_Catalog”);
currentView = catalog.getView(“addView”);
mosaicApp.addView(currentView);
currentView.display();
}
return currentView;
}
Saving and Loading Views
Requirements:.
The view skin has a control that allows users to save a view at any time:
. Views can also be saved using the IView.save API. Views are saved on the server and are tagged to the user’s account. In this case I’ll let the user save the view using the built in controls.
Views can be loaded into an application either by using the organizer (added to the application’s xml file using the organizer element), or by using the API. The organizer is an element added to the application’s xml definition. It shows a list of the user’s saved Views and allows a user to add the view to the application at any time.
The requirement, however, state that the app should load the user’s view and if that does not exist, load the default view. This will require loading the correct view using the API. To add a view using the API you first have to find the view in the mosaicApp.userViews array, then add it to the application using the mosaicApp.addView method. I could put the default view, with its panels and tiles, directly into the application’s XML file. The problem with that is the default view will always load because Mosaic loads the application contents when the user accesses the app. There will be no way to intercept it and show the user’s saved view. In this case its better to open the view (default or saved) using the API.
This means that the application’s XML file will not have any of the four tiles. It will have the Header tile, which will contain the API calls to load the proper view.
The application xml file looks like:
<?xml version=”1.0″ encoding=”UTF-8″?>
<app:Application name=”ITRequests” label=”IT Requests”
xmlns:view=””
xmlns:catalog=””
xmlns:tile=””
xmlns:crx=””
xmlns:app=””
xmlns:xsi=””
xsi:schemaLocation=””>
<crx:Metadata>
<crx:Description>ITRequests</crx:Description>
</crx:Metadata>
<app:Shell name=”ITRequests” label=”IT Requests”>
<catalog:CatalogReference name=”cat” uri=”ViewDemo_Catalog”/>
<view:Organizer visible=”false”/>
<tile:TileReference catalog=”cat” name=”Header” label=”Header” width=”100%” height=”80″/>
<view:ViewManager width=”100%” height=”100%”>
</view:ViewManager>
</app:Shell>
</app:Application>
Notice that I did add a ViewManager, this is necessary because Mosaic needs something to control the views that I will add later.
Now I need to add some code to the Header tile so it can load the proper view. I’ll need to check to see if there are any saved views for the user. If there are, then I add them to the application. If not; then I’ll call another function to load the default view.
/**
* check to see if the user has any saved views. If so
* bring them up. If not, bring up the default
*/
protected function getUserViews():void{
var userViews:Array = mosaicApp.userViews;
for each (var view:IView in userViews ){
mosaicApp.addView(view);
showDefaultBtn.visible = true;
}
if (userViews.length == 0){
showDefaultView();
}
}
/**
* show the default view
*/
protected function showDefaultView():void{
var catalog:ICatalog = mosaicApp.getCatalog(“ViewDemo_Catalog”);
var defaultView:IView = catalog.getView(“defaultView”); //load a view template called defaultView from the catalog
mosaicApp.addView(defaultView);
showDefaultBtn.visible = false;
}
To make sure this happens when the app first gets loaded, I’ll add a call to the getUserViews(); function to the Header’s init function.
I could have easily done this in one function, but I wanted user’s that have a saved view to “reload” the default one. To do that I added a button (showDefaultBtn) that will fire the showDefaultView function. If the default view is there, I want the button to be invisible.
Conclusion
By taking advantage of the Mosaic view features – inter-tile communication, view layout saving, view data saving and view related APIs – you can build a highly customizable application. This will allow your users to have the freedom to set the system up the way they want.
The source code used in this post can be found here. | https://blogs.adobe.com/steampowered/2011/02/view-master.html | CC-MAIN-2015-32 | refinedweb | 2,028 | 55.84 |
Create a class called EvenCount that contains a method called count that takes an integer as an argument and returns an integer (this method should not be declared static). The count method should use a recursive approach to find the amount of even digits in the integer passed into the method, and return this number. For example, if the count method was called with the input value 783312 as an argument it should return 2 (there are two even numbers in the input value). Hint: in Java a single digit that is even will produce a result of 0 when the remainder operator (the percent sign) is used to find its remainder when divided by two. For example 2 % 2 and 6 % 2 will return 0, whereas 3 % 2 and 7 % 2 will produce a result of 1.
package mocktest; public class EvenCount { public String count(String x) { if (x < 10 && x % 2 == 0) return 1; else if (x < 10 && % 2 !0) return 0; else if ((x >=10 && (x / 10 ) % 2 == 0) return 1 + count(x/10) + 1; else return count (x/10); } public static void main (String args[]); { EvenCount c = new EvenCount (); System.out.println(x.count(73221)); } }
Why do i have so many errors? | http://www.javaprogrammingforums.com/whats-wrong-my-code/17315-code-evencount.html | CC-MAIN-2014-10 | refinedweb | 206 | 63.43 |
Technical Article iRules 101 - #08 - Classes Updated 22-Jun-2017•Originally posted on 10-Dec-2007 by Colin Walker 3810 article 101 application delivery basic big-ip dev devops getting started irules news This article has been re-released as part of the new Intermediate iRules series in the Data-Groups articles. When dealing with iRules there is sometimes a need to store static information in lists that you can search when your iRule is executed. Are you looking to check every incoming connection for a certain list of Client IPs? Perhaps you want to parse the incoming URI and direct to different pools based on what URI parts are found. To perform inspections/actions like this you need to have a defined list of data to search for, and that list needs to remain constant across multiple connections. This is exactly what classes are designed We'll be going over some of the common questions that seem to crop up when talking to/with people about classes in iRules. Hopefully by the time we're through here you'll have a clear understanding of what classes are, how you can use them, and perhaps even when/why you would. ;) What is the difference between a "class" and a "Data Group" when dealing with F5 systems? Nothing! These terms are interchangeable, which can sometimes throw people off. They are referred as "Data Groups" via the GUI, and "class"(es) via the configuration file. This can be a bit confusing, but I assure you they really mean the same thing. For the rest of this document, however, I will refer to them as classes. Are there different types of classes? Yes. There are four kinds of classes that you can choose to make use of via iRules. Each of these, as you might imagine, can serve different purposes: String - The "string" type class is the most basic and general type of class provided for your use. This is the type of class that will likely be used most often as it allows you to store any type of data in string format to be used later by your iRules to perform tasks like the URI substitution we spoke about above. Address - Address Classes allow you to store IP addresses and/or address ranges to be searched via matchclass or findclass which we'll talk about more later. This can be very useful when trying to search for multiple IP addresses that happen to be within a network range and can save a fair amount of hassle over adding each IP individually to, say, a string class. Integer - Allowing you to store integer values for quick referencing and comparison, the integer class type can be useful and efficient when dealing with this specific type of data. External File - This unique class type actually allows you to store your class information in an external file, as opposed to the bigip.conf with the rest of your iRules config data. This can be beneficial for administration clarity and automation. How do I create a class? Like most things you create in your F5 device configurations, there are a few main ways you can create classes for your iRule. You can create them via the GUI, CLI, or in this case, via the iRule Editor as well. GUI - To see/create a class via the GUI, navigate to Local Traffic -> iRules -> Data Group List. Here you can see your current classes to edit them, or create a new one to use. CLI - Via the bigpipe class command and the permutations therin, you can add, modify and delete the classes on your BIG-IP as desired. To learn more about this type bigpipe class help from the command line of your system. iRule Editor - If you happen to have the handy iRule editor installed (available on DevCentral - Here) you can create and manage your classes directly from the editor while writing/modifying your iRules. Just go to the "Tools" menu, and select "Data Group Editor". Here you'll be able to add, remove or modify classes as needed. How can I search through classes? The two main ways to search through a class are with the matchclass and findclass commands. These commands have similar syntax and functionality, but accomplish different tasks. matchclass - The matchclass command searches a data group list for a member that matches exactly a specified search parameter and returns a true/false value (0/1) indicating the success of the match. This can be very useful when building logic checks, such as: when HTTP_REQUEST { if { matchclass [HTTP::uri] equals $::uri_list } { ... } } findclass - The findclass command searches a data group list for a member that starts with a specified search parameter and returns the matching class member. This is similar to the matchclass command, except that the member is not required to be equal; instead, the member is only required to start with the string and the command returns the entire member value. Also, this command can be used to return a matching portion of a class member. For instance, if your class member looks like "192.168.5.42 pool1", you can use the findclass command to return the second portion of the class member, after the space separator, thereby making findclass very useful for matching key/value pairs in your iRule class. It would look something like: when HTTP_REQUEST { if { matchclass [HTTP::uri] starts_with $::uri_list } { set myPool [findclass [HTTP::uri] starts_with $::uri_list " "] pool $myPool ... } } Can I modify a class real-time with my iRule? Technically, yes. Once the configuration is loaded into memory, you can technically modify a class with TCL's list commands. Doing so, however, not only converts the data in the class from an efficient, hashed format into a simple list format, thereby slowing down queries; but the changes made are also not permanent, as they cannot be written back to the file that stores the class data. This makes the changes effective only until TMM is restarted. In general, there is usually another way of structuring your code to avoid this that would be preferred. Get the Flash Player to see this player. Average Rating: 4.4 Log in to rate this content Print Download Share Comments on this Article Comment made 22-Jan-2008 by Nick Lawes 0 If iControl is used to update the contents of a class, are the changes apparent immediately within the iRule, or is some action required to reload the data? 0 Comment made 21-Jul-2008 by hoolio 2473 matchclass will return the element number of the first match, which can be useful.# Create a sample list (same concept as a class for testing matchclass)set ::test_list [list {one} {two} {three}]# Log the matchclass output for the first element which starts with "t"log local0. "matched element #[matchclass $::test_list starts_with "t"]"Output:Rule : matched element #2Also, matchclass needs to be wrapped in []'s to execute it. So this: if { matchclass [HTTP::uri] equals $::uri_list } {Should be: if { [matchclass [HTTP::uri] equals $::uri_list] } {--Aaron 0 Comment made 17-Dec-2008 by bduncan 0 As far as I know (according to the docs), findclass does not have operators. So the example with findclass using the starts_with operator is wrong I think. 0 Comment made 26-Feb-2009 by brad 373 I'm gathering that a class can't be defined to be a set of two other classes. I have a need to have a class that contains everything that already is defined in another class plus a set of additional entries. Any ideas?! 0 Comment made 30-Apr-2010 by tarsier 5 What is the size limit for classes? I have been loading images in classes for use in maintenance pages, but just recently discovered that there is apparently a limit around 100k. The class loads fine, but only part of the image/class is returned when accessed. 0 Comment made 09-May-2011 by hoolio 2473 An update for v10 would be great :) 0 Comment made 14-Sep-2011 by Charles Roth 0 The explanation of matchclass could use more help, e.g. a syntax definition something like:matchclass item condition classwhere "condition" can be one of... well, I don't know. Are these hard-coded to matchclass, or are they essentially pass-by-reference function names, in which case any dyadic TCL operator would work? There's no way to even guess from this documentation.And the paragraphs that say things like "the matchclass command" are rather confusing... it should be "the matchclass example below..." or some such. Matchclass by itself is not tied to a specific condition.In general, the info in these lessons is great, but some of the writing is C+. I hope these comments prove useful. 0 Comment made 14-Sep-2011 by Charles Roth 0 P.S. I thought I posted this earlier, but it may have gotten lost. I think the line: set myPool [findclass ]HTTP::uri[ starts_with $::uri_list " "]is supposed to read: set myPool [findclass [HTTP::uri] starts_with $::uri_list " "] 0 Comment made 21-Oct-2011 by shawno 4 does the use of a datagroup/class preclude the use of cmp? 0 Comment made 15-Nov-2011 by Colin Walker 3810 No, if you reference your classes appropriately, you can absolutely use data groups and have it be CMP compatible. #Colin 0 Comment made 16-Nov-2011 by shawno 4 Are they referenced inappropriately in these examples? The solution articles says CMP is disabled if: "An iRule which refers to a Data Group List (class) using the $:: global variable prefix"Are there non-global variable prefixes that are more appropriate? 0 Comment made 16-Nov-2011 by Jason Rahm F5 this article was written pre-CMP. When CMP was introduced, datagroups could be reference without the leading $:: so no pinning occurred. If you need a global variable and still want the benefits of CMP, beginning in v10 the static namespace was introduced. So using $static::myvar allows for global usage without demotion from CMP. 0 Comment made 16-Nov-2011 by hoolio 2473 For more info on CMP (and datagroups), you can check this wiki page: 0 Comment made 17-Nov-2011 by shawno 4 TL;DR: "In 9.4.4 and higher, when referencing the class with the findclass or matchclass commands you should not use :: or $:: prefix" -1 Comment made 28-Jan-2012 by Neo.Moon 0 Here are two examples for not using "$:: or :: prefix" using class command instead of using depreciated findclass and matchclass commands. The class command deprecates the findclass and matchclass commands as it offers better functionality and performance than the older commands.Example 1)matchclass [HTTP::uri] contains $::cache_listcan be changed to:class match [HTTP::uri] contains cache_listif { [HTTP::path] starts_with $::pusus_pass_uri } { ASM::disable}can be changed to: if { [class match [HTTP::path] starts_with pusus_pass_uri] } { ASM::disable} 0 Comment made 01-Feb-2012 by hoolio 2473 The searching through data groups section could be replaced by these related articles:v10 - data groups: iRules Data Group Updates the class command: 0 Comment made 05-Aug-2012 by Adrian 0 I do have simple iRules for some sites, redirecting the traffic to different types of Farms, I do need to run the same instruction on a VIP using HTTP Class.Could you please help me on this? 0 | https://devcentral.f5.com/articles/irules-101-08-classes | CC-MAIN-2018-13 | refinedweb | 1,888 | 59.43 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.